Background
The core of Quick Base is a large Microsoft Visual C++ project (technically it’s a solution with multiple projects). Our automated build/deploy/test/upload-artifact cycle took 90 minutes. Not bad for a 20-year-old code base, right? Nah, we can do better. We can do it in 2 minutes!
At least, that’s how we pitched it. You can imagine the response. Besides the obvious “that’s impossible!” sentiments, we were asked “Why? It takes us longer than that to fully QA the resulting artifact, so what’s the point?” And so, our journey began.
If there’s one thing I’ve learned over the years, it’s that the hard part isn’t the technology; it’s the human equation. Here, we’d lived with a very slow build for years, because the belief was that we’d deprecate the old stack in favor of the re-architecture already in progress. Once we’d re-focused our efforts on iterating on the existing stack, we knew things had to change. We were operating using Agile methodologies (both Scrum and Kanban) but the tooling wasn’t properly supporting us. A few engineers close to the build knew there was low-hanging fruit; what better way to demonstrate “yes, we can!” and build excitement than to make significant progress with relatively little effort?
Organizationally, we were now better suited to support these kinds of improvements. We have a Site Reliability Engineering team that consists of both Ops and Dev. Together, we started to break down the problem, deconstructing the long Jenkins job into this diagram:
Now we knew where to focus for the biggest gains.
Our First Big Win
The “Tools Nexus Deploy” step was literally just Maven uploading a 250-MB zip file from servers in our Cambridge, MA office to our Nexus server in AWS (Oregon). It definitely shouldn’t take that long; we have a very fat Internet pipe in the office. We ran packet traces with Wireshark and other network tests to try to determine the cause, but didn’t uncover anything.
So, let’s break down the problem and isolate the issue. Is the network in the office OK? AWS? Is the Nexus server slow? Here’s some of what we did (there’s a sketch of the timing approach after this list):
- Download data directly from Nexus using wget (remove Maven from the equation)
- Upload directly to Nexus using wget (ditto)
- Do the above from the office servers (is it the server network?)
- Do the above from office workstations (is it the entire Cambridge network?)
- Do the above from EC2 instances in AWS (Oregon) (remove Cambridge from the equation)
- Try a (much) newer version of Windows that hasn’t been hardened (in case the older, hardened image had issues with TCP windowing or lacked other high-latency improvements)
- Do the above from Linux instead of Windows (remove Windows from the equation)
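We did the raw transfers with wget, but the same kind of check is easy to sketch in Java if you want to stay on the JVM while still cutting Maven out of the loop. Everything below is a placeholder rather than our real setup: the Nexus URL, repository name, and the NEXUS_BASIC_AUTH environment variable are made up for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;

// Times a raw HTTP PUT of one artifact to a hosted Nexus repository,
// with Maven taken out of the equation. URL, repo, and auth are placeholders.
public class UploadTimer {
    public static void main(String[] args) throws Exception {
        Path artifact = Path.of(args[0]); // e.g. the ~250 MB build zip
        String target = "https://nexus.example.com/repository/upload-test/"
                + artifact.getFileName();

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create(target))
                .header("Authorization", "Basic " + System.getenv("NEXUS_BASIC_AUTH"))
                .PUT(HttpRequest.BodyPublishers.ofFile(artifact))
                .build();

        long start = System.nanoTime();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        double mb = Files.size(artifact) / (1024.0 * 1024.0);
        System.out.printf("HTTP %d: %.0f MB in %d ms (%.1f MB/s)%n",
                response.statusCode(), mb, elapsedMs, mb / (elapsedMs / 1000.0));
    }
}
```

Running the same measurement from each combination of host, OS, and network is what narrows down which layer is actually slow.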
When we switched from Windows to Linux, we stood back in disbelief: the upload now took 90 seconds instead of 22 minutes. Maven on Windows, it turns out, has extremely poor network performance. As a temporary fix, we split the upload out of the build job into a separate job tied to the Jenkins master node (which runs Linux), so that step runs Maven on Linux.
Our Second Big Win
The next thing we tackled was the “PD CI-Test” group. These are TestNG Java tests that exercise the Quick Base API. We found one simple area to improve: load test data with a single bulk import instead of per-record inserts. Since this was in setup code that runs over and over, the several-second difference added up to … drum roll … 18 minutes!
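As a rough sketch of that change (the QuickBaseClient interface and its addRecord/importCsv methods are hypothetical stand-ins for our internal API wrappers, and the table and row values are made up), the before/after in a TestNG setup method looks something like this:

```java
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.BeforeClass;

// Hypothetical stand-in for our internal wrapper around the Quick Base API.
interface QuickBaseClient {
    void addRecord(String tableId, String csvRow); // one HTTP round trip per record
    void importCsv(String tableId, String csv);    // one HTTP round trip for all rows
}

public class ReportTestBase {
    protected QuickBaseClient qb; // constructed/injected elsewhere in the suite

    @BeforeClass
    public void seedTestData() {
        // Before: one insert (and one round trip) per record. At a few seconds
        // apiece, hundreds of records made setup dominate each test class.
        //
        //   for (int i = 0; i < 500; i++) {
        //       qb.addRecord("test-table", "record-" + i + "," + i);
        //   }

        // After: build the same rows locally and push them in one bulk import,
        // so setup costs a single round trip instead of hundreds.
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            rows.add("record-" + i + "," + i);
        }
        qb.importCsv("test-table", String.join("\n", rows));
    }
}
```

The same pattern applies anywhere a test fixture loops over a remote API: batch the writes and pay the network latency once.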
Number Three
There was still plenty of room for improvement in the “PD CI-Test” group, and we found one more quick win. After encountering the Maven slowness, we started to question the speed of Ant on Windows. The server sat at only 20% CPU while the tests ran, so we suspected something wasn’t going as fast as it could. Switching from Ant to Gradle for invoking the tests saved us another 12 minutes!
Assessing Where We Are Now
In 2 months, our diagram looked like this:
You can bet that was exciting! Now we had momentum, and people believed it could be done.
We’ve continued to make further improvements, such as moving from the aging hardware in the Cambridge server room to AWS using the Jenkins EC2 plugin, and then taking advantage of the C5 instance types (which boot our Windows AMI in 4 minutes instead of 10) by building our own version of the plugin and submitting a PR for it here. Build times currently average 26 minutes, and items on the roadmap (including moving to Jenkins Pipeline so we can easily take advantage of parallelism) should get us closer to 15. Beyond that, we run into limitations of the MSVC++ linker, which does a few things single-threaded; one of our projects is quite large and produces a single binary. The next step there is to break that project up (e.g., into libraries). That will take more effort, so we’ve left it for last.
Will we ever get to 2 minutes? Who’s to say? The purpose of setting the goal that low was to fire up people’s imaginations. And it has.