Posts by Ben

I am a Principal Platform Engineer with over 20 years of experience in the field. I joined Quick Base in 2006 as the sole Operations Engineer and am currently a member of the Site Reliability Engineering team.

Death by 30,000 Files

One of my colleagues humorously referred to this post as “how to delete a thousand files.”  We all had a good laugh.  And it only served to highlight how something that seems so simple is complex enough to deserve a detailed post about it!  Fear of the unknown is a powerful motivator.  Let’s break through that together …

Who Cares?

Our largest build artifact was 256 MB and 30,000 files.  Nothing in today’s world.  No big deal, right?  Sure, without context … but let’s put some color on this black & white picture:  20,000 – 66% – of those files were cruft. That eats up small amounts of time in lots of ways that really add up:

  • Cloning the Git repo and checking out the branch (CI builds)
  • Zipping the files to create the artifact
  • Unzipping the files to deploy
  • Uploading / downloading the artifact to/from Nexus
  • At 30+ builds a day, that’s 218 GB per month to store in Nexus (256 MB * 30 builds * ~29 days)

And even if you ignore all of that, reducing the cognitive load of your project will increase the velocity and quality of your teams.

Cheaply Determining if You Have a Problem

So, how did we know we had 20,000 crufty files without authorizing someone to do the work to investigate?  In our case, it was a combination of several of the following tactics, any of which you can use:

  • Someone who’s been around a long time and has a good gut feel
  • Whitespace programs
  • Hackathons
  • “0” point research spikes (it took nearly no effort to do the Splunk query below to find how many files are in use and compare against how many files are on disk)
  • No one knows what half of the files are

In our case, the www directory contained the majority of the files in the artifact. That meant we could look at our IIS access logs to see what was actually in use and trim out the rest.

Safely Analyzing Mountains of Data

When I solve these types of problems, I find I work best with grep, sed, awk, and xargs instead of writing, testing, and debugging a larger script that covers all cases.  It was also key to have Splunk at my side — I needed to boil 60 days’ worth of access logs (billions of lines and hundreds of gigabytes of data) down to something manageable.

I let Splunk do the first-step heavy lifting. We have a Splunk index for our IIS logs which automatically extracts the fields so I can query them. In the search below, I select that index, use the W3SVC1 logs (the main website), filter for GET (other verbs like OPTIONS were causing false positives), filter for successful HTTP status codes (I especially don’t care about 404s), and then remove any irrelevant paths.  I group by cs_uri_stem (after forcing everything to lower-case to prevent dupes based on case differences), which gives me a list of active files and how many hits each one received.

 index=qb-prod-iis (source="D:\\IISLogs\\W3SVC1\\*" AND cs_method="GET" AND sc_status IN (20*, 304) AND NOT cs_uri_stem IN ("/foo/*", "/bar/*")) | eval cs_uri_stem=lower(cs_uri_stem) | stats count by cs_uri_stem | sort count 

I downloaded those results to a CSV file that was all of 320 KB covering 60 days’ worth of logs.  That file had lines that looked like:

"/css/themes/classic/images/icons/icon_eye_gray.png",8918397

I wrote a little utility script, dodiff.sh, to help me use that file to:

  1. Determine which files were actively being accessed
  2. List the files in Git that were not accessed

#!/bin/bash

# In this script we convert everything to lower-case to simplify things.
# We'll later need to convert back to the original case in order to
# remove dead files from Git

# Get latest Splunk export file
CSV=`ls -t ~/Downloads/15*.csv | head -1`

# Our CDN assets are placed in /res by the build system as duplicates
# of files from elsewhere in Git (they are not checked in).  This
# requires us to fold away the CDN path to get at the real file.
# For example:
#   /res/xxx/css/foo.css is the same as /css/foo.css
# Here, get a unique list of all non-CDN assets that were accessed
grep -v /res/ $CSV | awk -F, '{print $1}' | sed 's/"//g'|tr '[:upper:]' '[:lower:]'|sort|uniq > /tmp/files.1

# Now, Get a unique list of all CDN assets that were accessed and
# trim out the /res/xxx path
egrep -E "/res/[a-z,0-9]+-" $CSV | cut -c 19- | awk -F, '{print $1}' | sed 's/"//' | tr '[:upper:]' '[:lower:]' | sort | uniq > /tmp/files.2

# Get the union of the two file sets above.  This is our list of
# active files
cat /tmp/files.1 /tmp/files.2 | sort | uniq > /tmp/files.active

# Get a list of all files in Git
pushd ~/git/QuickBase/www
find . -type f | sed 's/^\.//' | tr '[:upper:]' '[:lower:]' | sort > /tmp/files.all
popd

# Finally, diff the active file list with what's in Git
# Files that are being accessed but not in Git will start with "<";
# files in Git but not being accessed will start with ">"
diff /tmp/files.active /tmp/files.all

This was a large list; out of the ~15,000 files in www checked into Git, only ~1,500 were in use.  The list itself was a cognitive load problem!  I also didn’t want my GitHub PRs to be so large that they didn’t load or were impossible to review.  I categorized the list into about 10 parts and created a JIRA story for each.  Using grep, I could execute on various pieces of the list.  For example, this would give me the files in /i/ that started with the letters a through g:

./dodiff.sh | grep '> /i/' | sed 's#> /i/##' | egrep -e '^[a-g].*' > /tmp/foo

I would use the following to double-check that I’m not removing something important.  It uses the contents of /tmp/foo (one file per line) by transforming it into a single line regular expression.  So if the file contained

one
two
three

The result of the expression inside the backticks (in the code below) would be

one|two|three

egrep -r `cat /tmp/foo | xargs | sed 's/ /|/g'` .

Removing the Files

When I was ready to start removing files, I needed to convert the file case back to what’s on disk, so I used the power of xargs to take the original list and run grep (once per file) to find the original entry in /tmp/files.all:

cat /tmp/foo | xargs -xn1 -I % egrep -Ei "^%$" /tmp/files.all > /tmp/foo2

And now I can use xargs again to automate the git command:

git rm `cat /tmp/foo2 | xargs`

Results

The artifact has gone from 256 MB to 174 MB but more importantly it’s gone from 29,000 files to 12,600.  This means:

  • Downloads are 32% faster (previous baselines vary by location but were anywhere from 20 to 120 seconds)
  • Unzips are 50% faster (baselines also vary but were 25 seconds in prod and are now 12 seconds)
  • The CI build is 1 minute faster (zip/unzip, Git checkout, and Nexus upload speedups)
  • Our local workspaces have gone from ~32,500 files to ~21,000 files
  • The source is back to being browsable in GitHub because it can now do complete directory listings

Impact

There are intangible benefits such as cognitive load, attack surface, reduced complexity, and simplifying future projects where we break apart this old monolith.  But there are hard numbers, too!

For the purposes of this exercise, assume $200k annually for a fully-loaded SWE, which is about $96/hour (200000 / 52 / 40).

Using $96/hour, the average team saves 60 person-minutes/day (6 people * 5 builds per day per team * (1 minute per build + 1 minute for download & install)), or about $2,000/mo ($24k annually).
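
If you want to sanity-check that math, here’s a quick back-of-the-envelope version in shell (my numbers, assuming roughly 21 working days per month):

# Back-of-the-envelope check of the savings math (~21 working days/month assumed)
people=6; builds_per_day=5; minutes_saved_per_build=2   # 1 min build + 1 min download/install
rate_per_hour=96                                        # 200000 / 52 / 40
minutes_per_day=$((people * builds_per_day * minutes_saved_per_build))    # 60
echo "monthly: \$$((minutes_per_day * rate_per_hour * 21 / 60))"          # ~$2,016
echo "annual:  \$$((minutes_per_day * rate_per_hour * 21 * 12 / 60))"     # ~$24k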

How Does This Help You?

Before I answer that, I’ll backfill a little more history.  Cleaning up the www directory has been something we’ve talked about for many years.  And everyone was literally afraid to do it.  I always kept saying I’ll just do it, or help someone do it.  But it was never a “funded” project.

One of my colleagues and fellow blog authors, Ashish, decided (in concert with others, including our chief architect) to track all the things rattling around in our heads as new JIRA stories in a “tech debt” backlog project so we could really start to see what’s out there.  This was one item on that list.  Having it as a real item in JIRA was the first step towards making it possible.

After I’d completed one of my stories, I was able to take that now-tangible tech debt story and break it down into something manageable.  After spending maybe an hour, I was able to quantify the problem (15,000 files, only 1,500 of which were in use) which lent credence to the effort.  As I mentioned, I broke it down into 10 stories, estimated them, and was able to show that this was an achievable goal.

No one, and I mean no one, is unhappy that I spent time doing this.  We often spend a lot of time not even trying to do what’s right because we believe we’re not authorized.  It’s crucial that we, as engineers, work closely with the rest of the business to collaborate, build relationships, and increase trust so that we can have the open conversations that lead to working on stories that are difficult to tie back to customer value.  But remember, there’s a lot of customer value in being efficient!

My advice:

  • Just spend a little time quantifying a problem and its solution so you can have an informed discussion with your team about working on it
  • If you’re passionate about something, use that to your advantage!  Instead of complaining about a problem, use your passion to solve the heck out of it!  People will thank you for it.
  • Break the problem down into manageable units.
    • Get those units into the backlog as stories and talk about them during your grooming sessions.  Make them real.
  • Don’t boil the ocean; keep the problem clear and defined
  • Use the 80/20 rule.  I didn’t clean everything up.  I cleaned up the easy stuff.  And that still has incredible impact.
  • Find allies / supporters on your team and other teams.  They increase your strength.
  • Use the right tools for the job.  It would have been easy for me to build a long, complex script but in the end, it’s not necessary.  It would have taken longer and would have been more error prone.  Tackling the problem in groups helped me stay focused and make progress in chunks.

Invisible Outages

The Cloud can sound so fancy and wonderful especially if you haven’t worked with it much.  It may even sound like a Unicorn.  You may question whether auto-scaling and auto-healing really works.  I’ve been there.  And now I reside in the promised land.  Here, I share a story with you to make it more tangible and real for those of you who haven’t done this yet.

About 18 months ago, Quick Base launched a Webhooks feature.  At that time, we had no cloud infrastructure to speak of.  Everything in production was running as a monolithic platform in a set of dedicated hosting facilities.  This feature gave us a clear opportunity to build it in a way that utilized “The Cloud.”  At that time, we’d learned enough about doing things in AWS to know we wanted to use a “bakery” model (where everything needed to run the application is embedded in the AMI) instead of a “frying” model (where the code and its dependencies are pulled down into a generic AMI at boot-time).  We’d seen that the frying model relied too heavily on external services during the boot phase and thus was unreliable and slow.

Combining the power of Jenkins, Chef, Packer, Nexus, and various AWS services, we put together our first fully-automated build-test-deploy pipeline.

[Diagram: our bakery build pipeline, orchestrated by Jenkins]

The diagram above is a simplified version of our bakery, as orchestrated by Jenkins.  Gradle is responsible for building, testing, and publishing the artifact (to Nexus) that comprises the service.  Packer, Chef, and AWS are combined to place that artifact into an OS image (AMI) that will boot that version of the service when launched.  That enabled us to deploy immutable infrastructure that was built entirely from code — 100% automated.  Servers are treated as cattle, not pets.  This buys us:

  • Traceability: since all changes must be done as code, we know who made them, when they were made, who reviewed them, that they were tested, and when the change was deployed (huge benefits to root cause analysis)
  • Predictability: the server always works as expected no matter what environment it’s in.  We no longer worry about cruft or manual, undocumented changes
  • Reliability: recovering from failures isn’t just easy, it’s automatic
  • Scalability: simply increase the server count or size.  No one needs to go build a server
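
To make the bake step a little more concrete, here’s a hedged sketch of what a Packer invocation in this kind of pipeline can look like.  The template name and variables below are illustrative placeholders, not our actual configuration: Packer boots a temporary EC2 instance from a base AMI, a Chef provisioner installs the artifact pulled from Nexus, and the result is snapshotted as a new AMI.

# Illustrative only -- the template and variable names are hypothetical
packer build \
  -var "artifact_version=1.2.3" \
  -var "base_ami=ami-xxxxxxxx" \
  webhooks-bake.json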

Several months after the launch, the Webhooks servers in AWS began experiencing very high load and we weren’t sure why – there was nothing obvious like a spike in traffic causing it.  This high load caused servers to get slower over time (the kind of behavior usually attributed to a memory leak, fragmentation, or garbage collection issues).  Left unchecked, the servers would become too slow to process Webhooks for customers.

This is where the real win happened: when a server got too slow, its health checks began failing, which caused it to be automatically replaced with a new one.  This happened hundreds of times over the course of several days – with zero customer-facing impact.  If this had been deployed in the traditional manner, we would have had numerous outages, late nights, and tired Operations and Development staff.  This was our first personal proof that “The Cloud” allows us to architect our services in a way that makes them more resilient and self-healing.
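
The mechanism behind this is nothing exotic.  With an Auto Scaling group fronted by a load balancer, it boils down to a setting along these lines (a rough sketch, not our actual configuration; the group name is hypothetical):

# Hypothetical sketch: tell the Auto Scaling group to trust the load balancer's
# health check, so an instance that fails it is terminated and automatically
# replaced with a freshly booted one.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name webhooks-asg \
    --health-check-type ELB \
    --health-check-grace-period 300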

When the code was repaired, it was automatically built, tested, and deployed with zero human intervention.  This is known as Continuous Delivery (CD).  It was just a few minutes between code change and production.  We were able to solve the problem without being under searing pressure (which causes mistakes) and without any pomp and circumstance.

The nerdy part of me was thrilled to see this in action and the not-so-nerdy part of me was thrilled that we literally suffered an event that was invisible to our customers.

2 Minute Builds

Background

The core of Quick Base is a large Microsoft Visual C++ project (technically it’s a solution with multiple projects).  Our build/deploy/test/upload-artifact cycle was 90 minutes.  It was automated.  Not bad for a 20-year-old code base, right?  Nah, we can do better.  We can do it in 2 minutes!

At least, that’s how we pitched it.  You can imagine the response.  Besides the obvious “that’s impossible!” sentiments, we were asked “Why?  It takes us longer than that to fully QA the resulting artifact, so what’s the point?”  And so, our journey began.

If there’s one thing I’ve learned over the years, it’s that the hard part isn’t the technology, it’s the human equation.  Here, we’d lived with a very slow build for years (because the belief was that we’d deprecate the old stack in favor of the re-architecture that was in progress).  Once we’d re-focused our efforts on iterating on the existing stack, we knew things had to change.  We were operating using Agile methodologies (both Scrum and Kanban are in use) but the tools weren’t properly supporting us.  A few engineers close to the build knew there was low-hanging fruit; what better way to demonstrate “yes, we can!” and gather excitement than to make significant progress with relatively little effort?

Organizationally, we were now better-suited to support these kinds of improvements.  We have a Site Reliability Engineering team that consists of both Ops and Dev.  Together, we started to break down the problem.  We deconstructed the long Jenkins job into this diagram:

[Diagram: the long Jenkins build job broken down into its stages]

Now we knew where to focus for the biggest gains.

Our First Big Win

The “Tools Nexus Deploy” step was literally just Maven uploading a 250-MB zip file from servers in our Cambridge, MA office to our Nexus server in AWS (Oregon).  It definitely shouldn’t take that long to upload; we have a very fat Internet pipe in the office.  We did packet traces using Wireshark and other network tests to try to determine the cause.  We didn’t uncover anything.

So, let’s break down the problem and isolate the issue.  Is the network in the office OK?  AWS?  Is the Nexus server slow?  Here’s some of what we did:

  • Download data directly from Nexus using wget (remove Maven from the equation)
  • Upload directly to Nexus using wget (ditto)
  • Do the above from the office servers (is it the server network?)
  • Do the above from office workstations (is it the entire Cambridge network?)
  • Do the above from EC2 instances in AWS (Oregon) (remove Cambridge from the equation)
  • Try a (much) newer version of Windows that hasn’t been hardened (maybe issues with TCP windowing and other high-latency improvements)
  • Do the above from Linux instead of Windows (remove Windows from the equation)
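
For the raw-transfer checks, the commands were along these lines.  This is a rough sketch rather than our exact commands: the hostname and repository paths are placeholders, and while we used wget in both directions, curl is shown for the upload because its --upload-file flag issues a plain HTTP PUT:

# Illustrative commands -- hostname and repository paths are placeholders

# Download straight from Nexus, bypassing Maven
time wget -O /dev/null https://nexus.example.com/repository/releases/tools/tools.zip

# Upload straight to Nexus, bypassing Maven (a raw/hosted repository accepts HTTP PUT)
time curl -u "$NEXUS_USER:$NEXUS_PASS" --upload-file tools.zip \
     https://nexus.example.com/repository/upload-tests/tools.zip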

When we switched from Windows to Linux, we stood back in disbelief.  The upload was now taking 90 seconds instead of 22 minutes.  We found that Maven on Windows has extremely poor network performance.  We temporarily switched to Maven on Linux by splitting up the build job to have a separate upload job that was tied to the Jenkins master node (running Linux).
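
A standalone upload job like that can be as small as one Maven deploy-file invocation.  Here’s a sketch, with placeholder coordinates and URL (not necessarily how our job publishes the artifact):

# Hypothetical sketch of a split-out upload job
mvn deploy:deploy-file \
    -DrepositoryId=nexus \
    -Durl=https://nexus.example.com/repository/releases \
    -DgroupId=com.quickbase.tools \
    -DartifactId=tools \
    -Dversion=1.0.${BUILD_NUMBER} \
    -Dpackaging=zip \
    -Dfile=build/tools.zip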

Our Second Big Win

The next thing we tackled was the “PD CI-Test” group.  These are just TestNG Java tests that hit the Quick Base API to do some automated testing.  We found one simple area to improve: add test data using bulk import instead of per-record inserts.  Since this was in setup code, the several-second difference added up to … drum roll … 18 minutes!
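
The actual tests are TestNG Java, but the gist of the change is easy to show with a shell sketch against the classic Quick Base HTTP API.  Everything below is illustrative: the realm, table ID, field ID, and file name are placeholders, and authentication is omitted.

# Before: one HTTP round-trip per record
while read -r row; do
  curl -s -X POST "https://myrealm.quickbase.com/db/TESTTABLEID" \
       --data "a=API_AddRecord&_fid_6=${row}"
done < seed-data.csv

# After: a single bulk import covering the whole file
curl -s -X POST "https://myrealm.quickbase.com/db/TESTTABLEID" \
     --data-urlencode "a=API_ImportFromCSV" \
     --data-urlencode "clist=6" \
     --data-urlencode "records_csv@seed-data.csv"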

Number Three

There was still lots of room for improvement in the “PD CI-Test” group, so we found one other quick win.  After we’d encountered the Maven slowness, we started to question the speed of Ant on Windows.  The server was only running at 20% CPU when the tests were running, so we suspected something wasn’t going as fast as it could.  Switching our tests to be called via Gradle instead of Ant saved us another 12 minutes!

Assessing Where We Are Now

In 2 months, our diagram looked like this:

[Diagram: the same build stages two months later, after the improvements]

You can bet that was exciting!  Now we have the momentum and people believe it can be done.

We’ve continued to make further improvements, such as moving from the aging hardware in the Cambridge server room to AWS using the Jenkins EC2 plugin, and then taking advantage of the C5 instance types (which boot our Windows AMI in 4 minutes instead of 10) by building our own version of the plugin and submitting a PR for it upstream.  Build times are currently averaging 26 minutes, and we’ve got items on the roadmap (including moving to Jenkins Pipeline so we can easily take advantage of parallelism) that should get us closer to 15.  After that, we run into limitations of the MSVC++ linker, which does a few things single-threaded; one of our projects is quite large and produces a single binary.  The next step there is to break that project up (e.g. into libraries).  That will take more effort, so we’ve left it for last.

Will we ever get to 2 minutes?  Who’s to say?  The purpose of setting the goal that low was to fire up people’s imaginations.  And it has.

Once Upon a Time …

Quick Base is the platform that businesses use to quickly turn ideas about better ways to work into apps that make them more efficient, informed, and productive.  It has been around for nearly 20 years.  It’s a successful SaaS offering serving billions of requests per month.  It’s primarily written in MSVC++ running on Windows.  If you’ve been in the software industry long enough, you can imagine some of the tech debt acquired over its lifetime.  It makes very efficient use of server hardware but it’s grown past the point where it needs to let go of the old ways of doing things (which were appropriate “back then”).  Namely, there are monoliths to break down, automated test harnesses to build, code to rewrite to be testable, and build systems to re-think.

This story begins about 5 years ago when we started having the re-architecture discussions that most software companies do as they start having “success-based problems.”  At that point, Quick Base was essentially 100% C++ on Windows with an ever-increasing success rate with companies that wanted to store more data in their apps, have more users accessing their apps, and create more complicated apps than we could ever imagine.  Internally we constantly refer to the performance characteristics of an application ecosystem as the combination of those 3 things: size, concurrency/activity, and complexity.  That means there’s no single lever we can pull to increase how apps scale and succeed on our platform.

As a way to better meet these challenges, we became hyper-focused on solving for developer productivity, which included looking at which languages the talent pool was most familiar with, which had strong testability characteristics and support, and which would support an evolution – we wanted both code bases (C++ and the candidate) to be able to, for example, share a connection with the SQL server without having to manage what flags mean what in two places and risk getting that wrong.  C#/.NET was an obvious choice, and became the winner … at least for a short while.  We did build some stuff in .NET (and we continue to do so today; you’ll read more about that in later posts), but this approach didn’t last long.

The belief that we should consolidate technologies to support better economies of scale (software contracts, support, staffing, you name it) was overwhelming, and it ultimately sent us down the wrong path.  We started building on technologies that had integration challenges with the existing platform, and we couldn’t take advantage of our existing SDLC (think: build/test/deploy as well as the IDE).

And then, we fell into the trap that many software companies do as our approach evolved into a complete re-architecture.  We believed the only viable way to go from old to new was to start over and migrate our customers.  We believed that incrementally breaking down the monolith was not possible.  So, we spawned a small scrum team to do a PoC, which turned into 2 teams, then 3, and then a business decision to put most of our engineering effort into the re-architecture in order to focus and just get it done and behind us.

All along the way there was that little voice inside that kept telling us this was wrong.  It occasionally came out during moments of frustration, or over lunch, or over a drink down the street.  But we succumbed to the inertia of the high-speed train.  We further exacerbated the issue by materially changing execution strategies at least 3-4 times because we discovered how difficult it was to recreate even an MVP of Quick Base.  After 4 years of producing something that ultimately didn’t deliver value to our customers, we summoned the courage to have a heart-to-heart with ourselves and canceled the project.  Why?  We obviously weren’t delivering value yet, and (once we were honest with ourselves) we knew we wouldn’t for a while – too long.  It’s excruciatingly hard to abandon something you’ve poured years of your heart into and that feels “so close to shipping” (but in reality, it’s not).  It feels like you’re abandoning a child.  We found that belligerently asking “will this deliver customer value (in a timely fashion)?” gave us the strength and clarity to make the hard decision.

Did we all come to work one day and just stop working on the re-architecture and begin working on the existing platform?  For many reasons, no.  We needed to go back to the drawing board with our roadmap, and we had to somehow shift our development organization from 10% C++ / 90% Java/Node.js to 75% C++ / 25% Java/Node.js.  That’s right, we are continuing to work with the newer technologies.  We didn’t throw everything away.  In fact, we actually kept a lot of it.  As it turns out, we discovered through our journey that the fastest and most sustainable way to deliver more value to our customers was to iterate on the technology we have and tactically augment it with the new services and paradigms we’d originally built to serve the re-architecture.

Just like mistakes are our biggest teachers, so was the re-architecture.  We didn’t completely waste 4 years of our lives and money.  In fact, we know more about our customers, ourselves, the market, the technology, our own “secret sauce,” how to build & test & deploy software, and much more.  We have a new strategy which allows us to deliver value to our customers on an ongoing basis (starting right yesterday) while making meaningful progress on our software architecture as well as our build/test/deploy systems.  For me personally, I learned at the highest rate I’ve ever learned during the last several years.  And that learning (and some of the systems that were built during the re-architecture) is serving our new approach well.  Many of the upcoming posts will discuss things we built and learned over the last 4 years.