Quick Base is the platform that businesses use to quickly turn ideas about better ways to work into apps that make them more efficient, informed, and productive. It has been around for nearly 20 years, and it’s a successful SaaS offering serving billions of requests per month. It’s primarily written in MSVC++ running on Windows. If you’ve been in the software industry long enough, you can imagine some of the tech debt acquired over that lifetime. It makes very efficient use of server hardware, but it has grown to the point where it needs to let go of the old ways of doing things (which were appropriate “back then”). Namely, there are monoliths to break down, automated test harnesses to build, code to rewrite to be testable, and build systems to re-think.
This story begins about 5 years ago, when we started having the re-architecture discussions that most software companies do once they hit “success-based problems.” At that point, Quick Base was essentially 100% C++ on Windows, with ever-increasing adoption by companies that wanted to store more data in their apps, have more users accessing their apps, and create more complicated apps than we could ever imagine. Internally, we constantly refer to the performance characteristics of an application ecosystem as the combination of those 3 things: size, concurrency/activity, and complexity. That means there’s no single lever we can pull to increase how apps scale and succeed on our platform.
As a way to better meet these challenges, we became hyper-focused on solving for developer productivity. That meant looking at which languages the talent pool was most familiar with, which had strong testability characteristics and support, and which would support an evolution: we wanted both code bases (C++ and the candidate) to, for example, share a connection with the SQL server without having to manage what each flag means in two places and risk getting that wrong. C#/.NET was an obvious choice, and became the winner … at least for a short while. We did build some things in .NET (and continue to do so today; you’ll read more about that in later posts), but this approach didn’t last long.
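To make that interop goal concrete, here’s a minimal sketch of the “define the flags once” idea on the .NET side. Everything here is hypothetical (the settings file name and its fields aren’t from Quick Base’s code); the point is simply that both the C++ engine and the C# code would read one shared settings file, so the meaning of each connection flag lives in a single place.

```csharp
// Hypothetical sketch: one shared settings file consumed by both code bases,
// so connection flags are defined (and interpreted) in exactly one place.
using System.Data.SqlClient;
using System.IO;
using System.Text.Json;

public class DbSettings
{
    public string Server { get; set; }
    public string Database { get; set; }
    public bool Encrypt { get; set; }
    public int ConnectTimeoutSeconds { get; set; }
}

public static class Db
{
    // The C++ side would parse the same file with its own JSON reader.
    public static SqlConnection Open(string settingsPath = "db-settings.json")
    {
        var s = JsonSerializer.Deserialize<DbSettings>(File.ReadAllText(settingsPath));
        var b = new SqlConnectionStringBuilder
        {
            DataSource = s.Server,
            InitialCatalog = s.Database,
            Encrypt = s.Encrypt,
            ConnectTimeout = s.ConnectTimeoutSeconds,
            IntegratedSecurity = true // assumption: Windows auth, given the Windows-based stack
        };
        var conn = new SqlConnection(b.ConnectionString);
        conn.Open();
        return conn;
    }
}
```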
The belief that consolidating technologies would yield better economies of scale (software contracts, support, staffing, you name it) was overwhelming, and it ultimately sent us down the wrong path. We started building on technologies that had integration challenges with the existing platform, and we couldn’t take advantage of our existing SDLC (think: build/test/deploy as well as the IDE).
And then we fell into the trap that many software companies do: our approach evolved into a complete re-architecture. We believed the only viable way to go from old to new was to start over and migrate our customers, and that incrementally breaking down the monolith was not possible. So we spawned a small scrum team to do a PoC, which turned into 2 teams, then 3, and then a business decision to put most of our engineering effort into the re-architecture in order to focus, get it done, and put it behind us.
All along the way, there was that little voice inside telling us this was wrong. It occasionally came out during moments of frustration, or over lunch, or over a drink down the street. But we succumbed to the inertia of the high-speed train. We further exacerbated the issue by materially changing execution strategies at least 3-4 times as we discovered how difficult it was to recreate even an MVP of Quick Base. After 4 years of producing something that ultimately didn’t deliver value to our customers, we summoned the courage to have a heart-to-heart with ourselves and canceled the project. Why? We obviously weren’t delivering value yet, and (once we were honest with ourselves) we knew we wouldn’t for a while: too long. It’s excruciatingly hard to abandon something you’ve poured years of your heart into, something that feels “so close to shipping” (but in reality is not). It feels like abandoning a child. We found that belligerently asking “will this deliver customer value (in a timely fashion)?” gave us the strength and clarity to make the hard decision.
Did we all come to work one day, just stop working on the re-architecture, and begin working on the existing platform? For many reasons, no. We needed to go back to the drawing board with our roadmap, and we had to somehow shift our development organization from 10% C++ / 90% Java/Node.js to 75% C++ / 25% Java/Node.js. That’s right: we are continuing to work with the newer technologies. We didn’t throw everything away; we kept a lot of it. We discovered through our journey that the fastest and most sustainable way to deliver more value to our customers was to iterate on the technology we have and tactically augment it with the new services and paradigms we’d originally built to serve the re-architecture.
Just as mistakes are our biggest teachers, so was the re-architecture. We didn’t completely waste 4 years of our lives and money. We now know more about our customers, ourselves, the market, the technology, our own “secret sauce,” how to build, test, and deploy software, and much more. We have a new strategy that allows us to deliver value to our customers on an ongoing basis (starting yesterday) while making meaningful progress on our software architecture as well as our build/test/deploy systems. For me personally, I learned faster during the last several years than I ever have. That learning (and some of the systems built during the re-architecture) is serving our new approach well. Many of the upcoming posts will discuss things we built and learned over the last 4 years.