Let’s take a trip back in time to the 1990s, when proprietary software reigned, but open source was starting to come into its own. What caused this switch, and more importantly, what can we learn from it today as we shift into cloud-native environments?
## An infrastructure history lesson
I’ll begin with a highly opinionated, open source view of infrastructure’s history over the past 30 years. In the 1990s, Linux was merely a blip on most organizations’ radar, if they had heard of it at all. There was early buy-in from companies that quickly saw the benefits of Linux, mostly as a cheap replacement for proprietary Unix, but the standard way to deploy a server was with a proprietary form of Unix or, increasingly, with Microsoft Windows NT.
The proprietary nature of this tooling provided a fertile ecosystem for even more proprietary software. Software was boxed up to be sold in stores. Even open source got in on the packaging game; you could buy Linux off the shelf instead of tying up your internet connection downloading it from free sources. Going to the store or working with your software vendor was just how you got software.
Where I think things changed was with the rise of the LAMP stack (Linux, Apache, MySQL, and PHP/Perl/Python).
The LAMP stack is a major success story. It was stable, scalable, and relatively user-friendly. At the same time, I started seeing dissatisfaction with proprietary solutions. Once customers had a taste of open source through the LAMP stack, their expectations of software changed, including:
- reluctance to be locked in by a vendor,
- concern over security,
- desire to fix bugs themselves, and
- recognition that innovation is stifled when software is developed in isolation.
On the technical side, we also saw a massive change in how organizations use software. Suddenly, downtime for a website was unacceptable, and there was a move toward greater reliance on scaling and automation. In the past decade especially, we’ve seen a shift from the traditional “pet” model of infrastructure to a “cattle” model, where servers can be swapped out and replaced rather than kept and named. Companies now work with massive amounts of data, driving a greater focus on data retention and on the speed of processing that data and returning it to users.
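To make the “cattle” idea concrete, here is a minimal Python sketch of the pattern (the `Server`, `is_healthy`, `provision_server`, and `reconcile` names are hypothetical illustrations, not any particular tool’s API): when a server fails a health check, it is destroyed and replaced from a standard image rather than logged into and repaired by name.

```python
import uuid
import random

class Server:
    """An interchangeable server built from a standard image (a 'head of cattle')."""
    def __init__(self, image: str):
        self.id = f"srv-{uuid.uuid4().hex[:8]}"  # generated ID, not a pet name
        self.image = image

def is_healthy(server: Server) -> bool:
    # Stand-in for a real health check (HTTP probe, heartbeat, etc.).
    return random.random() > 0.2

def provision_server(image: str) -> Server:
    # Stand-in for an API call to a cloud provider or hypervisor.
    return Server(image)

def reconcile(fleet: list[Server], image: str) -> list[Server]:
    """Cattle model: replace unhealthy servers instead of repairing them in place."""
    healthy = [s for s in fleet if is_healthy(s)]
    while len(healthy) < len(fleet):
        replacement = provision_server(image)
        print(f"Replaced a failed server with {replacement.id}")
        healthy.append(replacement)
    return healthy

fleet = [provision_server("web-v1.2") for _ in range(5)]
fleet = reconcile(fleet, "web-v1.2")
```

A “pet” workflow, by contrast, would identify the ailing machine by name and nurse it back to health in place.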