Software performance and scalability are frequent topics when we talk about application development. A big reason is that an application's performance and scalability directly affect its success in the market. No matter how good its user interface, an application won't claim market share if its response time is sluggish.
This is why we spend so much time improving an application’s performance and scalability as its user base grows.
Where usual testing practices fail
Fortunately, we have many tools to test software behavior under high-stress conditions. Some help identify the causes of performance and scalability issues, while benchmark tools stress-test systems to provide a relative measure of stability under high load. However, these tools fall short when we use them to understand the performance of enterprise products. Such products are generally not single applications; they may consist of several different applications interacting with each other to provide a consistent and unified user experience.
Testing only a product's individual components may not produce meaningful data about its performance and scalability. The real numbers can be gathered only by testing the application in real-life scenarios, that is, by subjecting the entire enterprise application to a realistic workload.
The question becomes: How can we achieve this real-life workload in a test scenario?
Containers to the rescue
The answer is containers. To explain how containers can help us understand a product’s performance and scalability, let’s look at Puppet, a software configuration management tool, as an example.
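To make the idea concrete before we dig into the details, here is a minimal sketch of how containers can simulate a fleet of clients. It uses Python with the Docker SDK (docker-py); the `puppet/puppet-agent` image name, the `PUPPET_SERVER` environment variable, and the server hostname are illustrative assumptions, not a description of the exact setup.

```python
import docker

AGENT_IMAGE = "puppet/puppet-agent"   # assumed agent image name
NUM_AGENTS = 50                       # number of simulated client nodes
PUPPET_SERVER = "puppet.example.com"  # assumed Puppet server hostname

client = docker.from_env()

# Launch lightweight containers, each acting as an independent Puppet
# agent checking in against the same server. Together they approximate
# the client load of a real fleet of managed nodes.
agents = []
for i in range(NUM_AGENTS):
    container = client.containers.run(
        AGENT_IMAGE,
        command=["agent", "--verbose", "--onetime", "--no-daemonize"],
        environment={"PUPPET_SERVER": PUPPET_SERVER},
        name=f"load-test-agent-{i}",
        detach=True,
    )
    agents.append(container)

# Wait for every agent run to finish, then clean up the containers.
for container in agents:
    container.wait()
    container.remove()
```

Because containers are far cheaper than virtual machines, scaling the simulated fleet up or down is largely a matter of changing NUM_AGENTS, which is what makes this approach practical for exercising an entire enterprise product under a realistic workload.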