Plutora Blog - Test Environment Management

Control Costs: Scale Down Test Environments


You’d be surprised how often organizations insist on running a carbon copy of the production network for a rarely used staging environment. It’s unnecessary: you don’t need a million-dollar staging environment sitting next to your production environment. You should control your test environment costs.

If you are searching for strategies to reduce the cost of your test environments, one approach is to scale down their size. Test environments can be much smaller than production environments, and you can make smart choices about VM sizing to ensure that your test environments don’t break the bank.

Scale Down Test Environments

If you run a large website or a popular app, your production system has hundreds of servers with several pairs of clustered databases running on the most expensive hardware. Maybe you have app servers with 32 cores and 200 GB of RAM and databases with more storage than you thought possible to run on SSD. You’ve decided to spend money on production because it needs to scale. You also have a QA staff telling you that the only way to qualify software for production is to have a staging system that has the same specs as production.

You don’t need the same level of firepower in your test environments as you do in production. You can run smaller clusters of application servers and use less infrastructure, since only a handful of employees use your QA systems. What you are hearing from your QA staff is superstition: the idea that any difference between staging and production is unacceptable is a holdover from an age when production systems were much smaller. If you are running a very large, complex system, it is economically infeasible to “recreate” production.

Despite this fact, there will always be a chorus of developers telling you that your pre-production systems must match the size and scale of production in every way. Don’t listen. QA and Staging support testing processes that focus not on scale but on quality. You need just enough hardware to support software qualification.

While your production system might need to scale to ten thousand TPS, your QA and Staging systems might need to scale to two or three. While your production system supports a million simultaneous users, your QA and Staging systems support ten, maybe twenty, simultaneous testers. Don’t drop a couple million on database hardware in staging just because it would make your QA team feel better if the software were verified on the same infrastructure. You don’t need it.

Is It an Accurate Representation of Production?

But don’t scale down to one server; you’ll need to test some level of redundancy. Your Staging and QA systems should use the same clustering approach as production, and you should aim to test your system with a minimum level of redundancy: four servers across two data centers (a 2×2). If you have a multi-datacenter production network, you should be testing your system with a multi-datacenter cluster. Doing this will allow you to test failover scenarios and other issues encountered in production. There is wisdom in recreating some level of redundancy to test clustering, but you can’t afford to run a carbon copy of production.
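As a minimal sketch, the 2×2 rule above can be expressed as an automated check on a staging topology. The node names, datacenter names, and the `meets_2x2` helper here are all hypothetical, invented for illustration:

```python
from collections import Counter

# Hypothetical staging topology: node name -> datacenter.
staging = {
    "app-1": "dc-east", "app-2": "dc-east",
    "app-3": "dc-west", "app-4": "dc-west",
}

def meets_2x2(topology, min_dcs=2, min_nodes_per_dc=2):
    """True if the cluster spans at least min_dcs datacenters,
    each containing at least min_nodes_per_dc nodes."""
    per_dc = Counter(topology.values())  # nodes per datacenter
    return (len(per_dc) >= min_dcs and
            all(n >= min_nodes_per_dc for n in per_dc.values()))

print(meets_2x2(staging))  # a 2x2: two nodes in each of two datacenters
```

A check like this could run in CI so that nobody quietly collapses staging to a single node and loses the ability to test failover.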

This is especially true if you run systems that support highly scalable web applications. If your production cluster is tens of thousands of machines backed by petabytes of data and more systems than you can keep track of, it is economically infeasible to run a “copy” of production as a staging environment. If you have a two-thousand-node cluster running an application server, your QA and Staging environments can get by with a four-node cluster. Testing environments are for software quality testing, and for testing assumptions made by developers before code hits production.

What about a Performance Testing Environment?

There are times when your QA or performance QA (PQA) environment may need to scale to the same level of capability as your production systems. When that happens, explore dynamic, cloud-based infrastructure to achieve that temporary level of scale: use a public cloud provider to temporarily grow QA into a PQA environment that you can use to test architectural assumptions, but don’t establish a permanent PQA environment at the scale of production.

Instead, Create and Test a Performance Model

If you develop applications at scale, you can avoid having to scale QA to production sizes by creating a reliable “performance model” of your system in production.

What is a “performance model”? A performance model lets you qualify that a system will scale while running a smaller set of servers in Staging and QA. If your performance testing efforts develop a model of system behavior on a few servers, you can then test how that model extrapolates to production. It should be the job of a performance testing team to understand how the performance of a system in QA represents the performance of the system in production. If you perform these tests regularly, you can qualify software with far fewer servers and achieve dramatic cost savings on test environments.

Take, for example, a system that uses an application server and several databases. To develop a performance model that lets you scale your assumptions from a small cluster of QA servers to production, you’ll need to run experiments to understand where your bottleneck is and how the system scales with increasing cluster sizes. This model will help you scale to meet demand, and it will also help control costs associated with QA and Staging, because you’ll be able to qualify the system on a much smaller cluster.
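A hedged sketch of the idea: measure throughput at a few small cluster sizes in QA, fit a simple Amdahl-style contention model to those measurements, and extrapolate to a production-sized cluster. The TPS figures below are invented for illustration, and a real performance team would validate the model against production telemetry before trusting its predictions:

```python
# Hypothetical measurements: TPS observed at small QA cluster sizes.
measured = {1: 100.0, 2: 190.0, 4: 345.0}

def amdahl_tps(nodes, per_node, sigma):
    # Amdahl-style contention model: throughput flattens as the
    # contention coefficient sigma eats into per-node capacity.
    return nodes * per_node / (1 + sigma * (nodes - 1))

# Tiny grid search for per-node TPS and the contention coefficient.
best = None
for s in range(0, 200):
    sigma = s / 1000  # 0.000 .. 0.199
    for per_node in range(80, 121):
        err = sum((amdahl_tps(n, per_node, sigma) - tps) ** 2
                  for n, tps in measured.items())
        if best is None or err < best[0]:
            best = (err, per_node, sigma)

_, per_node, sigma = best
# Extrapolate to a hypothetical 100-node production cluster.
predicted_prod = amdahl_tps(100, per_node, sigma)
print(f"per-node TPS={per_node}, sigma={sigma:.3f}, "
      f"predicted TPS at 100 nodes={predicted_prod:.0f}")
```

The point is not this particular model; it is that a fitted curve from a four-node QA cluster, checked periodically against production, can answer scaling questions without a production-sized test environment.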

@daliborsiroky Dalibor Siroky

Dalibor is the Co-founder and Co-CEO of Plutora. He has 15 years of leadership, consulting, enterprise product, and operations experience across Australia, Asia, and Europe. He has a proven ability to build high-performance teams, turn around difficult situations, develop innovative products, and create lasting value. Prior to Plutora, Dalibor was founder and managing director of Finotaur, a leading provider of independent management consulting services. Before that, he served as CIO of financial advisory software at Macquarie Bank, head of solution architecture at Commonwealth Bank of Australia, and management consultant at PricewaterhouseCoopers. Dalibor received his MBA from the University of Chicago Booth School of Business. Follow him on Twitter @DaliborSiroky.


- About Plutora -

Our mission is to enable companies to manage their enterprise IT pipeline, enterprise IT releases, and IT environments in a simple and transparent manner. Learn about us or find out more about our products.