
Editor’s Note: Welcome to the Leadership In Test series from software testing guru & consultant Paul Gerrard. The series is designed to help testers with a few years of experience—especially those on agile teams—excel in their test lead and management roles.

In a previous article, I took you through how to manage performance testing. Now we’ll discuss IT infrastructure software, testing infrastructure, and test environments.

Sign up for The QA Lead newsletter to get notified when new parts of the series go live. These posts are excerpts from Paul’s Leadership In Test course, which we highly recommend to get a deeper dive into this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

Infrastructure is the term we use to describe all the hardware, cloud services, networking, supporting software, and our application under test required to develop, test, deploy, and operate our systems.

It makes sense not to limit our definition to technology, though. Data centers, office space, desks, desktops, laptops, tablet computers, and mobile phones with their own software stacks installed are all part of the ecosystem required to develop, test, and deploy systems.

If you include developer tools, DevOps tools and procedures, test tools, and the business processes and subject-matter expertise required, there's even more to it.

The most mundane things—the access codes or smart cards used to gain access to buildings—can turn critical if they aren’t in place.

Infrastructure, in all its variety, exists to support your systems' development, testing, deployment, and operations. It is either critical to testing or needs testing.

We’ll look at tools for development, testing, and collaboration in the next article. In this article, we’ll consider what most people regard as test environments and briefly look at what is often termed infrastructure testing. I’ll be covering:

  • Test environments: what they are, and getting them set up and delivered in time
  • Development and system-level test environments
  • Data in test environments
  • Infrastructure testing

Let’s go.

Test Environments

All testing makes an implicit, critical, simplifying assumption: that our tests will be run in a known environment.

What is an Environment?

All systems need to be tested in context. For a test to be meaningful, the system must be installed, set up, deployed, or built in a realistic environment that simulates the real world in which it will be used. 

We might use test scenarios that push the system's capabilities in terms of functionality, performance, or security, for example, but these are properties of the tests, not the environment.

A realistic environment would replicate the business, technical, and organizational context in which the system will operate. Much of this comprises data: the data used to drive business processes, configure the system, and provide reference values.

But perfectly realistic environments are usually impractical or far too expensive (even testers of high-criticality systems such as airplanes, nuclear reactors, or brain scanners have to compromise at some point). Almost all testing takes place in environments that simulate the real world with some acceptable level of compromise.

Cars are tested on rolling roads, wind tunnels, vibration beds, and private test tracks before they are tested on the open road. Computer systems are tested in software labs by programmers and software testers before end-users are engaged to try them out in a production-like environment.



Test in Realistic Environments

Simulated environments are fallible, just like our requirements and test models, but we must live with that.

We must stage tests that are meaningful in the environments we have available, and test outcomes mean what we interpret them to mean.

The reliability of test outcomes depends on the environment in which the tests are run. If a test is run in an environment that is incorrectly set up:

  • A test that fails may imply the system is defective when, in fact, it is correct.
  • A test that passes may imply the system is correct when, in fact, it is defective.

Both situations are highly undesirable, of course.

Getting Environments Set Up and Delivered in Time

Even with the emergence of cloud infrastructure, test environments can be difficult and expensive to set up and maintain.

While support teams are building the new production environment, testers are demanding test environments of their own (and perhaps several of them). As testing progresses, support teams often face competing demands.

Environments for development or for any later test activity may be delivered late or not at all, or they may not be configured or controlled as required. Inevitably, this will delay testing and/or undermine confidence in any outcome from testing.

A critical task is to establish, as early as possible, the need for and requirements of each environment to be used for testing, including a mechanism for managing changes to that environment.

Infrastructure as code is a recent evolution in how environments are constructed: tools follow procedures and use declarative code to define the environment setup.

Although base operating system platforms (servers) can be created easily in the cloud or as virtual machines in your own environment, fully specified, special-purpose servers with all required software, data, configurations, and interfaces take more effort.


However, once set up, they provide a highly efficient means of environment creation. Infrastructure code can be source-controlled just like any application code and managed through change control.
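To make the idea concrete, here is a minimal sketch in Python. It is not any particular tool's API: the environment is described as declarative data, and a provisioning function reads that definition to build (or rebuild) the environment. The field names and values are invented for illustration.

```python
# Hypothetical sketch: a test environment described as declarative data.
# Real infrastructure-as-code tools work on the same principle: the
# definition lives in source control, and the tool makes the environment
# match it.

TEST_ENVIRONMENT = {
    "name": "system-test-01",
    "servers": [
        {"role": "web", "os": "ubuntu-22.04", "count": 2},
        {"role": "db",  "os": "ubuntu-22.04", "count": 1},
    ],
    "software": ["nginx", "postgresql-15", "our-application==2.4.1"],
    "data_set": "regression-baseline-2024-06",
}


def provision(definition: dict) -> None:
    """Walk the definition and create each server.

    In a real tool this would call a cloud or virtualization API; here we
    just print the steps so the sketch stays self-contained and runnable.
    """
    for server in definition["servers"]:
        for n in range(server["count"]):
            print(f'create {server["role"]}-{n} ({server["os"]}) '
                  f'in {definition["name"]}')
    print(f'install: {", ".join(definition["software"])}')
    print(f'load data set: {definition["data_set"]}')


if __name__ == "__main__":
    provision(TEST_ENVIRONMENT)
```

Because the definition is just text, it can be versioned, reviewed, and diffed like any other code, which is what makes environments reproducible.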

A major tenet of continuous delivery is that, as soon as possible, some software (it need not even be useful) should be pushed through the delivery pipeline to prove the processes work.

Of course, this needs viable environments for builds, continuous integration tools, system-level testing, and deployment. The aim is to deploy test and production environments without constraint. Once the environment definitions and deployment processes are in place, generating environments becomes an automated and routine task.

In any case, the definitions of these environments are an early deliverable from the project.
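As an illustration, that first piece of software can be as small as a placeholder smoke test whose only purpose is to prove the pipeline can build, deploy, and exercise something end to end. A minimal sketch in Python, assuming a hypothetical health endpoint on the deployed test environment:

```python
# Placeholder smoke test pushed through the pipeline before any real
# features exist. The endpoint name is hypothetical; substitute whatever
# your own walking skeleton exposes.
import urllib.request


def test_deployed_service_responds():
    url = "http://system-test-01.example.internal/health"  # assumed endpoint
    with urllib.request.urlopen(url, timeout=10) as response:
        assert response.status == 200
```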

Development Environments

Developer testing focuses on the construction of software components that deliver features internally to the application or at the user or presentation layer. 

Tests tend to be driven by knowledge of the code's internal structure, and they may not use or require ‘realistic’ data to run. Tests of low-level components or services are usually run through an API using custom-built or proprietary drivers or tools.

The range of development tools, platforms, and so-called Integrated Development Environments (IDEs) is huge. In this article, we can only touch upon some of the principal test-related requirements and features of environments.

To support the development and the testing in scope for developers, environments need to support the following activities. This is just a selection; there may be additional activities, or variations of these, in your situation:

  • A ‘Sandbox’ environment to experiment with new software. Sandboxes are often used to test new libraries (all common programming languages have hundreds or thousands of them), develop throwaway prototype code, or practice programming techniques. They let developers install, evaluate, and practice with software that is not yet part of the main thread of development, and they can be treated as disposable.
  • Local development environment. This is where developers maintain a local copy of some or all of the application’s source code from a shared code repository and can create builds for local testing. It allows developers to make changes to code in their local copy and to test those changes. Some tests are ad hoc and perhaps never repeated; others are automated. Automated tests are usually retained for the life of the system, particularly if they follow a Test-Driven approach (a minimal example follows this list).
  • Shared (Continuous) Integration environment. When developers are confident their code is ready, they push their changes to the shared, controlled code repository. The CI environment performs automated builds from the repository and executes automated tests against them. At this point, the new or changed code is integrated and tested. The CI system runs automated tests on demand, hourly, or daily, and the whole team is notified and can see the status of tests of the latest integrated build. Failures are exposed promptly and dealt with as a matter of urgency.
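Here is the kind of small, retained automated test that both the local and the CI environments need to be able to run. The function under test is hypothetical; the point is the shape of the test.

```python
# A retained automated test: written by a developer locally, kept in the
# repository, and re-run by the CI environment on every build.
# apply_discount() is a hypothetical piece of application code.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical application code under test."""
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)


if __name__ == "__main__":
    unittest.main()
```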

A development or CI environment supports developer testing, but other application servers, web services, messaging, or database servers that complete the system might not be available. 

If these interfacing systems don’t exist because they haven’t been built, or because they belong to a partnering company and there is only a live system and no test version, then developers have to stub or mock these interfaces to at least be able to test their own code. 

Mocking tools can be sophisticated, but mocked interfaces cannot usually support tests that require integrated data across multiple systems.
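As a simple illustration of stubbing, Python's standard unittest.mock library can stand in for a partner interface that has no test version. The gateway interface and its canned response below are invented for illustration.

```python
# Stubbing an unavailable partner interface so our own code can still be
# tested. unittest.mock is part of the Python standard library; the gateway
# interface and its response format are hypothetical.
from unittest.mock import Mock


def place_order(gateway, order_id: str) -> str:
    """Our own code under test; it depends on the partner gateway."""
    response = gateway.submit(order_id)
    return "confirmed" if response["status"] == "OK" else "rejected"


def test_place_order_with_stubbed_gateway():
    gateway = Mock()
    gateway.submit.return_value = {"status": "OK"}  # canned response

    assert place_order(gateway, "ORD-123") == "confirmed"
    gateway.submit.assert_called_once_with("ORD-123")
```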

If an interface to a test database server is available to developers, their test data might be minimal, not integrated or consistent, and not representative of production data.

Development databases shared across a team are usually unsatisfactory. If there isn't a good regime for managing this shared resource, developers might reuse, corrupt, or delete each other’s data.

System-Level Test Environments

System-level testing focuses on the integration of components and sub-systems in collaboration. 

These environments provide a platform to support the objectives of larger-scale integration, functionality validation, and system operations in the context of user or business processes.

Environments might also be dedicated to the non-functional aspects of the system, such as performance, security, or service management.

[Image: “Works on my machine” graphic]

One of the most common testing pitfalls is where a system tester experiences some kind of failure in their environment but, no matter how hard they try, the developer cannot reproduce that failure in the development environment.

“It works on my machine!”

“Yes, of course, it does.”

This is almost certainly caused by a lack of consistency between the two environments. The difference in behavior might be caused by the configuration, a software version difference, or a difference in the database data.

Differences in data are the first thing to check: they are usually easy to identify and can often be resolved quickly.

When there is a software version or configuration discrepancy, testers and developers can waste a lot of time tracing the cause of the difference in behavior.

When these problems occur, it often means there has been a failure in communication between the developer and the tester. It could also indicate a loss of configuration control in the developer or test environment setup, or in the deployment process.

Infrastructure as code and automated environment provisioning go a long way towards making environment consistency problems a thing of the past.
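Until you reach that point, a practical stopgap is to capture a "fingerprint" of each environment so that version and configuration differences can be diffed rather than hunted for. A small sketch in Python, using only the standard library:

```python
# Capture a simple environment fingerprint (platform, Python version,
# installed package versions) that can be saved from both the developer
# and test environments and compared with a plain diff.
import json
import platform
import sys
from importlib import metadata


def environment_fingerprint() -> dict:
    packages = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
    return {
        "os": platform.platform(),
        "python": sys.version.split()[0],
        "packages": packages,
    }


if __name__ == "__main__":
    print(json.dumps(environment_fingerprint(), indent=2))
```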

Types of Dedicated Test Environments

To support system, acceptance, and non-functional testing, environments need to support the following activities (there may be more in your organization):

  • (Functional) System testing environment. In this environment, the system is validated against the requirements documented for the system as a whole. Requirements may be large text documents with tabulated test cases defined for a system test. In agile projects, this environment may be necessary to allow testers to explore the integrated system without limiting themselves to specific features.
  • End-to-end test environment. Where the CI environment allows components to be integrated with subsystems, the business processes may require other interfacing systems (not under the control of developers) to be available. Full-scope environments are required to conduct large-scale integration, business process, or overall acceptance testing. Usually, the data is a copy of live data, or is at least of an appropriate scale. Where large-scale integration needs to be proven, the flows of data and control are exercised using longer user journeys and independent reconciliations of the data across integrated systems. Managing data in test environments is crucial, as we discuss below.
  • Performance environment. These environments must provide a meaningful platform for evaluating the performance of a system (or selected sub-system(s)). Compromises in the architecture may be possible where there is redundancy or cloning of servers. However, the data volumes must be of production scale even if the data is synthetic. Certainly, the environment needs to be of a scale to support production transaction volumes to enable useful prediction of the performance of systems in production.
  • Availability, Resiliency, and Manageability (ARM) environments. In some respects, these environments are similar to the performance environments, but depending on the test objective, variations may be inevitable. Availability testing aims to verify that the system can operate for extended periods without failing. Resilience testing (often called failover testing) checks that when system components fail, they don't cause an unacceptable disruption to the delivered service. Manageability or operations testing aims to demonstrate that system administrative, management, and backup and recovery procedures operate effectively.

Data in Environments

In some very large projects, there can be as many as 20 or even 30 large-scale environments dedicated to different aspects of testing, training, data migration, and trial cutovers. In smaller projects there will be fewer, perhaps only one shared environment or a continuous delivery regime—and all testing might be implemented automatically in environments that are instantiated for a single-use, then torn down.

All environments need data, but the scale and degree of realism of that data can vary. Here are some common patterns in the way test data is acquired and managed. These patterns focus on ownership (local or shared), the means of creation (manual, automated, or copied from production), and scale:

  • Local, manually created, small-scale data—suitable for ad-hoc testing by developers or testers.
  • Local, automated, synthetic data. Suitable for developer automated tests or environments where the functionality of specific modules or features can be covered.
  • Shared, manually created data. Used in integration and system test environments, often where test data has evolved in parallel with manually run tests. Backed up and restored when required.
  • Shared, automatically created data. Used in integration and system test environments where test data has evolved in parallel with automated or manually run tests. Generated and/or restored from backups, when required.
  • Shared large-scale synthetic/random data. Performance and ARM testing require coherent data in large volumes. This data does not usually need to be meaningful: randomised data works fine and is generated when required, or generated initially and restored from backups (a small sketch of synthetic data generation follows this list).
  • Shared large-scale meaningful data. End-to-end, acceptance, or user testing usually needs meaningful data at scale. Sometimes copies or extracts from live data are used. Beware of falling foul of data protection regulations, however, if you do not scramble or anonymise that data.
  • Re-testing and regression testing. You will require a known, controlled dataset, in a known state, so it is usually restored from backups. This applies to any of the environments above as these tests need to be re-run with data in a known state to reproduce failures reliably.
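Here is a minimal sketch, in Python, of generating large-scale synthetic data of the kind referred to above: coherent in shape and ranges, but not meaningful. The field names, value ranges, and file name are invented for illustration.

```python
# Minimal sketch of generating large-scale synthetic test data: the records
# need to be coherent (valid shapes, plausible ranges) but not meaningful.
import csv
import random
import string


def random_customer(customer_id: int) -> dict:
    name = "".join(random.choices(string.ascii_uppercase, k=8))
    return {
        "id": customer_id,
        "name": f"CUST-{name}",
        "balance": round(random.uniform(0, 10_000), 2),
        "region": random.choice(["NORTH", "SOUTH", "EAST", "WEST"]),
    }


def generate(path: str, rows: int) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "balance", "region"])
        writer.writeheader()
        for i in range(1, rows + 1):
            writer.writerow(random_customer(i))


if __name__ == "__main__":
    generate("customers.csv", rows=1_000_000)  # scale towards production volumes
```

In practice you would generate data against your real schema and load it with the database's bulk-load facilities rather than a CSV file, but the principle is the same.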

Infrastructure Testing

At the start of this article, we looked at what infrastructure includes and since then we’ve primarily focused on the technical components, namely the software systems, and presumed that the hardware—real or virtual—is available.

When we build systems initially, we presume that the infrastructure exists and that it functions correctly, is performant, secure, and resilient, etc.

We can test all these aspects when we have integrated our application, and no doubt expose shortcomings in the infrastructure at a relatively late stage in our projects. But finding infrastructure problems so late in a project is usually extremely disruptive.

  • Changes to address failures in infrastructure might require significant redesign and changes in our application.
  • Results from our application or whole-system testing will need repeating.
  • If third-party components such as database, web, networking, or messaging services fail, we are at the mercy of the suppliers (or the open source community) who support them.

To ensure that our confidence in infrastructure components is well-founded, we can rely on our (or others’) experience of using them in the past. Or we need to assess—through testing—their reliability before we commit to using them in the design and construction of our system.

Depending on the infrastructure under investigation, the environment we use may vary—from a single server to a near-complete infrastructure platform. 

Although some tests will be manual, mostly we will use tools, drivers, or robots to simulate the transaction load our application would generate. We would need to mock or stub out these interfaces:

  • Interfaces that are currently not available
  • Interfaces to components that we trust and are easy to simulate
  • Interfaces not in scope that do not affect the infrastructure under test.

Infrastructure, needless to say, usually does not operate through a user or GUI interface.

The integration of our application with the infrastructure will mostly take the form of messaging or remote service calls. Often the traffic to be simulated requires API calls to web or application servers, message or database servers, or services delivered through the cloud or from remote locations.
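A transaction-load driver for this kind of traffic does not need to be elaborate. The sketch below, using only the Python standard library, fires concurrent calls at a hypothetical HTTP endpoint and summarises response times; in practice you might well use a dedicated load-testing tool instead.

```python
# Minimal load driver: simulate concurrent API traffic against an
# infrastructure component. The URL is hypothetical; response times are
# collected so latency can be summarised afterwards.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://infra-under-test.example.internal/api/ping"  # assumed endpoint


def one_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def run(total_calls: int = 1000, concurrency: int = 50):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = sorted(pool.map(one_call, range(total_calls)))
    print(f"median {timings[len(timings) // 2] * 1000:.1f} ms, "
          f"95th percentile {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")


if __name__ == "__main__":
    run()
```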

Performance and ARM objectives may be known, in which case tests can be performed to ensure these objectives are met.

However, infrastructure is often shared with applications other than our own, so knowing its ultimate capacity helps us gauge how much capacity will remain when our application is deployed.

In this case, infrastructure testing addresses the risk to our own application, and perhaps to other applications that will be based on the same infrastructure in the future.

Sign up for The QA Lead newsletter to get notified when new parts of the series go live. These posts are excerpts from Paul’s Leadership In Test course, which we highly recommend to get a deeper dive into this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

Suggested Read: 10 Best Open Source Test Management Tools

Paul Gerrard

Paul is an internationally renowned, award-winning software engineering consultant, author, and coach. He is the host of the Technology Leadership Forum and the Programme Chair of the 2014 EuroSTAR Testing conference.