Editor’s Note: Welcome to the Leadership In Test series from software testing guru & consultant Paul Gerrard. The series is designed to help testers with a few years of experience—especially those on agile teams—excel in their test lead and management roles.
In previous articles, we’ve covered what testing is and how to define a test strategy. The focus of this article is test models and how to use them to test and manage your testing.
Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!
Hello and welcome to article three in the Leadership In Test series. In this article we will be covering:
- What a test model is
- Using models to test
- Models and coverage
- Why models simplify, and why one model is never enough
- Using models to manage
Let’s dive in.
What Is A Test Model?
Testing is a process in which we create mental models of the environment, the program, human nature, and the tests themselves. Each model is used either until we accept the behaviour is correct or until the model is no longer sufficient for the purpose.
Boris Beizer, Software Testing Techniques, 1990
Test design is the process by which we select, from the galaxy of options available, the tests that we believe will be most valuable to us and our stakeholders. Test models help us to select tests in a systematic way and are fundamental to testing.
The way testers use models is this:
- We identify and explore Sources of Knowledge to build test models.
- We use these models to challenge and validate our sources, improving our sources and our models.
- We use these models to inform testing and also development.
Given a mission for testing, our first task is to identify Sources of Knowledge. These might be:
- Documentation: specifications, designs, requirements, standards, guidelines and so on.
- People: stakeholders, users, analysts, designers, developers and others.
- Experience: your own knowledge and experience of similar (or dissimilar) systems, plus your preferences, prejudices, guesses, hunches, beliefs and biases.
- New System: the system under test, if it exists and is available and accessible.
- Old System: the system to be replaced is obviously a source – it may, in some areas of functionality, provide an oracle for expected behaviour.
It’s important to note that all our sources of knowledge are fallible and incomplete and so are our models. Testers use experience, skill and judgement to sift through these sources, compare, contrast and challenge them, and to arrive at consensus.
All models are wrong, but some are useful
George Box, Statistician.
A test model might be a checklist or set of criteria. It could also be a diagram derived from a design document or an analysis of narrative text. Many test models are never committed to paper – they can be mental models constructed specifically to guide the tester while they explore the system under test.
The purpose of models is to simplify complex situations by omitting detail that is not relevant at this point in time. We use models to simplify a problem – selecting things to test, for example. The model informs our thinking, and we select tests by identifying occurrences of some key aspect of the model.
We might select branches in a flowchart or control-flow diagram; state-transitions in a state model; boundaries in a model of an input (or output) domain; scenarios derived from user stories written in the Gherkin domain-specific language.
But beware: sometimes models make omissions that are not safe – perhaps the model over-simplifies the situation – and we need to pay attention to this.
If we don’t have models directly from our sources, we have to invent them. For example, where requirements are presented as narrative text, we need to use the language of requirements to derive features and the logic of their behaviour. This can be difficult for developers and testers, and often it’s a collaborative effort, but persist we must.
Using Models To Test
We use test models to:
- Simplify the context of the test. Irrelevant or negligible details are ignored in the model.
- Focus attention on one perspective of the behaviour of the system. These might be critical or risky features, technical aspects, user operations of interest, or aspects of the construction or architecture of the system.
- Generate a set of unique (within the context of the model) tests that are diverse (with respect to that model).
- Enable the testing to be estimated, planned, monitored and evaluated for its completeness (coverage).
From the tester’s point of view, a model helps us to recognise aspects of the system that could be the subject of a test.
Models And Coverage
Coverage is the term we use to describe the thoroughness or completeness of our testing with respect to our test model. A coverage item is something we want to exercise in our tests.
Ideally, our test model should identify coverage items in an objective way. When we have planned or executed tests that cover items identified by our model, we can quantify the coverage achieved and, as a proportion of all items on the model, express that coverage as a percentage.
Any model that allows coverage items to be identified can be used.
Models are often graphical: flowcharts, use cases, sequence diagrams and so on. These and many other models have elements (or blobs) connected by lines or arrows. Such models are usually called directed graphs.
Imagine a graphical model that comprises blobs and arrows. At least two coverage targets could be defined:
- All blobs coverage
- All arrows coverage
- And so on
So any model that is a directed graph can be treated in the same way.
A formal model allows coverage items to be reliably identified. A quantitative coverage measure can therefore be defined and used as a measurable target.
Informal models tend to be checklists or criteria used to brainstorm a list of coverage items, to trigger ideas for testing, or to guide an exploratory testing session. These lists or criteria might be pre-defined, prepared as part of a test plan, or adopted during an exploratory test session.
Informal models are different from formal models in that the derivation of coverage items is dependent on the experience, intuition and imagination of the practitioner using them, so coverage using these models can never be quantified. We can never know what complete coverage means with respect to these models.
Tests derived from an informal model are just as valid as tests derived from a formal model if they increase our knowledge of the behaviour or capability of our system.
Models Simplify, So Use More Than One
A good model provides a means of understanding complexity and achieves this partly by excluding details that aren’t relevant. Your model might use the concept of state, failure modes or flows, input combinations, domain values and so on.
One model is never enough to fully test a system. All models compromise, so we need multiple models. This concept is normally referred to as ‘diverse half-measures’. In practice this means we need a diversity of partial models to properly test a system.
Although not usually described in these terms, the test stages in a waterfall project use models from different perspectives. Unit, subsystem integration, system-level and user testing all have differing goals; each uses a different model and perspective – this is where diversity comes from.
Using Models To Manage
Models are at the heart of testing and also of test management. There are four key aspects to this:
- Stakeholder engagement
- Scope
- Coverage
- Estimation and progress monitoring
Stakeholder Engagement
When we plan and scope tests, and explain progress and the meaning of coverage to stakeholders, we must use models that stakeholders understand and that are meaningful in terms of their goals.
If we plan a user test, we’ll probably adopt the business process flow as our model and as a template for tracing paths to exercise system features that matter to the user. If we are testing integration of service components on behalf of a technical architect, we’ll use the architectural model, collaboration diagrams, interface specifications and so on as the basis of our testing. If we are testing features defined by users, we’ll use the user stories that were the outcome of earlier collaborative requirements work.
If stakeholders don’t understand your models, they will not understand, trust or invest in your testing. They may not even trust you.
Managing Scope
The first activity in a systems thinking discipline is to define a system boundary. In testing, the first model you define will help you to scope the testing. The diagram below is a schematic of the system architecture – a ‘system of systems’ – in an organisation.
Each system (the concentric circles) sits within an application area such as CRM, accounting or the website. All of the systems and application areas sit inside the ‘system of systems’. There is no detail on the model, of course, but it’s plain to see how each system fits into the overall architecture.
We could easily define the scope of our testing to be the ERP systems, for example.
In the second diagram below, we have added some more detail to the system architecture and suggested three ways we could define scope more specifically.
- The systems shaded in yellow are the so-called systems of record. These systems might share a database, for example, and changes to the database schema could adversely affect any of them – so all would be in scope for testing.
- The systems enclosed by the purple line might share some common functionality or infrastructure – perhaps they all use a common set of web services, the same messaging system, or run on the same server.
- The dashed blue line denotes a user journey which utilises the systems connected by the line. Perhaps the user journey has changed and our focus is on the consistency and accuracy of the flow of data between these systems.
A model can show what is in scope for testing but, just as importantly, what is out of scope too.
A model helps to define the scope of a test and also to explain scope to stakeholders in terms they understand, appreciate and (hopefully) approve of.
When we use a model to define scope, the model defines the territory, and the items in scope identify the places we intend to explore and test.
Managing Coverage
Coverage measurement can help to make testing more manageable. If we don’t have a notion of coverage, we may not be able to answer questions like, ‘what has been tested?’, ‘what has not been tested?’, ‘have we finished yet?’, ‘how many tests remain?’ This is particularly awkward for a test manager.
Deciding the coverage we plan to achieve is the natural next step once scope is defined.
We use the scoping model to define the places we will test. Our coverage model tells stakeholders how thoroughly we plan to test in those places.
Test models and coverage measures can be used to define quantitative or qualitative targets for test design and execution. To varying degrees, we can use such targets to plan and estimate. We can also measure progress and infer the thoroughness or completeness of the testing we have planned or executed. But we need to be very careful with any quantitative coverage measures or percentages we use.
A coverage measure (based on a formal test model) may be calculated objectively, but there is no formula or law that says X coverage means Y quality or Z confidence. All coverage measures give only indirect, qualitative, subjective insights into the thoroughness or completeness of our testing. There is no meaningful relationship between coverage and the quality or acceptability of systems.
Quantitative coverage targets are often used to define exit criteria for the completion of testing, but these criteria are arbitrary. A more stringent coverage target might generate twice as many items to cover. However, twice as many tests costing twice as much do not make a system twice as tested or twice as reliable. Such an interpretation is meaningless and foolish.
Sometimes, the formal models used to define and build a system may be imposed on the testers for them to use to define coverage targets. At other times, the testers may have little documentation to work with and have to invent models of their own. The selection of any test model and coverage target is somewhat arbitrary and subjective. Consequently, informal test models and coverage measures can be just as useful as established, formal models.
Quick Model Making Exercise
A quick exercise to finish on. In the following examples, sketch out what you think the model might look like – its shape only – either as a picture/diagram, a table or a list:
- Users are concerned about the end to end journeys in the system.
- A messaging service can be in four states: shut down, running, starting up and shutting down.
- An insurance premium calculator has 40 input values which, in combination, influence the calculation; there are dependencies between some inputs.
- An extract/transform/load process has seven stages. After extraction, each stage either rejects records, sends them to a suspense file, or transforms them and passes them to the next stage. The last stage is a load process which handles rejections. Test that all extracted records are accounted for.
And that’s it for test modelling, thanks for reading. We’ll be referring back to models throughout the series, so stay tuned!