Test Coverage

“Is this feature fully tested?”, “Did you test everything?”, “What’s the status of coverage?” These are the types of questions that a tester would usually get from non-testers in a software development project. They are variants of each other, all gravitating around the concept of ‘coverage’. So, what is coverage?

Among the things most commonly meant by “coverage” are:

• test progress;

• the number of conditions of a given feature that are associated with at least one test case;

• the number of features of a given test object that are associated with at least one test case;

• a mixture thereof;

• the relative amount of project artefacts such as user stories, use cases, and so forth that the team of testers looked at.

The problem is that this diverse, and very often inaccurate, usage leads to a lot of misunderstandings, inefficiencies, and team-internal conflicts. A solution lies in making sure that at least the basics of software testing are well-known beyond the test team proper. Also, it is vital to agree on the meaning of central technical terms.

The International Software Testing Qualifications Board (ISTQB) defines many of the central terms, and adhering to its definitions, for example, is one possible strategy for achieving project-wide common ground. But this is not about sticking to a particular glossary. The point, rather, is that some words already carry a very specific meaning. Altering that meaning, substituting it, or, even worse, shuffling it around leads to avoidable confusion in a project.

“Coverage” is especially tricky because it is commonly used by project managers and inexperienced test managers alike. As a project member, make sure that all relevant terminology is used consistently across the project, for example by creating a project dictionary. Moreover, make sure that everyone is aware that a common understanding, particularly of technical terms, can be vital for the success of a project.

First of all, coverage has absolutely nothing to do with progress. If anyone on the project uses the term this way, correct it! However, questions like “Is this fully tested?”, “Did you test everything?” etc. point to a more general issue. It is widely accepted in the testing profession that exhaustive testing is neither desirable nor, in fact, attainable. To non-testers, this often comes as a surprise. Questions like those above might therefore indicate a conflict that goes beyond mere misunderstanding.

Additional conflicts might arise because “coverage” is ubiquitous. Developers talk about it, as do project managers. But while developers typically know that the two groups mean different things by it, project managers might be unaware of this. When developers use the term, they usually mean a structural, code-level metric such as statement or branch coverage. That, for sure, is not what project managers are interested in. So, when they state that ‘we must increase coverage’ or ‘we must reach a certain level of coverage’, you need to ask them: What exactly do you want to cover? I’ll cover whatever you want. Just tell me what it is!
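
To illustrate the developer’s sense of the term with a toy example (the line numbers and figures below are invented, and a real project would of course use a proper coverage tool): statement coverage is simply the share of executable statements that the test runs actually exercise.

```python
# Toy illustration of statement coverage; not a real coverage tool.
executable_lines = {1, 2, 3, 4, 5, 6}   # executable statements in a module
executed_lines = {1, 2, 3, 5}           # statements hit while running the tests

statement_coverage = len(executed_lines & executable_lines) / len(executable_lines)
print(f"Statement coverage: {statement_coverage:.0%}")  # -> 67%
```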

Unlike test progress, conditions and features are genuinely relevant to coverage, and the same is true of user stories, use cases, acceptance criteria, epics, and similar project artefacts. It is important, though, to have a precise number at hand against which to compare one’s test cases.

Test conditions are difficult in that they are, to a certain extent, subjective or variable. Leaving trivial cases aside, it is often debatable how many test conditions a given feature contains. So, while conditions are a possible metric, they might not be the perfect choice.

Features are better suited to serve as a figure against which to track coverage, but they are too coarse-grained. Ideally, you want the entities you count to be discretely coverable: an entity either is covered or it is not, with no matter of degree. Because of its coarse-grained nature, however, a feature can only be covered gradually. The same reasoning applies to use cases and epics.

While a product owner, project lead, etc. might ultimately be interested in features (or use cases, or epics) being covered, it is more appropriate to derive feature-level coverage from some finer-grained, fundamental metric.

Acceptance criteria are difficult for two other reasons. Firstly, where epics are too coarse-grained, acceptance criteria are too fine-grained (as far as coverage is concerned); you are normally not interested in partial coverage of a story. Secondly, the criteria/story ratio is volatile. Depending on the (relatively arbitrary) number of acceptance criteria per user story, you would get varying degrees of coverage, even for a constant number of test cases.
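
A small, purely illustrative calculation makes this volatility visible (the story names and numbers are made up): the same amount of testing yields rather different coverage figures, depending only on how finely the acceptance criteria happen to be sliced.

```python
# Illustrative only: two stories, each with 3 test cases covering 3 criteria,
# but split into a different (fairly arbitrary) number of acceptance criteria.
stories = {
    "US-1": {"criteria": 4, "criteria_with_tests": 3},   # 3 test cases
    "US-2": {"criteria": 10, "criteria_with_tests": 3},  # also 3 test cases
}

for story, data in stories.items():
    coverage = data["criteria_with_tests"] / data["criteria"]
    print(f"{story}: {coverage:.0%} of acceptance criteria covered")

# US-1: 75%, US-2: 30%. The same testing effort produces very different-looking
# numbers, driven only by how finely the acceptance criteria were sliced.
```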

This is why, I’d suggest, the ideal candidate for a coverage target is the user story. It exhibits just the right level of granularity: (i) it is an independently ‘shippable’ entity from the dev perspective; (ii) it is discretely coverable; (iii) it is ‘interesting’ from the product’s point of view. The only viable alternative is, I reckon, a requirement. But even then, the central artefact type from a test management perspective would still be the story, as you would design test cases per story and use the stories as a vehicle to track requirements coverage only indirectly.
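
To sketch what such a story-centric view could look like (the story, requirement, and test case identifiers are, of course, invented): a story counts as covered once at least one test case is associated with it, and requirements coverage is then derived indirectly via the story-to-requirement links.

```python
# Hypothetical data: which test cases belong to which user story,
# and which requirements each story contributes to.
story_tests = {
    "US-1": ["TC-1", "TC-2"],
    "US-2": ["TC-3"],
    "US-3": [],               # no test case yet
}
story_requirements = {
    "US-1": ["REQ-A"],
    "US-2": ["REQ-A", "REQ-B"],
    "US-3": ["REQ-C"],
}

# A story is discretely covered: it either has at least one test case or it does not.
covered_stories = {s for s, tcs in story_tests.items() if tcs}
print(f"Story coverage: {len(covered_stories)}/{len(story_tests)}")

# Requirements coverage is tracked only indirectly, via the covered stories.
all_reqs = {r for reqs in story_requirements.values() for r in reqs}
covered_reqs = {r for s in covered_stories for r in story_requirements[s]}
print(f"Requirements coverage (derived): {len(covered_reqs)}/{len(all_reqs)}")
```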
