What is a Test Case?


Test cases are important project artefacts that almost every discipline has contact with: project leads, developers, architects, business analysts, and testers, among others. Given their high relevance and visibility, it is quite surprising that, even among professional testers, there is no agreement on what a test case is. The ISTQB defines test cases as ‘a set of preconditions, inputs, actions, expected results, and postconditions’. That much seems to be clear.

Things tend to be less clear when it comes to the details. First of all, in most projects there is no clear (or explicit) distinction between abstract and concrete test cases, i.e. test data and parametrization are sometimes interwoven into test case specifications. This then leads to a blurred distinction between test case types and test case instances.
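The distinction can be made explicit in code: the abstract test case is the parametrized logic, and each data row turns it into one concrete test case. A minimal sketch in Python (the `login` function and the credential data are invented for illustration):

```python
# An abstract test case: the logic is fixed, the test data varies.
# Each data row instantiates the abstract case as one concrete case.

def login(username: str, password: str) -> bool:
    """Stand-in for the system under test (illustrative only)."""
    valid = {"alice": "secret1", "bob": "secret2"}
    return valid.get(username) == password

# Concrete test data: ((inputs), expected result)
LOGIN_CASES = [
    (("alice", "secret1"), True),   # valid credentials
    (("alice", "wrong"),   False),  # wrong password
    (("mallory", "x"),     False),  # unknown user
]

def run_abstract_login_case() -> int:
    """Execute every concrete instance of the abstract login case."""
    failures = 0
    for (user, pw), expected in LOGIN_CASES:
        if login(user, pw) != expected:
            failures += 1
    return failures

print(run_abstract_login_case())  # 0 -- all concrete instances pass
```

Counting either the one abstract case or the three concrete instances is defensible; what matters is that everyone on the project counts the same way.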

Even though that may be harmless in many instances, there are contexts in which it matters. For example, if there is no clear distinction between ‘abstract’ and ‘concrete’ or between ‘type’ and ‘instance’ at the level of test cases, chances are high that this lack of conceptual rigour trickles down to bug reports. They, in turn, may end up too abstract or too concrete, depending on the context.

A bug report should always be as concrete as necessary and as abstract as possible. So, if a specific user is unable to log in but all other users are unaffected, the bug report should include that information. Generally, a bug report should contain all relevant information required to reproduce the described fault state. If, on the other hand, an observed problem goes beyond a specific scenario, then the debugging done by the bug reporter should be as advanced as possible so as to make the report as abstract as possible.

The less ‘research’ required on the developer’s end, the better. And even though this might seem obvious, it is far from trivial in many cases. Oftentimes, improving a bug report beyond a mere description of a specific failing scenario requires significant additional work.

For example, it might be difficult to determine the common denominator of a series of failing scenarios. Or it might be difficult to find any other failing scenario, even though it is intuitively clear that the one that actually failed is probably not unique in that regard. Or it might be that a variety of failing scenarios are not reducible to a single underlying common property but rather to several. One scenario failed because of property A and another one failed because of property B. Then a corresponding bug report would include a reference to the disjunction of A and B. Or it might be that the underlying property of a failure is not easily discernible. Then it is up to the bug reporter to isolate it to the best of their knowledge, even if that requires substantially more effort than documenting a single failing example scenario. (I avoid the term “root cause” here because finding the root cause of some unexpected behaviour might go significantly beyond the scope of a bug report.)
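One way to approach the common-denominator problem programmatically is to intersect the properties of all known failing scenarios. A sketch, assuming each scenario can be described as a flat set of property/value pairs (the scenario data here is invented):

```python
# Given failing scenarios described by properties, find the properties
# shared by ALL of them -- a candidate common denominator for a more
# abstract bug report. Illustrative sketch; real scenarios are richer.

failing_scenarios = [
    {"browser": "Firefox", "locale": "de", "role": "admin"},
    {"browser": "Firefox", "locale": "en", "role": "guest"},
    {"browser": "Firefox", "locale": "de", "role": "guest"},
]

def common_properties(scenarios):
    """Return the key/value pairs present in every failing scenario."""
    shared = set(scenarios[0].items())
    for s in scenarios[1:]:
        shared &= set(s.items())
    return dict(shared)

print(common_properties(failing_scenarios))  # {'browser': 'Firefox'}
```

Note that if the failures really stem from a disjunction of properties (A or B), the intersection comes back empty, which is itself a useful signal that a single common denominator does not exist.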

Another consequence of a neglected type/instance distinction at the test case level is that your progress metrics are affected. That’s because if you do not know how to count test cases, you definitely don’t know the total number of test cases in your backlog (or, for that matter, the number of finished test case executions). But then you can’t know the test progress either, since that is determined based on tracking executions against the total number of cases to be executed.
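How directly progress depends on a consistent counting rule can be seen in a small calculation (all numbers invented):

```python
# Test progress = executed cases / total cases. If the counting rule
# changes mid-project (rows counted as cases vs. as instances), the
# metric changes too, even though the work done is identical.

def progress(executed: int, total: int) -> float:
    """Fraction of planned test case executions completed."""
    return executed / total

# Same sheet, two self-consistent counting rules:
# Rule A: each row is one concrete case   -> 120 total, 60 executed.
# Rule B: each row is one of 3 instances
#         of an abstract case             ->  40 total, 20 executed.
print(progress(60, 120))  # 0.5
print(progress(20, 40))   # 0.5 -- consistent rules agree

# A mixed rule (abstract total, concrete executions) misleads:
print(progress(60, 40))   # 1.5 -- "150% done" signals a counting error
```

Either rule works on its own; the metric only breaks when executions are counted under one rule and the backlog under another.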

Tooling can worsen or improve the situation. For instance, if the tool you’re using distinguishes test step logic from test data (e.g., by offering distinct input fields), then it’s far less likely that the two get mixed up. Hence, a tool might improve test case creation and thereby other areas of testing as well (execution, planning, steering, reporting, etc.).

A very popular ‘test management tool’ is Excel. People using Excel to keep track of their test cases tend to say—implicitly, that is—that one row in their sheet represents one test case. That might be correct. But there are other options. And there are probably also other testers on that project who also use that tool but interpret rows differently. For example, a row could also be (i) a concrete representative of an abstract test case; (ii) a test case step; (iii) a test suite; or (iv) a test charter. So, there are at least five different and, seen individually, plausible interpretations of the relationship between Excel rows and test cases.
