Bug reports are among the most crucial artefacts for software testers. Testers deal with them on a daily basis: they maintain them, use them for confirmation testing, and derive further actions from them. High-quality bug reports lead to smoother workflows because they prompt fewer questions from developers, ease the reproduction of failures, and make confirmation testing easier.
Bug Report Structure
Over the past few years, I have noticed two anti-patterns that occur in many software engineering projects.
Test Object Mismatch
Any bug report should clearly indicate its test object. The relevant test object should match the test object targeted by the bug backlog, where backlog delineation might be done on the basis of projects, issue titles, labels, or any other means that the role responsible for handling the backlog considers appropriate. A mismatch might happen if, for example, it is unclear what the test object is. It might also happen due to misclassification. This is especially relevant in contexts where classification happens at the component level, but it can occur at the system level too.
A common sub-variant of this anti-pattern is the creation of reports on failing tests. A failing test does not imply a bug in the test object; it could just as well be caused by a bug in the test implementation. In that case, one would usually create an issue for it, but only if it is sufficiently demarcated from bug reports, which concern only the test object. Otherwise, such issues would distort any statistics on product quality and, hence, lead to invalid interpretations.
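To illustrate the distinction, here is a minimal, hypothetical sketch (function names and numbers are invented): the test fails, yet the defect sits in the test implementation, not in the test object, so a bug report against the product would be misleading.

```python
def gross_price(net_price: float, vat_rate: float = 0.19) -> float:
    """Test object: adds VAT to a net price. This implementation is correct."""
    return round(net_price * (1 + vat_rate), 2)


def test_gross_price():
    # Bug in the test, not in the test object: the expected value was computed
    # with the wrong VAT rate (0.20 instead of 0.19), so the assertion fails
    # even though gross_price behaves as specified.
    assert gross_price(100.00) == 120.00
```

Such an issue belongs in a test-maintenance backlog, not among the bug reports on the product.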
Relatedly, even if a failing test, after careful preliminary root cause analysis, turns out to be caused by a bug in the application, the report itself should never be about the failing test. Of course, failed test executions might be cited as evidence for that bug or as hints toward possible causes. But the report itself should always focus on just the test object and how its behaviour deviated from what was expected.
Nominal Condition Ambiguity
Some aspects of bug reports are optional, others are mandatory. One of the most essential is the contrasting pair of ‘expected behaviour’ vs. ‘actual behaviour’. Arguably, this contrast is the core of any bug report, with most other aspects being more or less dispensable.
Sometimes, however, this contrast is missing. This, again, can happen for two distinct reasons. Behaviour expectations might be missing entirely; if that’s the case, issue creators might want to consider using templates instead of freestyle reports. Or behaviour expectations might lack substantiation. Suitable substantiations vary by project context, but typically they include stakeholder opinions, specification elements, user stories, or documentation. Evaluating substantiations also gives the person responsible for maintaining the backlog a chance to decide in cases of ambiguity (prioritization, bug vs. technical debt vs. desired behaviour, etc.) without consulting the issue creator or other stakeholders.
There is one exception that needs to be considered in this context: obvious deviations. Sometimes a good bug report just contains a single screenshot, for instance. In these cases, it might make sense to at least make transparent that you, as an issue creator, are aware of your duty to provide the contrast. So I’d still stick to the ‘expected vs. actual’ scheme in these cases and say something like ‘Expected behaviour: obvious.’ However, this approach requires special care and should be reserved for rare cases.
Reproduction and Workaround
Advice on the content of bug reports can be found in any good textbook, and there is no need to repeat it here. However, there are two aspects of bug reports that are essential yet, in most cases, only superficially described in the literature.
Reproduction Conditions
Test case definitions and bug reports alike meander in the open field between the very concrete and the very abstract. Test case execution is always very concrete, i.e. there is no way around providing your test object with specific input. The test case definition, however, should ensure that you still cover the breadth of possible things that could go wrong.
Once you have found a bug, however, you only know one thing: that this specific execution with these specific input parameters and this specific configuration failed against this specific testing environment. You may, of course, put that into a report and consider your job done. But then everyone touching the ticket will need to go through the same set of open questions, which is highly inefficient: What about other parameters? What about other configurations? What about other software versions? What about other environments? What about other test data?
This is why you, as an issue creator, should go from ‘concrete’ to ‘abstract’ here. Try to rule out as many factors as possible, thereby making the report as abstract as it can legitimately be. After all, ‘the bug’ is not that, say, app version 1.23 was unable to solve the calculation ‘1+1’ on environment x, but rather that, for example, all app versions deployed after some time t on any of the testing environments fail to implement the addition functionality correctly. That’s your bug, so your description should faithfully mirror it.
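One practical way to do this (a sketch, assuming a pytest-based suite; FakeCalculator and the inputs are invented stand-ins for the real test object) is to vary the suspected factors systematically and record which combinations fail:

```python
import pytest


@pytest.fixture
def calculator():
    # Invented stand-in for the real test object, with deliberately broken addition.
    class FakeCalculator:
        def evaluate(self, expression: str) -> int:
            left, right = expression.split("+")
            return int(left) - int(right)  # subtracts instead of adding
    return FakeCalculator()


@pytest.mark.parametrize("expression,expected", [
    ("1+1", 2),    # the originally observed failure
    ("2+3", 5),    # other operands: is the failure specific to '1+1'?
    ("10+7", 17),  # larger operands
    ("0+0", 0),    # edge case that happens to pass by coincidence
])
def test_addition(calculator, expression, expected):
    assert calculator.evaluate(expression) == expected
```

If most of these combinations fail, the report can legitimately speak about the addition functionality as a whole rather than about the single input ‘1+1’.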
This will help in finding the root cause. Plus, it will make your job as a tester easier. It may sometimes happen that if you ‘complain’ in a bug report that ‘1 + 1’ didn’t work, the developer fixes that bug by making the application return the specified result for that one calculation. But you had in mind that the addition algorithm should be fixed! Yeah, but that’s not what you reported. How could the developer have known? Please don’t be fooled by the fact that I’m working with toy examples here. In more complicated real-life scenarios, exactly this happens all the time.
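To make the toy example explicit, here is a hypothetical sketch of the two ‘fixes’ a developer might produce, depending on how the bug was reported:

```python
# Response to a report that only says "'1+1' returns the wrong result":
# the reported case is special-cased, the broken algorithm remains.
def add_fixed_narrowly(a: int, b: int) -> int:
    if a == 1 and b == 1:
        return 2
    return a - b  # the underlying defect is still there


# Response to a report that says "addition is implemented incorrectly for all
# operands": the actual defect is removed.
def add_fixed_properly(a: int, b: int) -> int:
    return a + b
```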
Back to the Concrete Level
So, what do you do after you have gone from ‘concrete’ to ‘abstract’? You return. You go back to ‘concrete’. The reason is very simple. You executed your test case with concrete combinations of test data, configurations, parameters, etc. Now, if your bug report contained just an abstract description of the unexpected behaviour (in fact, according to the above, the most abstract depiction you could think of), then everyone trying to reproduce this kind of behaviour (e.g., developers, designers, test managers, product owners) would need to convert your description into something executable. This can be very cumbersome. Of course, there are also cases where it is very straightforward. In any case, it will never be as easy to define easily applicable reproduction conditions as in the very moment when the bug is at hand.
When providing concrete reproduction conditions (for example, the ones used when the test failed initially, but also any other configuration that fails reliably), it is important to keep in mind that some conditions age. For instance, the failure might depend on specific test data in your system that is no longer present after the testing environments have been reset. This is particularly relevant in contexts where environment resets happen regularly, be it automatically or manually. In that case, there are essentially two options. The first is to think about reproduction conditions that are sufficiently concrete yet valid irrespective of resets and other time-dependent factors. The second, if this is not feasible for whatever reason, is to simply call the relevant developer, share your screen, and show them the unexpected behaviour live. If you do so, it makes sense to mention it in the report so that you remember it when you pick this bug up again after some time.
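For the first option, one pattern that tends to survive environment resets is to let the reproduction steps create the data they depend on, instead of pointing at records that merely happen to exist right now. A minimal sketch, with an invented API stub standing in for the real system:

```python
class CustomerApiStub:
    """Invented placeholder for the real system under test."""

    def __init__(self):
        self._customers = {}

    def create_customer(self, name):
        customer_id = len(self._customers) + 1
        self._customers[customer_id] = name
        return customer_id

    def rename_customer(self, customer_id, new_name):
        self._customers[customer_id] = new_name
        return True


def reproduce_rename_failure(api):
    # Step 1: create the required test data from scratch, so the steps stay
    # valid even after the environment has been wiped.
    customer_id = api.create_customer("Example Customer GmbH")
    # Step 2: trigger the behaviour described in the report.
    assert api.rename_customer(customer_id, "Example Customer AG")


reproduce_rename_failure(CustomerApiStub())
```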
Workaround
Workarounds are highly relevant for bug reports for a variety of reasons. The most important one is probably that they typically influence severity estimations. Unexpected behaviour for which there is a reasonable workaround is significantly less pressing than behaviour that lacks such a solution.
Workarounds are also relevant for testing itself. Sometimes a whole batch of test cases is blocked from execution if no workaround for a failing test exists. Therefore, it is much appreciated, both by product owners and project leads and by test managers and fellow testers, when workarounds are stated explicitly.
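In automated suites, one way to keep such blockages visible (a sketch, assuming pytest; the ticket ID and test names are invented) is to skip the blocked cases with an explicit reference to the blocking bug report:

```python
import pytest

# Invented ticket ID for illustration; the marker keeps the blockage visible in
# every test run until a workaround or a fix becomes available.
BLOCKED_BY_BUG_1234 = pytest.mark.skip(
    reason="Blocked by BUG-1234 (login fails); no workaround known yet"
)


@BLOCKED_BY_BUG_1234
def test_change_password():
    ...


@BLOCKED_BY_BUG_1234
def test_update_profile():
    ...
```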
As far as severity estimations and workaround options are concerned, two things need to be kept in mind. One is that the workaround needs to be reasonably obvious and simple. If there is a theoretical workaround but an end user would have trouble finding or applying it, then this piece of information, again, is highly relevant for subsequent estimations. The other is that the workaround should be available from the end user’s perspective. If a workaround requires intervention by, e.g., operations, or requires technical expertise or certain privileges, it might still be relevant for testing blockers, but it will be less relevant when it comes to severity.
