In enterprise software development, initiatives to maintain, improve, and measure the quality of deliverables are common and welcome, and sometimes smart and judicious. Yet they can also solve nothing at all because they address the wrong problem: the measurement tools are misunderstood or inappropriate, or even diverted from their purpose to build a better image, whether in good conscience or out of ignorance.
This article discusses the misuse of the coverage percentage computed when running unit tests against a product's production code, both as a performance indicator and as an acceptance criterion, in order to raise awareness among development teams about maintaining quality and productivity.
“To improve the quality of the software code, we will ask developers to systematically cover their productive code with 82.5% unit testing.”
Such a measure must be strongly questioned: it is a textbook case of a metric misused as a performance indicator to treat a deep problem simplistically.
Another example of a misused metric is the number of lines of code produced, or the number of commits made by a developer, as an indicator of daily performance.
It is therefore important to correctly define the notion of software quality, the value of writing unit tests early, and the productivity of an active software development team.
In Complex Thinking, a development team is considered a complex adaptive system. When an extrinsic constraint is imposed on such a team, it adapts and games the performance indicator so that it can keep focusing on delivering value. Indeed, it is common to see acceptance or integration tests substituted for unit tests once the minimum required coverage percentage is reached; in that case, the rapid feedback offered by unit tests is lost, and regressions caused by micro-cracks take longer to surface. Another pattern is concentrating the testing effort on parts of the production code that have no complexity and no real regression risk, simply to raise the coverage percentage; as soon as the required minimum is reached, the effort is released, to the detriment of the most important parts, those with the greatest impact on the delivered functionality.
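A minimal, hypothetical sketch of that second pattern (the article names no language or domain, so the invoice example below is invented for illustration): every line counts equally toward the coverage percentage, so testing the trivial accessor inflates the metric while the genuinely risky business rule ships untested.

```python
# Hypothetical illustration: both functions contribute equally to line
# coverage, but only one of them carries real regression risk.

class Invoice:
    def __init__(self, customer: str):
        self.customer = customer

    def get_customer(self) -> str:
        # Trivial accessor: no branching, no risk of regression.
        return self.customer


def late_payment_penalty(amount: float, days_late: int) -> float:
    # Business rule with real regression risk: tiered penalty rates.
    if days_late <= 0:
        return 0.0
    rate = 0.05 if days_late <= 30 else 0.10
    return amount * rate


def test_get_customer():
    # Easy coverage points: this inflates the metric without protecting
    # anything that is actually likely to break.
    assert Invoice("ACME").get_customer() == "ACME"

# Meanwhile late_payment_penalty, where the regressions actually hide,
# has no test at all, yet the report shows a comfortable percentage.
```

A coverage tool will happily report the accessor as 100% covered; nothing in the number reveals that the tiered-rate logic is unguarded.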
Code coverage is only the visible surface of the real problem we are trying to solve.
Applying the 5 Whys method to find the root cause of the problem, we can conclude that software quality is not measured by the number of lines of code exercised by the tests, but by the quality of the tests themselves. A test that covers part of the code is useless if it can never raise a regression alert.
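To make that concrete, here is a small, invented example: both tests below execute `apply_discount`, so a coverage tool reports the function as fully covered either way, yet only the second test can ever signal a regression.

```python
# Hypothetical example: identical coverage, very different protection.

def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a discount rate (e.g. 0.2 for 20%)."""
    return price * (1 - rate)


def test_covers_but_cannot_fail():
    # Executes the code (counts toward coverage) but asserts nothing:
    # any regression in apply_discount goes completely unnoticed.
    apply_discount(100.0, 0.2)


def test_actually_guards_behavior():
    # Same coverage, but this test fails if the behavior changes.
    assert apply_discount(100.0, 0.2) == 80.0
```

The coverage report cannot distinguish between the two; only reading the tests reveals which one is worth anything.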
The #Craftsmanship and #XP mindsets promote Test-Driven Development (TDD) and discourage writing a single line of production code in the absence of a failing unit test.
TDD is much more than just testing.
The TDD technique documents the code while specifying its expected behavior. It lets you develop in small increments, in complete safety and with rapid feedback. An underestimated benefit of TDD is its contribution to software design as the code is being written. Software craftsmen who practice TDD do not need a coverage constraint to produce quality code, because coverage is naturally very high; consequently, any line of code not covered by the tests is probably dead code.
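As a minimal sketch of the red-green-refactor cycle (the FizzBuzz kata is chosen here purely as a neutral illustration, not taken from the article): the tests are written first and fail, then just enough production code is added to make them pass, and the tests then serve as both specification and safety net.

```python
import unittest

# Step 1 (red): the tests below are written BEFORE fizzbuzz exists;
# running them fails, which is the signal to write production code.
# Step 2 (green): the implementation is the minimum needed to pass.
# Step 3 (refactor): with the tests as a safety net, the code can be
# reshaped freely; any regression is reported within seconds.

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


class FizzBuzzTest(unittest.TestCase):
    # Each test name documents a piece of the expected behavior.
    def test_multiples_of_three_say_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five_say_buzz(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both_say_fizzbuzz(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_numbers_echo_themselves(self):
        self.assertEqual(fizzbuzz(7), "7")
```

Run with `python -m unittest`. Note how the test class reads as a specification of the behavior, which is exactly the documentation effect described above.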
If you want to learn more about TDD, I highly recommend the new videos from CleanCoders, especially the latest series between Uncle Bob and Sandro Mancuso.
Otherwise, come and join a Software Craftsmanship community near you, where you will meet other enthusiasts and experts.