Tests
T1: Insufficient Tests
How many tests should be in a test suite? Unfortunately, the metric many programmers use is "That seems like enough." A test suite should test everything that could possibly break. The tests are insufficient so long as there are conditions that have not been explored by the tests or calculations that have not been validated.
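As an illustration, a suite like the sketch below (the PriceCalculator class and its discounted method are hypothetical) looks plausible, yet leaves whole conditions unexplored:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    // Only the ordinary case is exercised; the suite "seems like enough",
    // but nothing validates a zero price, a 100% discount, or a negative rate.
    // Until those conditions are tested, the suite is insufficient.
    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, new PriceCalculator().discounted(100.0, 0.10), 0.001);
    }
}
```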
T2: Use a Coverage Tool!
Coverage tools report gaps in your testing strategy. They make it easy to find modules, classes, and functions that are insufficiently tested. Most IDEs give you a visual indication, marking lines that are covered in green and those that are uncovered in red. This makes it quick and easy to find if or catch statements whose bodies haven't been checked.
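As a sketch, consider this hypothetical parser: a suite that only feeds it well-formed input leaves the if body and the catch body below unexecuted, and a coverage tool will flag them immediately:

```java
public class QuantityParser {
    public int parse(String raw) {
        if (raw == null || raw.isEmpty()) {
            return 0;                              // red until a test passes null or ""
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            return 0;                              // red until a test passes "abc"
        }
    }
}
```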
T3: Don't Skip Trivial Tests
They are easy to write and their documentary value is higher than the cost to produce them.
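A trivial test might look like the sketch below (assuming a hypothetical Money value class); it takes seconds to write and documents the expected behavior plainly:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MoneyTest {
    // Trivial, but it documents that amounts are stored in cents
    // and that equality is by value, not by reference.
    @Test
    public void zeroFrancsIsZeroCents() {
        assertEquals(0, Money.francs(0).cents());
    }

    @Test
    public void equalAmountsAreEqual() {
        assertEquals(Money.francs(5), Money.francs(5));
    }
}
```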
T4: An Ignored Test Is a Question about an Ambiguity
Sometimes we are uncertain about a behavioral detail because the requirements are unclear. We can express our question about the requirements as a test that is commented out, or as a test annotated with @Ignore. Which you choose depends upon whether the ambiguity is about something that would compile or not.
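With JUnit 4, for instance, the open question can be written down directly; the test name and the @Ignore message carry the question (the Order class and its behavior below are only placeholders):

```java
import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderTest {
    // The requirement doesn't say whether a cancelled order keeps its
    // reserved stock. The compiling-but-ignored test records the question.
    @Ignore("Open question: should cancellation release reserved stock?")
    @Test
    public void cancellingAnOrderReleasesReservedStock() {
        Order order = new Order("SKU-1", 3);
        order.cancel();
        assertEquals(0, order.reservedStock());
    }

    // If the ambiguous behavior wouldn't even compile yet (e.g. the method
    // doesn't exist), the question has to live in a commented-out test instead:
    // @Test
    // public void cancellingAnOrderNotifiesTheWarehouse() { ... }
}
```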
T5: Test Boundary Conditions
Take special care to test boundary conditions. We often get the middle of an algorithm right but misjudge the boundaries.
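A sketch of boundary tests for a hypothetical Calendars.isLeapYear implementation: the interesting cases sit exactly on the century and 400-year edges, not in the middle of the range:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class LeapYearTest {
    // The middle of the algorithm (ordinary years) is easy to get right;
    // the boundaries between the divisibility rules are where bugs hide.
    @Test public void ordinaryLeapYear()      { assertTrue(Calendars.isLeapYear(2004));  }
    @Test public void yearBeforeBoundary()    { assertFalse(Calendars.isLeapYear(1999)); }
    @Test public void centuryIsNotLeap()      { assertFalse(Calendars.isLeapYear(1900)); }
    @Test public void fourHundredYearIsLeap() { assertTrue(Calendars.isLeapYear(2000));  }
}
```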
T6: Exhaustively Test Near Bugs
Bugs tend to congregate. When you find a bug in a function, it is wise to do an exhaustive test of that function. You'll probably find that the bug was not alone.
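One way to do that exhaustive sweep is a parameterized test that walks the whole input neighborhood of the original bug. The WordWrapper below and its expected output are hypothetical; the point is the sweep, not the specific wrapping rule:

```java
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class WordWrapperNearBugTest {
    // A bug was found wrapping exactly at the column width; sweep every
    // length around that width instead of testing only the one failing case.
    private final String input;
    private final String expected;

    public WordWrapperNearBugTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> cases() {
        return Arrays.asList(new Object[][] {
            { "abcd",       "abcd" },           // one under the width of 5
            { "abcde",      "abcde" },          // exactly at the width
            { "abcdef",     "abcde\nf" },       // one over the width
            { "abcdefghij", "abcde\nfghij" },   // exactly two full lines
        });
    }

    @Test
    public void wrapsAtWidthFive() {
        assertEquals(expected, WordWrapper.wrap(input, 5));
    }
}
```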
T7: Patterns of Failure Are Revealing
Sometimes you can diagnose a problem by finding patterns in the way the test cases fail. This is another argument for making the test cases as complete as possible. Complete test cases, ordered in a reasonable way, expose patterns.
As a simple example, suppose you noticed that all tests with an input larger than five characters failed? Or what if any test that passed a negative number into the second argument of a function failed? Sometimes just seeing the pattern of red and green on the test report is enough to spark the "Aha!" that leads to the solution. Look back at page 267 to see an interesting example of this in the SerialDate example.
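A sketch of what such an ordered suite might look like, using a hypothetical Encoder: run it and the report turns red at exactly six characters, which points straight at a length-five buffer somewhere:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class EncoderPatternTest {
    // Ordered by input length so the test report itself draws the picture:
    // green, green, red, red -- the break at length six is the clue.
    @Test public void encodesOneChar()   { assertRoundTrips("a"); }
    @Test public void encodesFiveChars() { assertRoundTrips("abcde"); }
    @Test public void encodesSixChars()  { assertRoundTrips("abcdef"); }
    @Test public void encodesTenChars()  { assertRoundTrips("abcdefghij"); }

    private void assertRoundTrips(String text) {
        assertEquals(text, Encoder.decode(Encoder.encode(text)));
    }
}
```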
T8: Test Coverage Patterns Can Be Revealing
Looking at the code that is or is not executed by the passing tests gives clues to why the failing tests fail.
T9: Tests Should Be Fast
A slow test is a test that won't get run. When things get tight, it's the slow tests that will be dropped from the suite. So do what you must to keep your tests fast.
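One way to keep an eye on this is a per-test time budget declared in the test class itself, here with JUnit 4's Timeout rule (the one-second limit and the InvoiceServiceTest class are only illustrative):

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class InvoiceServiceTest {
    // Fail any single test that takes longer than one second, so slow tests
    // are flagged the moment they creep in rather than silently tolerated
    // and, eventually, dropped from the suite.
    @Rule
    public Timeout perTestBudget = Timeout.seconds(1);

    @Test
    public void totalsAnEmptyInvoiceQuickly() {
        // ... exercise an in-memory fake rather than a real database ...
    }
}
```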