#### N1: Choose Descriptive Names

Don't be too quick to choose a name. Make sure the name is descriptive. Remember that meanings tend to drift as software evolves, so frequently reevaluate the appropriateness of the names you choose.

This is not just a "feel-good" recommendation. Names in software are 90 percent of what makes software readable. You need to take the time to choose them wisely and keep them relevant. Names are too important to treat carelessly.

Consider the code below. What does it do? If I show you the code with well-chosen names, it will make perfect sense to you, but like this it's just a hodge-podge of symbols and magic numbers.
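A made-up fragment in that spirit (not the original example) shows the problem:

```java
// Made-up fragment: cryptic names and magic numbers hide the intent.
class Obscure {
    public int x(int q, int[] z) {
        int r = 0;
        for (int i = 0; i < z.length; i++) {
            if (z[i] % 7 == q) {   // why 7? what is q?
                r += z[i] * 4;     // why 4?
            }
        }
        return r;
    }
}
```

Give `x`, `q`, `z`, and `r` names that state their intent, and turn `7` and `4` into named constants, and the same calculation explains itself.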
# Tests

#### T1: Insufficient Tests

How many tests should be in a test suite? Unfortunately, the metric many programmers use is "That seems like enough." A test suite should test everything that could possibly break. The tests are insufficient so long as there are conditions that have not been explored by the tests or calculations that have not been validated.
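As an invented sketch of what "insufficient" looks like, the suite below exercises only two of the three conditions in the code it tests:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Invented example: the production logic has three conditions,
// but the tests below explore only two of them.
public class DiscountTest {
    // Stand-in production method: 10% off orders over 100, 20% off orders over 500.
    static double discount(double total) {
        if (total > 500) return total * 0.20;
        if (total > 100) return total * 0.10;
        return 0.0;
    }

    @Test public void noDiscountForSmallOrders() { assertEquals(0.0, discount(50), 0.001); }
    @Test public void tenPercentOverOneHundred() { assertEquals(20.0, discount(200), 0.001); }

    // The "> 500" condition is never explored, so this suite is insufficient.
}
```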
#### T2: Use a Coverage Tool!

Coverage tools report gaps in your testing strategy. They make it easy to find modules, classes, and functions that are insufficiently tested. Most IDEs give you a visual indication, marking lines that are covered in green and those that are uncovered in red. This makes it quick and easy to find `if` or `catch` statements whose bodies haven't been checked.
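For example, a coverage report makes a gap like the following hard to miss (the class is invented for illustration):

```java
// Invented example: line coverage exposes the untested branch.
public class PortParser {
    private static final int DEFAULT_PORT = 8080;

    public int readPort(String value) {
        try {
            return Integer.parseInt(value);   // covered (green): the happy-path tests exercise it
        } catch (NumberFormatException e) {
            return DEFAULT_PORT;              // uncovered (red): no test passes a malformed value
        }
    }
}
```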
#### T3: Don't Skip Trivial Tests

They are easy to write and their documentary value is higher than the cost to produce them.
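Tests like these (an invented example) cost almost nothing to write, yet they document the conversion rule more plainly than a comment would:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Invented example: trivial tests that double as documentation.
public class TemperatureTest {
    static double celsiusToFahrenheit(double celsius) { return celsius * 9.0 / 5.0 + 32.0; }

    @Test public void freezingPoint() { assertEquals(32.0, celsiusToFahrenheit(0), 0.001); }
    @Test public void boilingPoint()  { assertEquals(212.0, celsiusToFahrenheit(100), 0.001); }
}
```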
#### T4: An Ignored Test Is a Question about an Ambiguity

Sometimes we are uncertain about a behavioral detail because the requirements are unclear. We can express our question about the requirements as a test that is commented out, or as a test that is annotated with `@Ignore`. Which you choose depends upon whether the ambiguity is about something that would compile or not.
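A sketch of both forms, assuming JUnit 4 (where `@Ignore` lives); the scenario is invented:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Ignore;
import org.junit.Test;

public class RoundingTest {
    static long toCents(double euros) { return Math.round(euros * 100); }

    // The code compiles, but the requirement is ambiguous, so the question is
    // recorded as an ignored test rather than silently deleted.
    @Ignore("Spec unclear: should 0.005 round up to 1 cent or down to 0?")
    @Test
    public void halfACentRoundsUp() {
        assertEquals(1, toCents(0.005));
    }

    // If the ambiguity is about an API that doesn't exist yet (and so wouldn't
    // compile), the question has to live in a commented-out test instead:
    //
    // @Test
    // public void toCentsAcceptsLocaleSpecificFormatting() { ... }
}
```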
#### T5: Test Boundary Conditions

Take special care to test boundary conditions. We often get the middle of an algorithm right but misjudge the boundaries.
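A minimal sketch (invented example) with the tests clustered on the edges of the valid range, where off-by-one mistakes hide:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Invented example: the interesting cases sit exactly on the boundaries.
public class ValidAgeTest {
    static boolean isValidAge(int age) { return age >= 0 && age <= 120; }

    @Test public void justBelowLowerBound() { assertFalse(isValidAge(-1)); }
    @Test public void atLowerBound()        { assertTrue(isValidAge(0)); }
    @Test public void atUpperBound()        { assertTrue(isValidAge(120)); }
    @Test public void justAboveUpperBound() { assertFalse(isValidAge(121)); }
}
```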
#### T6: Exhaustively Test Near Bugs

Bugs tend to congregate. When you find a bug in a function, it is wise to do an exhaustive test of that function. You'll probably find that the bug was not alone.
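For example (an invented sketch), once one month turns out to be wrong, it is cheap to sweep the entire input range instead of spot-checking:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Invented example: a bug was found for February, so every month of a
// non-leap year is checked rather than just a few samples.
public class DaysInMonthTest {
    static int daysInMonth(int month) {
        switch (month) {
            case 2:  return 28;
            case 4: case 6: case 9: case 11: return 30;
            default: return 31;
        }
    }

    private static final int[] EXPECTED = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

    @Test
    public void everyMonthOfANonLeapYear() {
        for (int month = 1; month <= 12; month++) {
            assertEquals("month " + month, EXPECTED[month - 1], daysInMonth(month));
        }
    }
}
```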
#### T7: Patterns of Failure Are Revealing

Sometimes you can diagnose a problem by finding patterns in the way the test cases fail. This is another argument for making the test cases as complete as possible. Complete test cases, ordered in a reasonable way, expose patterns.

As a simple example, suppose you noticed that all tests with an input larger than five characters failed? Or what if any test that passed a negative number into the second argument of a function failed? Sometimes just seeing the pattern of red and green on the test report is enough to spark the "Aha!" that leads to the solution. Look back at page 267 to see an interesting example of this in the `SerialDate` example.
#### T8: Test Coverage Patterns Can Be Revealing

Looking at the code that is or is not executed by the passing tests gives clues to why the failing tests fail.
#### T9: Tests Should Be Fast

A slow test is a test that won't get run. When things get tight, it's the slow tests that will be dropped from the suite. So do what you must to keep your tests fast.
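One common way to do that (sketched below with an invented example) is to fake out slow collaborators such as the system clock, rather than sleeping in the test:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Invented example: injecting a clock keeps the test fast; no Thread.sleep() needed.
public class SessionTest {
    interface Clock { long nowMillis(); }

    static class Session {
        private final Clock clock;
        private final long startedAt;
        private final long timeoutMillis;

        Session(Clock clock, long timeoutMillis) {
            this.clock = clock;
            this.startedAt = clock.nowMillis();
            this.timeoutMillis = timeoutMillis;
        }

        boolean isExpired() { return clock.nowMillis() - startedAt >= timeoutMillis; }
    }

    @Test
    public void sessionExpiresAfterTimeout() {
        long[] now = {0};
        Clock fake = () -> now[0];       // fake clock under the test's control
        Session session = new Session(fake, 30_000);

        now[0] = 30_000;                 // "advance" half a minute instantly
        assertTrue(session.isExpired());
    }
}
```

The test asserts thirty seconds of behavior yet runs in microseconds.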
---