I’m constantly encouraging developers around me to embrace test-driven development and automated unit testing. However, there’s one very important thing I’ve neglected to include in my evangelizing: a definition for what constitutes a good unit test.
A big problem that I think a lot of developers run into is that they’re writing tests but aren’t realizing any value from them. Development takes longer because they have to write tests, and when requirements change it takes longer because they have to refactor tests. Maintenance and warranty work is slower, too, because of the additional upkeep from failing tests created with every little change.
These problems mostly exist because of bad unit tests that are written due to insufficient knowledge of what makes a good test. Here’s a list of properties for a good unit test, taken from *The Art of Unit Testing* (a short sketch of a test that checks these boxes follows the list):
- Able to be fully automated
- Has full control over all the pieces running (Use mocks or stubs to achieve this isolation when needed)
- Can be run in any order if part of many other tests
- Runs in memory (no DB or File access, for example)
- Consistently returns the same result (Running the same test always gives the same outcome, so no random numbers, for example; save those for integration or range tests)
- Runs fast
- Tests a single logical concept in the system
- Readable
- Maintainable
- Trustworthy (when you see its result, you don’t need to debug the code just to be sure)
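
To make the list concrete, here’s a minimal sketch of a test that meets these criteria. The `InvoiceService` and `RateProvider`-style names are hypothetical, invented for illustration (and the book’s examples are not in Python); the point is that the dependency is stubbed, nothing touches a database or the file system, the result is deterministic, and the test verifies a single logical concept.

```python
import unittest
from unittest.mock import Mock


class InvoiceService:
    """Hypothetical class under test: applies a tax rate to a subtotal."""

    def __init__(self, rate_provider):
        self._rate_provider = rate_provider  # injected dependency, easy to stub

    def total(self, subtotal):
        rate = self._rate_provider.tax_rate()
        return round(subtotal * (1 + rate), 2)


class InvoiceServiceTests(unittest.TestCase):
    def test_total_applies_tax_rate_to_subtotal(self):
        # Stub the dependency: no database, no file, no network.
        rate_provider = Mock()
        rate_provider.tax_rate.return_value = 0.10  # fixed value => deterministic

        service = InvoiceService(rate_provider)

        # One logical concept: the subtotal is marked up by the tax rate.
        self.assertEqual(service.total(100.00), 110.00)


if __name__ == "__main__":
    unittest.main()
```

Because everything runs in memory against a stub, the test is fast, order-independent, and trustworthy: if it fails, the arithmetic in `total` is the only plausible culprit.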
This is a great list that you can use to gut-check your unit tests. If a test meets all of these criteria, you’re probably in good shape. If you’re violating some of these, refactoring is probably required in your test or in your design, particularly in the cases of “has full control over all the pieces running,” “runs in memory,” and “tests a single logical concept in the system.”
When I see developers struggling with unit tests and test-driven development, it’s usually due to test “backfilling” on a poorly-designed or too-complex method. They don’t see value because, in their minds, the work is already done and they’re having to do additional work just to get it unit tested. These methods and objects violate the single responsibility principle and sometimes have many different code paths and dependencies. That complexity makes writing tests hard because you have to do so much work upfront in terms of mocking and state preparation, and it’s really difficult to cover each and every code path. It also makes for fragile tests, because testing a specific part of a method relies on the test’s ability to successfully navigate the logic in the complex method; if an earlier part of the method changes, you now have to refactor unrelated tests to re-route them back to the code they’re testing. (Tip: Avoid problems like this by writing tests first!)
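
As a hedged sketch of that failure mode (all names below are hypothetical), compare a test that has to mock its way through a do-everything order method just to reach the discount logic with a test against the same logic after it has been extracted into its own unit:

```python
import unittest
from unittest.mock import Mock


def process_order_report(repo, mailer, order_id):
    """Does too much: loads, validates, computes a discount, and emails a report."""
    order = repo.load(order_id)             # dependency #1
    if not order["items"]:
        raise ValueError("empty order")     # unrelated branch the test must get past
    subtotal = sum(i["price"] for i in order["items"])
    discount = 0.05 if subtotal > 100 else 0.0
    mailer.send(order["email"], subtotal * (1 - discount))  # dependency #2
    return subtotal * (1 - discount)


class FragileBackfilledTest(unittest.TestCase):
    def test_discount_applied_over_100(self):
        # To exercise one line of discount logic we must stage the repo payload,
        # dodge the validation branch, and stub the mailer. Any change to the
        # loading or validation code breaks this test even though the discount
        # rule itself is untouched.
        repo = Mock()
        repo.load.return_value = {"email": "a@b.c", "items": [{"price": 150}]}
        mailer = Mock()
        self.assertEqual(process_order_report(repo, mailer, 42), 142.5)


def discount_rate(subtotal):
    """Single responsibility: the discount rule on its own."""
    return 0.05 if subtotal > 100 else 0.0


class FocusedTest(unittest.TestCase):
    def test_discount_rate_over_100(self):
        # No mocks, no state preparation: one logical concept, tested directly.
        self.assertEqual(discount_rate(150), 0.05)


if __name__ == "__main__":
    unittest.main()
```

Writing the focused test first tends to push the design toward the second shape, because the pain of staging mocks and state shows up before the complex method ever gets written.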
Whether you’re just getting started with unit tests or a grizzled veteran, you can benefit from using this list of criteria as a quality measuring stick for the tests you write. High-quality tests that verify implemented features will result in a stable application. Designs will become better and more maintainable because you’ll be able to modify specific functionality without affecting the surrounding system, just as you’re able to test it in isolation. You won’t have to worry about other developers breaking functionality you’ve added, and you won’t have to worry about breaking functionality they’ve added. You’ll be able to make a modification that affects the entire system with a high level of confidence. I’d venture to say that virtually every aspect of software development is better when you’ve got good tests.
