We’ve said that tests are critical to succeeding with continuous delivery, but also that they can become a massive maintenance burden. This brings us to our first practice:
Practice #1: Write tests that provide a return on investment
Every test we write should provide enough value to offset the maintenance it requires. There are many test smells, including useless, unnecessary, duplicated, and confusing tests, all of which detract from the value a test provides. For a continuous delivery pipeline, however, the two real killers are fragility and inconsistency.
Every test failure brings our continuous delivery pipeline to a screeching halt. As such, we only want our tests to fail when they’re catching a real bug. Preventing fragility and inconsistency is key.
Tests that constantly fail do so because they have too many reasons to fail. The lowest-maintenance tests that still provide value have one, and only one, reason to fail:
- A single assertion, or
- A single verified expectation on a mock, or
- A single expected exception
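To make this concrete, here is a minimal sketch of the three shapes above. `PriceCalculator` is a hypothetical class invented for illustration; each test method has exactly one way to fail.

```java
// Hypothetical class under test, invented for this example.
class PriceCalculator {
    int totalCents(int unitCents, int quantity) {
        if (quantity < 0) throw new IllegalArgumentException("negative quantity");
        return unitCents * quantity;
    }
}

public class PriceCalculatorTest {
    // One reason to fail: a single assertion on the happy path.
    static void multipliesUnitPriceByQuantity() {
        if (new PriceCalculator().totalCents(250, 4) != 1000)
            throw new AssertionError("expected 4 * 250 = 1000 cents");
    }

    // One reason to fail: the expected exception is not thrown.
    static void rejectsNegativeQuantity() {
        try {
            new PriceCalculator().totalCents(250, -1);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // the only way this test fails is a missing exception
        }
    }

    public static void main(String[] args) {
        multipliesUnitPriceByQuantity();
        rejectsNegativeQuantity();
        System.out.println("ok");
    }
}
```

When one of these fails, the failure message points at exactly one behavior, so diagnosing it takes seconds rather than minutes.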
Multiple assertions or expectations are appropriate at times, but we must weigh the benefit against the maintenance cost. The more reasons a test has to fail, the more often we have to fix it. It also makes the test more complicated, so it takes longer to decipher failures and figure out how to resolve them.
Roy Osherove’s The Art of Unit Testing provides excellent guidance on how to avoid fragile tests. I find overuse of mocks is the most common source of fragility. In general, you should only verify commands – methods with side effects – never queries (e.g., getters).
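A sketch of the command/query distinction, using a hand-rolled spy (with Mockito this would be `verify(log).record(...)`). `AuditLog` and `AccountService` are hypothetical names for illustration; note that the test verifies only the command, `record`, and never the query, `entryCount`.

```java
// Hypothetical collaborator with one command and one query.
interface AuditLog {
    void record(String event);   // command: has a side effect -> verify it
    int entryCount();            // query: just returns state -> don't verify it
}

// Hypothetical class under test.
class AccountService {
    private final AuditLog log;
    AccountService(AuditLog log) { this.log = log; }
    void close(String accountId) { log.record("closed " + accountId); }
}

// A hand-rolled spy that captures the command's argument.
class SpyAuditLog implements AuditLog {
    String lastEvent;
    public void record(String event) { lastEvent = event; }
    public int entryCount() { return 0; }
}

public class AccountServiceTest {
    public static void main(String[] args) {
        SpyAuditLog spy = new SpyAuditLog();
        new AccountService(spy).close("acct-42");
        // Verify the command only; asserting on queries couples the test
        // to internal state and makes it fragile.
        if (!"closed acct-42".equals(spy.lastEvent))
            throw new AssertionError("expected the close to be recorded");
        System.out.println("ok");
    }
}
```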
Another very specific cause of fragility is hidden dependencies. Setting properties in tests via Java reflection creates dependencies your IDE typically won’t find. Any change to the property, even renaming it, will break the test. It is far preferable to inject the dependency through a constructor or setter instead.
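For instance, with constructor injection the dependency is visible in the class’s signature, so renaming or removing it is caught at compile time rather than as a runtime test failure. `ReportService` and `Clock` are hypothetical names for illustration:

```java
// Hypothetical dependency: the real implementation reads the system clock.
class Clock {
    long now() { return System.currentTimeMillis(); }
}

// The dependency is declared in the constructor, not buried in a field
// that tests would have to reach via reflection.
class ReportService {
    private final Clock clock;
    ReportService(Clock clock) { this.clock = clock; }
    String stamp() { return "report@" + clock.now(); }
}

public class ReportServiceTest {
    public static void main(String[] args) {
        // No reflection needed: a fixed fake goes in through the constructor.
        Clock fixed = new Clock() { long now() { return 1000L; } };
        String stamp = new ReportService(fixed).stamp();
        if (!stamp.equals("report@1000")) throw new AssertionError(stamp);
        System.out.println("ok");
    }
}
```

If `clock` is ever renamed or replaced, the compiler flags every affected test immediately, instead of the reflection-based lookup failing mysteriously at runtime.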
Tests that randomly pass or fail when the system hasn’t changed are inconsistent. These are extremely frustrating since they halt our pipeline for seemingly no reason. As such, they are often deleted or ignored, wasting the time spent creating them. To avoid inconsistency:
- Avoid tests that depend on ordering
- Avoid random data
- Avoid tests that share state / fixture
Tests that must run in a certain order to pass are a time bomb, and their failures take great effort to diagnose. Just say no.
Random data at certain testing levels can be an appropriate way to increase coverage. For most developer-written tests, however, it’s just unnecessarily dangerous. Instead, use a range of concrete values. In Java, you could use JUnit parameterized tests or theories, or step it up a notch and use a cutting-edge framework like Spock.
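Even without a framework, a range of concrete values can be expressed as a simple data table, which is essentially what JUnit parameterized tests give you with better reporting. `leapYear` is a hypothetical method under test; each row pins one deterministic case, so a failure points at exactly one input and reproduces every run.

```java
public class LeapYearTest {
    // Hypothetical method under test.
    static boolean leapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        // Concrete, deterministic inputs covering the interesting boundaries,
        // instead of random years that fail unreproducibly.
        int[]     years    = {1999, 2000, 2024, 1900, 2100};
        boolean[] expected = {false, true, true, false, false};
        for (int i = 0; i < years.length; i++) {
            if (leapYear(years[i]) != expected[i])
                throw new AssertionError("wrong answer for year " + years[i]);
        }
        System.out.println("ok");
    }
}
```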
Finally, each test should set up the state it requires. Sharing state leads to cross-talk between tests, resulting in spurious failures. If there’s no other choice, wrap setup and teardown logic in transactions so each test starts clean.
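The simplest way to avoid shared fixture is a factory that hands each test its own fresh state. This sketch uses a plain `List` as a stand-in fixture (`newCart` is a hypothetical helper); the second test cannot be polluted by the first because neither touches a shared field.

```java
import java.util.ArrayList;
import java.util.List;

public class CartTest {
    // Fixture factory: every caller gets brand-new state, never a shared static.
    static List<String> newCart() { return new ArrayList<>(); }

    static void addingAnItemGrowsTheCart() {
        List<String> cart = newCart();   // fresh state for this test only
        cart.add("book");
        if (cart.size() != 1) throw new AssertionError("expected one item");
    }

    static void aNewCartStartsEmpty() {
        List<String> cart = newCart();   // unaffected by the test above
        if (!cart.isEmpty()) throw new AssertionError("expected an empty cart");
    }

    public static void main(String[] args) {
        // Passes in either order because no state leaks between tests.
        addingAnItemGrowsTheCart();
        aNewCartStartsEmpty();
        System.out.println("ok");
    }
}
```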
As a closing note, be careful not to write unnecessary tests when building up your test suite. This will safeguard your continuous delivery pipeline. Testing getters and setters is often the result of code-coverage fanaticism. Your time is better spent elsewhere – test what really matters in your system.
“Prefer pragmatism over dogmatic metrics” – Neal Ford
Next week we’ll define the audiences our tests serve and the role that plays in a continuous delivery pipeline. Hope to see you there!