[24 Sep 2011] But the Tests Ran...

I write this as a bit of a penance and a reminder after a difficult day as a software engineer. A great set of automated tests is critical to building great software. For a test suite to be considered “great,” it must be judged by more than quantitative metrics like the total number of tests or code coverage percentage. Greatness is also defined by the quality of the tests as judged by the people who know the code and the features.

Greatness can breed confidence. Confidence to refactor. Confidence to release and deploy.

With all this confidence around, it’s easy to make a different kind of mistake that all that greatness can’t protect you from: a poor or misunderstood design choice. Sometimes we choose the less-than-optimal path on purpose and stow it away as accepted technical debt. Other times, the sub-optimal choice happens without intention and becomes a problem that lies in wait for someone to uncover. In either case, these choices can produce scenarios that poke holes in the “great” test suite, and the confidence that suite inspires won’t let you see the side effects coming. The end result is bugs in the next release, leaving you to wonder how the test suite passed without uncovering the problem before the release date.
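As a minimal sketch of how this can play out, consider the following hypothetical Python example (all names here are invented for illustration, not from any real codebase). A caching design choice, returning a shared mutable dict, is exercised by a test that verifies exactly what it was written to verify, so the suite stays green while the real side effect goes completely untested:

```python
# Hypothetical sketch: a design choice the test suite never probes.

_cache = {}

def get_user_prefs(user_id, db):
    """Return cached preferences, loading from the database on a miss."""
    if user_id not in _cache:
        _cache[user_id] = db.load_prefs(user_id)
    return _cache[user_id]  # design choice: every caller shares one mutable dict

def test_get_user_prefs_caches():
    class FakeDB:
        calls = 0
        def load_prefs(self, user_id):
            self.calls += 1
            return {"theme": "dark"}

    db = FakeDB()
    assert get_user_prefs(1, db) == {"theme": "dark"}
    get_user_prefs(1, db)
    assert db.calls == 1  # green: the cache behaves exactly as the test expects

# The untested side effect: any caller that mutates the returned dict,
#   prefs = get_user_prefs(1, db); prefs["theme"] = "light"
# silently corrupts the cache for every other caller. No test covers this,
# so the suite passes and the bug ships with the next release.
```

The test is not wrong; it simply encodes the same assumptions as the design it verifies, which is exactly the blind spot described above.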

The general warning here: don’t become so reliant on your tests that you treat them as the ultimate truth about the quality of what you’re building.