Code Coverage And Other Lies We Tell Ourselves

Gil Zilberfeld talks about complexity and silver bullets concerning code coverage and TDD.

So what’s up?

“Well, we’re really glad that you’re helping us with TDD, and we see a lot of progress. People not only continue to write tests, they get annoyed when others don’t.”

Excellent. And we’re meeting today because…?

“The tests people are writing are good, and they definitely increase the coverage, but we’re so far off our goal. What can we do to increase our coverage?”

I see. (Smiling on the outside, crying on the inside)

So I begin my coverage rant, including (but not limited to):

  • How coverage of 0% is definitely bad, but 100% doesn’t mean much
  • How you can get that 100% coverage in a very costly manner, and still it won’t mean much
  • How you can get that 100% coverage in a very cheap way (skip the asserts), and it won’t mean much (obviously); see the sketch after this list
  • How you can cover code completely, and it would be a big waste, since the feature is not used by the users
  • How everybody uses coverage tools, because frankly that’s the easiest thing to measure
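
To make the “skip the asserts” point concrete, here is a minimal sketch in Python (the apply_discount function and the test names are made up for illustration). The first test executes every line of the function, so a coverage tool happily reports 100%, yet it verifies nothing and would keep passing even if the discount logic were completely wrong.

    # discount.py - hypothetical code under test
    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        if percent < 0 or percent > 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    # test_discount.py - a "coverage-only" test: every line above runs,
    # so the report shows 100% coverage, but nothing is checked.
    def test_apply_discount_runs_everything():
        apply_discount(100, 20)       # happy path executed, result ignored
        try:
            apply_discount(100, 150)  # error path executed too
        except ValueError:
            pass                      # exception swallowed, never verified

    # The test that actually means something looks barely different;
    # it's the assert that carries the value, not the coverage number.
    def test_apply_discount_checks_the_result():
        assert apply_discount(100, 20) == 80

Both tests produce exactly the same coverage report; only the second one would catch a bug in the discount math.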

I would say that they were amazed at my logical explanation, and finished the meeting early, promising to remove all coverage metrics from their system.

But of course, that is completely false.

Fresh out of silver bullets

“But management wants to see the coverage numbers going up.”

Of course they do. And why is that?

“Well, that’s a simple proxy for quality, isn’t it?”

I could see the disappointment in their eyes. They expected other answers: simpler, easier-to-implement, expert answers.

In Cynefin language, our friends were asking for a solution to a complicated problem, but they live in a complex system. In the complicated realm, we ask an expert how to solve a problem, and the expert gives us the solution. If not, we find another expert, but the answer has to be out there, since the knowledgeable expert has seen and solved this problem a million times before.

Alas, we’re living in a complex world, where there is no “right” answer. No silver bullet for you.

In the complex domain, we need to find our own solution, the one that fits our context. Applying expert advice as a “best practice” that doesn’t take our context into consideration is asking for trouble and sets us up for disappointment. Trust me, it’s a nice, big disappointment, covered with lots of tears and pain. Just waiting to happen.

But it can get even worse when we ask the wrong question. Our friends asked “how do we get our coverage numbers up?”. I mentioned a few options during my rant, but hopefully they won’t take those suggestions too seriously.

What do we really want?

The hidden request behind “how can we increase our coverage” can be either:

1. We want better quality.
2. We want to be seen as having better quality.

And now we have to decide.

If we seek better quality, we need to work at it. The real question we should ask is “how do we improve”. Since we’re in a complex world, there are no recipes, but there are suggestions to try. Our concept of quality and improvement may differ wildly from others’, and so there may be different paths to get there.

(By the way, there is no one tool that will solve our specific problems, because tools are built to solve “everyone’s” problems. Since there are no silver bullets, we have to work at it hard – define what matters, define the metrics, track the metrics, and decide based on the findings if we’re going forward, backwards or sideways.)

If, however, it’s the second goal we’re after, that’s a completely different story. We can improve our statistics in many creative ways. In fact, one of them is actually going through improvement.

But that would be too much work, right? All we need is to get management off our back so we can get back to “real work”.

There are cheaper methods to make us look like we’re improving. It just so happens that those methods don’t align with the project’s quality goals.

No big deal, is it?
