The Economics of Unit Testing

Unit testing is a set of skills that rarely appears on a resume. When I saw a resume with unit testing on it, it rose to the top of the interview queue: I understood that the person who put it there understands what it means to the business.

If the organization already took the red pill, it sees unit testing like this:

Unit testing shortens the current and future release cycles at the expense of initial extra development work.

It's not about quality, or maintenance, or other things we'll dive into more in a minute - it's about money: shortening the release cycle means earlier income. For the next releases it also means less money spent on maintenance and more money from new features.

The extra development time is not really just in the beginning. There is definitely a bump at the start, because the team needs to learn new skills and tools. Once they learn them, and make them part of the work process, development time even drops. Industry numbers talk about ~20% more work, but this varies widely.

Let's take as an example a project that takes time D to develop. If the project manager is smart, she'll also plan an integration period I and a testing period T.

With unit testing effort, D increases, while I and T shrink. This is because bugs get caught earlier, and are also easier to debug, thanks to the tests.

But this is only for the first release. Let's see what happens in the next one. Without unit testing, development time is now longer, because we need to fix bugs we didn't catch in the first release. Integration and testing are also longer, because we keep breaking functionality that worked before and needs fixing again.

Zooming out to two cycles: with each subsequent release, for basically the same scope, we're delaying the release, making the project late.

With unit testing effort, the second release carries no additional development cost; testing is now just part of the process. In addition, there are fewer bugs left over from the first release, and integration and testing for this release are shorter too, because there are fewer bugs in the new implementation.
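To make the arithmetic concrete, here is a toy calculation. All the numbers are illustrative assumptions of mine (weeks, with the ~20% development bump mentioned above), not measured data:

```java
public class ReleaseMath {
    // All numbers are illustrative assumptions (weeks), not measured data.
    static int withoutTests() {
        return (10 + 4 + 4)    // release 1: D + I + T
             + (12 + 5 + 5);   // release 2: escaped bugs inflate all three
    }

    static int withTests() {
        return (12 + 2 + 2)    // release 1: D up ~20%, I and T shrink
             + (10 + 2 + 2);   // release 2: no extra D, fewer leftover bugs
    }

    public static void main(String[] args) {
        System.out.println(withoutTests()); // 40
        System.out.println(withTests());    // 30
    }
}
```

Even with the up-front bump in release 1, the two-cycle total comes out shorter with unit testing, and the gap keeps widening with every release.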

The initial investment in training the team in unit testing (and TDD, if you're up to it) may seem to delay releases. In fact, it does the complete opposite.

The numbers may change, the knowledge and experience may change. The process I illustrated, however, remains the same.

If you've been thinking about how to sell unit testing to business people, this is how you do it.


Metrics: Good VS Evil

Pop quiz: What do earned value, burn-down charts, and coverage reports have in common?

They are all reporting a metric, sure. But you could have gotten that from the title. Let’s dig deeper.

Let’s start with earned value. Earned value is a project tracking system where you accumulate value points as you progress through the project. That means, if you’re 40% done with a task, you get, let’s say, 40 points.

This is pre-agile thinking: it assumes that we know everything about the task, that nothing will change along the way, and that therefore those 40 points mean something. We know a project can be 90% done after running for three years without a single demo to the customer, and from experience, that feedback will change everything. But if we believe the metric, we’re on track.

Burn-down charts measure how many story points we have left in our sprint, forecasting whether we’re on track and how many story points we’ll complete. But the forecast assumes that all stories are pretty much the same size. It may not tell the true story if, for example, the big story doesn’t get completed. And somehow the conclusion is that we need to improve our estimation, rather than cut the story into smaller pieces.

Test coverage is another misleading metric. You can have 100% covered, non-working code. You can show an increase in coverage on important, risky code alongside a drop in safe code, or the other way around, and get the same numbers, with the same misplaced confidence.
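As a sketch of how 100% coverage can coexist with broken code, consider this hypothetical example (the names are mine): a "test" that executes every line of the code under test but never asserts on the result, so the bug survives while the coverage report looks perfect.

```java
public class CoverageTrap {
    // Deliberately broken: "add" actually subtracts.
    static int add(int a, int b) {
        return a - b;
    }

    // This "test" executes every line of add(), so a coverage tool
    // reports 100%, yet it never checks the result.
    static boolean coveredButUseless() {
        add(2, 3);
        return true; // always "passes"
    }

    public static void main(String[] args) {
        System.out.println(coveredButUseless()); // true
        System.out.println(add(2, 3));           // -1, not 5: the bug survives
    }
}
```

The metric says "fully covered"; the code is still wrong. Coverage measures execution, not verification.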

These metrics, and others like them, have a few things in common.

  • They are easy to measure and track
  • They already worked for someone else
  • They are simple enough to understand and misinterpret
  • Once we start to measure them we forget the “what if” scenarios, and return to their “one true meaning”

In our never-ending search for simplicity, metrics help us compress a whole uncertain world into a number. We like that. Not only do we not need to tire our minds with different scenarios, but numbers always come with the “engineering” halo. And we can trust those engineers, right?

I recently heard that we seem to forget that the I in KPI stands for “indicator”. It points in some direction, but it can also take us off course if we don’t look around and understand the environment.

Metrics are over-simplifications. We should use them as such.

They may have a halo, but they can also be pretty devilish.



Superman VS Batman: The Agile Version

I used to be Superman.

I could do anything I wanted, and no one would tell me I’m wrong. For good reason: I usually wasn’t wrong.

I wasn’t born Superman. I worked hard at it. I learned a lot. I was leading by example. And when I was the smartest guy around, who actually accomplished things, I became Superman.

Let me tell you, it feels great.

But is it good for business?

We all know about the bus factor: the number of people who, if hit by a bus, would halt the project. Super-people tend to hold knowledge that other people don’t have. Maybe it’s ego or competition, or maybe it’s just that others don’t want that knowledge. They feel safe that if something happens, Superman will swoop in and save the day. But sometimes Superman is not around. Then there’s trouble.

It gets worse, though.

Superman can be wrong. And when Superman makes a mistake, it can be a crucial mistake for the organization.

If our Superman is an architect and makes a bad architectural decision for the entire project, it can cost millions. Or if she’s a team lead who decides on a new process for the team, a bad process can slow the team down, and sometimes break it up.

Holy sidekick, Batman!

None of us is mistake-proof, even if we tell ourselves, or others, that we are.

Agile talks about short feedback loops. Feedback is a good start, but it’s not enough.

People get to the Superman throne because of their expertise, and we tend to appreciate expertise in a knowledge-based environment. To discuss something with Superman, we need to get close enough to his level, or at least high enough that he notices us and takes our advice to heart.

In short, we don’t need Superman. We need Batman and Robin.

That’s not easy, from an organizational point of view. Imagine how long it took to get one superhero, and bring him to that level. Now we need two?

This is just like any other system with a single point of failure; it’s pure risk management. If you’re aware of the risk, you can take action. Organizations that manage their risks well not only know who their supermen are, they also put strong, smart people next to them, to encourage discussion and to counter their superpowers.

Superman may not like it, and that’s ok.

After all, you need Batman.


Why Testers Are Losing The ISO 29119 Battle

ISO 29119 is making waves in the testing community over its content and its necessity. Its focus on test planning and documentation has modern testers asking why.

I’ve worked in an ISO-certified company, so maybe I can shed some light from past experience.

  • ISO standards are written by committees, which are made up of people. We assume these people are experts, but mostly they are not; some are academics with no real-life work experience.
  • There’s a big hidden assumption that adhering to the standards ensures quality and meets stakeholder requirements. This is why management gives them the go-ahead, apart from the certificate on the wall.
  • ISO certifications are for organizations, not people. Most of the work of handling acceptance, checks, and certification is not done by the practitioners. For example, in my former company, the person in charge of adhering to ISO standards in R&D was not a developer or tester, and sat outside of R&D. She was very good at what she did, but had no product development experience.
  • There’s a great fear of failing the ISO inspections, so organizations prefer to follow a specific standard, one that spells out exactly what they need to do, when such a standard exists.
  • ISO is really simple: define your process, prove you adhere to it, get certified. In my former life, all development procedures were NOT dictated by ISO but defined internally by the organization; they were derived indirectly from ISO 9000. To get the annual certificate, we needed to show proof that we actually did what the procedures said.
  • As non-practitioners lead the process, there’s a need to create common ground in terms of process and language. How would a general QA manager (a non-tester) know that the development team is doing code reviews? The practitioners need to document it. In the ISO 29119 case, that means documenting test planning and test results. These are the easiest things to put into process form; real testing skills are hard to capture in a process, so documentation is the easy way out. Don’t worry, it’s not just testing: every high-skill discipline suffers the same fate.
  • Of course, test plans and test result documentation do not guarantee any quality, or guide people to test better.
  • Finally, the environment plays its part too: even if you’re not in a regulated industry that is required to meet standards, you’ll find a competitor who gets the certificate, which will push your organization to take it on too. The ISO body makes money from inspections, so it has an interest in getting its standards adopted widely.

Let’s summarize:

An organization wants the certification because it needs it, or believes it will help its quality. It looks for the best, simplest, and least risky way to get it. The ISO organization offers the simplest common ground that works with the “documentation as proof” concept. Everybody’s happy.

Except the practitioners.

The uproar against traditional ISO standards is not new. When we decided that code reviews were needed, we had to document them just so we could meet the standard. I needed to sneak in “documentable” features and tweak the form to pass an inspection, when all I wanted was an abstract framework for doing code reviews, where people pair up and inspect the code for problems, not the location of braces. But that’s life. I had to compromise.

There’s a whole system out there that works fine for most people. A vocal minority won’t change that.

The antidote does not come from regulation and standards. It comes from the responsibility of the organization to learn and improve.

Do you really want to be just a standard organization, or a bit more?

Everyday Unit Testing: New Version!

To know what’s in it, and what’s coming next, read the “Everyday Unit Testing” blog.

If you haven’t gotten your copy yet, go get it, while it’s still free.

Feedback is welcome!


What Is An MVP?

We describe an MVP as a minimum viable product, and sometimes we turn the definition into minimum marketable product. Regardless of how you look at it, it’s a product that gets attention, if not money.

The MVP can be a first stop in the life of a product. It can also be the last one – if there’s no interest in it, why bother developing it further?

An MVP can sell, if it’s interesting enough. Or it may provide feedback that requires us to pivot and create a new, different MVP.

The thing is, we need to define what we’ll consider a success or failure, in order to move forward.

Last week, I promoted the “Everyday Unit Testing” book on Twitter:

I got this response from Peter Kofler:

At the time, the book included just the tools section. It’s just one chapter.

Is it an MVP?

Peter’s opinion is that a book about unit testing, with just a tool section, is not an MVP. He expects more from a book on such a large scope.

For me, one chapter was enough to see if there’s enough interest for me to continue writing the book. In that context, the experiment is a success. In fact, although the book is free, some customers have already paid for it.

So is it, or is it not an MVP?

The viability of the product is really contextual. It depends on the market, the timing of its release, the competition, and many other factors. I believe Peter (having met him) is part of the market for the book, but for him, minimum means more.

What you need to do is define your expectations of your product: what the words Minimum, Viable, and Product mean to you. This is your experiment, and only you can check whether it has succeeded or failed, and decide what to do next. Once it’s out there, you get feedback. It may not be what you expected, maybe because the customers expected something else.

And then it’s your turn again.

In my case, the next chapter of “Everyday Unit Testing” will be out in a few days.


Test Attribute #10 – Isolation

This is the last, final, 10th entry in the ten commandments of test attributes that started here. And you should read all of them.

We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies, and allow us to test the code in isolation.

That’s code isolation. But test isolation is different.

An isolated test can run alone, in a suite, in any order, independently of the other tests, and give consistent results. We've already identified, in the footprint attribute, the different environment dependencies that can affect the result, and of course, the tested code has something to do with it.

Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.

To give an example, I summon the witness for the prosecution: The Singleton.
Here’s some basic code using a singleton:

public class Counter
{
    private static Counter instance;
    private int count = 0;

    public static void Init()
    {
        instance = new Counter();
    }

    public static Counter GetInstance()
    {
        return instance;
    }

    public int GetValue()
    {
        return count++;
    }
}

Pretty simple: The static instance is initialized in a call to Init. We can write these tests:

[TestMethod]
public void CounterInitialized_WorksInIsolation()
{
    Counter.Init();
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(0, result);
}

[TestMethod]
public void CounterNotInitialized_ThrowsInIsolation()
{
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(1, result);
}

Note that the second test passes when run after the first. But if you run it alone, it crashes, because the instance is not initialized. Of course, that’s the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case.

By the way, we’re not just relying on the order of the tests; we’re relying on the way the test runner runs them. It could be the order we've written them in, but not necessarily.

While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run.

Do you know this trick?

public class MyTests : BaseTest
{
    // ...

Why not put all common code in a base class, then derive the test class from it?

Well, apart from making readability suffer and debugging excruciating, we now have all kinds of test setup and behavior located in another, shared place. The test itself may not suffer interference from other tests, but we’re introducing that risk by putting shared code in the base class. Plus, you’ll need to know more about initialization order. And what if the base class uses a singleton? Antics ensue.
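Here is a hypothetical sketch of that trap, in Java for illustration (the class and method names are mine, not from the original snippet): static state in a base class leaks between tests, so their results depend on execution order.

```java
import java.util.ArrayList;
import java.util.List;

public class BaseClassTrap {
    // Hypothetical base test class holding shared, static state
    // that survives across tests.
    static class BaseTest {
        protected static List<String> items = new ArrayList<>();
    }

    static class MyTests extends BaseTest {
        boolean testAddsOne() {
            items.add("a");
            return items.size() == 1; // true only if this runs first
        }

        boolean testAddsAnother() {
            items.add("b");
            return items.size() == 1; // false when run after testAddsOne
        }
    }

    public static void main(String[] args) {
        MyTests t = new MyTests();
        System.out.println(t.testAddsOne());     // true
        System.out.println(t.testAddsAnother()); // false: leaked state
    }
}
```

Run alone, each test would pass; run together, the second fails because the first left state behind in the base class.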

Test isolation issues show themselves very easily, because once the tests are out of order (ha-ha), you’ll get the red light. The hard part is identifying the cause, because it may seem like an “irreproducible problem”.

In order to avoid isolation problems:
  • Check the code. If you can identify patterns of usage like the singleton, be aware of them and put that knowledge to use: either initialize the singleton before the whole run, or reset it before every test.

  • Rearrange. If there are additional dependencies (like our counter increment), start thinking about rearranging the tests, because the way the code is written, you’re starting to test more than just small operations.

  • Don’t inherit. Test base classes create interdependence and hurt isolation.

  • Mocking. Use mocking to control any shared dependency.

  • Clean up. Make sure that tests clean up after themselves. Or, instead, clean up before every test.
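As a sketch of the reset-before-every-test advice, here is a Java parallel of the C# Counter above (the reset method and runTest helper are my additions, not part of the original code):

```java
public class CounterIsolation {
    // Java parallel of the C# Counter singleton; reset() is my addition.
    static class Counter {
        private static Counter instance;
        private int count = 0;

        static void init() { instance = new Counter(); }
        static void reset() { instance = null; }
        static Counter getInstance() { return instance; }
        int getValue() { return count++; }
    }

    // Simulates a runner that resets shared state before every test,
    // so each test sees the same starting point regardless of order.
    static int runTest() {
        Counter.reset();
        Counter.init();
        return Counter.getInstance().getValue();
    }

    public static void main(String[] args) {
        System.out.println(runTest()); // 0
        System.out.println(runTest()); // 0 again: order no longer matters
    }
}
```

With the reset in place, every test starts from a known state, and the order-dependent pass/crash behavior from the earlier example disappears.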
Isolation issues in tests are very annoying because, especially in unit tests, they can easily be avoided. Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.