Stairway to Heaven

Whenever we learn a new skill and go deeper, it feels like we’re climbing a stairway to mastery.

A few years ago, I described how people get into unit testing, and their journey along the path, as walking up a staircase. I’ve seen many people go through this.

Our stairs are very slippery. On every step, we can slip and fall off. That’s why fewer and fewer people make it to the top stairs. Persistence and discipline keep us moving up.

Most people start by hearing about unit testing. They’ve heard about it at a conference, or read about it in a blog, but never tried it. Early adopters don’t stay on that step for long, because they are eager to try it. But most people stop there and wait until an opportunity comes up to try it. This can be a new project, or a new team that has the capability to learn. Then they jump in the pool.

When they get to the trial stage, lots of people fail. It’s not exactly that they fail; rather, they conclude that unit testing is not for them. It’s easy to find excuses for this – technology, time for learning, and the fact that it’s damn hard. So many slip off the stairs at this point. They will probably not start climbing again, and will dissuade others from taking the trip.

Those who succeed and manage to see the value of unit testing are usually crazy about it. Then they bump into another problem: other people don’t understand them. It’s really hard to imagine going back to no tests, and they think: “How do these people live? Why aren’t they using this simple but valuable tool?”

There’s an actual dissonance after you’ve seen the light, and it feels worse because it’s really hard to transfer this feeling to other people. When successful, unit testing becomes a team practice, sometimes even an organizational practice. Try to explain it to your friends in another organization and they will look at you funny.

The climbers (now we’re talking dwindling numbers) who actually get to learn and use TDD get to it after they have some experience in test-after unit testing. Usually you don’t get to start with TDD, and when you do, as happened to me, it’s bound to fail. Assimilating this practice takes experience and discipline.

Plain TDD is hard. If you’re working with legacy code, you have a lot of work ahead of you – sometimes work you’re not willing to put in. That’s ok, but that’s where the cycle of failure begins again, with the same excuses: technology, time, and it’s hard. Even if you succeed, urging others to join is bound to fail.

The few, the happy few, who manage to get to the top are in testing nirvana. Not only do they control how they work, they influence others to work according to what they consider best practice. Not only is it very hard to find someone like that, they aren’t really eager to leave their place.

What happens if someone from this group moves to another organization, where people don’t write tests? Even for a good reason, like training them to start moving up their own staircase?

Culture shock. It’s the same clash we saw before, but multiplied. Before, you had seen the light and others hadn’t. Now you are holding the flashlight, and people still don’t get it.

What does it all mean?

A couple of things.

  • It is a journey. It’s long, hard, and most of all, personal. Unit testing is not just a set of tools; you improve and learn, and it’s hard to convey this progress to others.
  • We tend to work with people at our level, or at least at the same level of openness. We struggle otherwise.
  • Switching between teams causes an actual system shock. There’s a gap people need to bridge, and they usually don’t.

It also means that an organizational decision of “everyone starts writing tests tomorrow” is bound to fail. Select the “ready” group first, those that are willing to try, and help them move forward. Plant the seed of awareness in other teams, so they will be ready. When they are, get mentors from the advanced (but not too advanced) teams. Grow everything slowly together.

It is a long, tough process.

Get someone, like me, to help manage the process and mentor the people.

Image source: http://pixabay.com/en/dike-stone-stairway-stairs-sky-389587/

Everyday Unit Testing – New Chapter Released!

It’s the “Unit Testing Economics” chapter, with an insight into how we actually make decisions in the unit testing adoption process. There’s also a bonus section on organization strategy.

I’m planning to end the “free” period soon, so get it while you can, or else… pay money.

Read more about it on the “Everyday Unit Testing” blog.

How To Waste Estimations

We like numbers because of their symbolic simplicity. In other words, we get them. They give us certainty, and therefore confidence.

Which sounds more trustworthy: “It will take 5 months” or “It will take between 4 to 6 months”?

The first one sounds more confident, and therefore we trust it more. After all, if you can’t give me a simple number, you must not be really sure about what you’re doing, right?

Of course, if you knew where the numbers came from, you might not be that confident.

Simple and Wrong

In our story, the 5-month estimate was not the first one given. The team deliberated and considered, and decided that in order to deliver a working, tested feature, they would need 8 months.

“But our competitors can do it in 4 months,” shouted the manager.

“Well, maybe they can. Or maybe they will do it in half the time, with half the quality,” said the team.

“It doesn’t matter, because they will get the contract!” The manager got angrier.

The team had nothing to say. They left the room for half an hour. Then the team lead came back and said, very softly: “We’ll do it in 5 months”.

Wastimates

The project plan said 5 months. No one remembered the elaborate discussion that went down the toilet.

Estimates are requested for making decisions. In our story, the decision was to overrule them. There isn’t really a way of knowing who was right. Many companies thrived by getting a crappy v1 out, and then fixing things in v2. They might not have gotten to v2 if they had done v1 “properly”. Then again, it might be that they bled talent, because the developers were unhappy, or not willing to sacrifice quality. Who knows.

The point is: the effort put into estimation should be the minimum needed to provide the numbers to management. If the team had reached the “8 months” answer after half a day, rather than after dissecting the project dependencies up front for 2 weeks, they would have had more time to work on the actual project.

By the way, they might not have gotten the go-ahead. Then they would have had 2 more weeks to work on other projects. That’s good too.

Don’t waste time going beyond good enough estimates. Estimate and move on to real work.

PS: Based on a true story.

Image source:  http://artoftu.deviantart.com/art/Wasteland-93333090

The Code Kidnapper

Many managers and developers think about improving productivity. As with many things, productivity is in the eye of the beholder.

We used to measure productivity in lines of code. That led very quickly to believing that the developers produce what they type. Well, what do they do the rest of the time?

In fact, if all they produce is typed code, we can do something better: we’ll take the smart developers and call them architects. They cost a lot of money, but it’s worth it, because they can think, and we don’t need many of those. They can just put their thoughts on paper. Then we’ll take the not-so-smart ones, hand them what the architect has thought of, and they will do the work. And get this: we’ll pay them less, because they just need to type!

I just saved the company millions!

It may sound stupid, but you'll still find people believing that.

A developer’s mind is both the main bottleneck and the main productivity engine.

To increase our productivity, we’re looking to download what we’re thinking into the computer in a fluent way.

We call that flow, and we’d like to maintain it. When we’re in that zone, we can produce software very quickly. In fact, the wonderful productivity tools we have, which automate our repeated operations, help us not get stuck once this engine is running.

Working with tests speeds things up because of the continuous feedback we get on our work. The more tests we have, the better and quicker the feedback, which keeps us on track.

If tests are the gas pedal, what are the brakes?

Enter the debugger.

This is one of the most sophisticated tools we have for finding problems. As technology moved forward, debuggers became more feature-rich and precise, and gave us all we needed to isolate and find problems. It’s almost as if the toolmakers are saying: “It’s alright, you can make mistakes now, we’ll help you solve them later”.

They hold our code hostage, and we’re giving in to these terrorists.

Because that “later” is costly. Debugging an application takes a very big chunk of a developer’s time. Without tests, it is the only way to find where a bug occurred. Sure, you can print out logs and go through them, but where’s the fun in that?

So you wait, and repeat, and after you fix the problem, you debug again. And maybe again, and again, until you’re sure the problem is fixed.

Without small tests, we need to debug the entire system. With small tests, we need to debug a lot less. In fact, just by looking at the test code we might understand the scenario, and relate it to the code. It's like short-circuiting the process of "solving the problem".
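
For example, here’s a minimal sketch (the function, the numbers, and the test are all hypothetical) of that short-circuit: when a small test fails, its name and assertion already tell us which scenario broke and roughly where, before we ever open the debugger.

    def shipping_cost(weight_kg):
        # Flat rate up to 5 kg, then a per-kg surcharge.
        if weight_kg <= 5:
            return 10
        return 10 + (weight_kg - 5) * 2

    def test_shipping_cost_above_flat_rate_limit():
        # If this fails, we know the surcharge branch is broken,
        # without stepping through the entire application.
        assert shipping_cost(8) == 16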

Working with unit tests minimizes the time spent in the debugger. That's a huge saving in time, time we can direct to writing more features, and deliver them to the customer.

Plus, tearing up that ransom note feels pretty good.

Image source: http://en.wikipedia.org/wiki/Villain

The Hidden Cost Of Estimation

“Why would you want a rough estimate, when I can do a more precise one?”

And really, if we can do something better, why do it halfway?

There’s a simple answer, but I’ll give it after the long detailed one.

Let’s start by asking again:

Why estimate at all?

There’s a whole #NoEstimates discussion about whether we need estimates or not. But unless your organization is mature enough to handle the truth, someone will want an estimate, believing they can do something with it: approve the task, delay it, budget for it, plan subsequent operations. That someone needs information to make decisions, and is basing those decisions on the numbers we give.

In reality, unless estimates differ by orders of magnitude, the estimate wouldn’t matter. If we have a deadline in 6 months, and the estimate is 8 months, the project will probably be approved anyway, knowing that we can remove scope from it. If we estimate a project will take a year, there’s going to be a 3-month buffer between it and the next one, because “we know how it works”. Things usually go forward regardless of our estimate.

If, however, we estimate we need 5 times the budget we thought we needed, this may cause the project to be canceled.

In summary, the upfront estimate serves decision-making. In fact, if you just hold the discussion, and leave the number out, you can reach the same decisions.

So why do we need the numbers?

Numbers are good proxies. They are simple, manageable, and we can draw wonderful graphs with them. The fact that they are wrong, or right only in a small number of cases, is really irrelevant, because we like numbers.

Still, someone high up asked for them; shouldn’t we give them the best answer we can?

Indeed we should. But we need to define what the “best answer” is, and how we get to it.

How do we estimate?

How do we get to the “it will take 3 months” answer?

We rely on past experience – hopefully our own, or someone else’s – to compare similar projects from our past to the one we’re estimating. We may even have collected data, so our estimates are not based on our bad memory.

Software changes all the time, so even past numbers should be adjusted. We don’t know how to factor in the things we don’t know how to do, or the “unknown unknowns” that will bite us, so we multiply by a factor until a consensus is reached. We forget stuff, we assume stuff, but in the end we get to a “3 months” answer we can live with. Sometimes.

How about the part we do know about? That one we can estimate more precisely. We can break it down by design, technology, and risk, and estimate it “better”.

We can. But there’s a catch.

Suppose that after we finish the project, we find that 30% of it consisted of “unknown unknowns”. We could have estimated the other 70% very precisely, but the whole estimate would still be volatile.

(I’m being very conservative here; at estimation time, the “unknown unknowns” make up most of a project.)
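
Here’s a back-of-the-envelope sketch of that catch (all the numbers are invented for illustration): even if precise estimation shrinks the error on the known 70% from 20% down to 10%, the total range barely moves, because the unknown 30% dominates it.

    # Toy numbers: 70 "days" of known work, 30 "days" of unknown
    # unknowns that can blow up by a factor of 3.
    known, unknown = 70, 30

    # Rough estimate: the known part is within +/-20%.
    rough = (known * 0.8 + unknown, known * 1.2 + unknown * 3)

    # Precise (expensive) estimate: the known part is within +/-10%.
    precise = (known * 0.9 + unknown, known * 1.1 + unknown * 3)

    print("rough:  ", rough)    # (86.0, 174.0)
    print("precise:", precise)  # (93.0, 167.0) - barely narrower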

The simple answer

So here is what we know:

  • Estimates are mostly wrong
  • People still want them
  • It takes time to estimate
  • Precise estimation costs more
  • Precise and rough estimates have the same statistical meaning, because of the unknowns

That means that we need “good enough” estimates. These are the ones that cost less, and still give a trusted basis for decisions to the people who ask for them.

Fredkin’s Paradox says that the closer the options we need to decide between, the longer it takes us to decide, while the difference in impact between choosing one or the other becomes negligible. Effective estimation recognizes the paradox and tries to fight it: because variations in the estimates have negligible impact, there’s no need to deliberate over them further.

If you get the same quality of answer, you should go for the cheaper option. Precise estimates are costly, and you won’t get a benefit from making them more precise. In fact, as a product manager, I wouldn’t ask for precise estimates, because they cost me money and time not spent on actual delivery.

Working software over comprehensive documentation, remember?


Image source: https://openclipart.org/detail/183214/archery-target-by-algotruneman-183214

The Economics of Unit Testing

Unit testing is a set of skills that rarely appears on a resume. When I saw a resume with unit testing on it, it rose to the top of the interview queue. I understood that the person who put it there understands what it means to the business.

If the organization already took the red pill, it sees unit testing like this:

Unit testing shortens the current and future release cycles at the expense of initial extra development work.

It's not about quality, or maintenance, or other things we'll dive into more in a minute - it's about money: shortening the release cycles means earlier income. For the next releases it also means less money spent on maintenance and more money from new features.

The extra development time is not really just in the beginning. There is definitely a bump at the start, because the team needs to learn new skills and tools. Once they learn them and make them part of the work process, development time even drops. Industry numbers talk about ~20% more work, but this really varies.

Let's take as an example a project which takes time D to develop. If the project manager is smart, she'll plan an integration period I and a testing period T.

With unit testing effort, D increases, while I and T shrink. This is because bugs get caught earlier, and are also easier to debug, thanks to the tests.

But this is only for the first release. Let's see what happens in the next release. Without unit testing, development time is now longer, because we need to fix bugs we didn't fix in the first release. Also, integration and testing are now longer, because we're breaking functionality that worked before and needs fixing again.

So zooming out a bit, picture two such cycles side by side.

That means that with each subsequent release, for basically the same scope, we delay the release further, making the project later and later.

With unit testing effort, in the second release there's no additional effort on the development side; it's now just part of the process. In addition, there are fewer bugs from the earlier release, and integration and testing for this release are also shorter, because there are fewer bugs from this implementation.
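
Here’s a minimal sketch of that math, with made-up numbers (the ~20% extra development is the industry anecdote from above; everything else is an assumption for illustration):

    # A toy model of two release cycles, in weeks. All numbers are
    # illustrative assumptions, not measurements.

    def release_time(dev, integration, testing):
        return dev + integration + testing

    # Release 1: D = 10, I = 4, T = 4 without tests.
    no_tests_r1 = release_time(10, 4, 4)    # 18 weeks
    with_tests_r1 = release_time(12, 2, 2)  # 16 weeks: D +20%, I and T shrink

    # Release 2: without tests, development carries bug fixes from
    # release 1, and integration/testing grow because old functionality
    # breaks again.
    no_tests_r2 = release_time(13, 5, 5)    # 23 weeks, slipping further
    # With tests, the extra effort is now just part of the process,
    # and fewer bugs carry over.
    with_tests_r2 = release_time(11, 2, 2)  # 15 weeks

    print("without tests:", no_tests_r1 + no_tests_r2)      # 41 weeks
    print("with tests:   ", with_tests_r1 + with_tests_r2)  # 31 weeks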

The initial investment in training the team for unit testing (and TDD, if you’re up to it) may seem to delay releases. In fact, it does the complete opposite.

The numbers may change, the knowledge and experience may change. The process I illustrated, however, remains the same.

If you’ve thought about how to sell unit testing to business people, this is how you do it.

Image source: http://marketingbones.com/understanding-economics/

Metrics: Good VS Evil

Pop quiz: What do earned value, burn-down charts, and coverage reports have in common?

They are all reporting a metric, sure. But you could have gotten that from the title. Let’s dig deeper.

Let’s start with earned value. Earned value is a project tracking system where you get value points as you progress through the project. That means if you’re 40% done with a task, you get, let’s say, 40 points.

This is pre-agile thinking, assuming that we know everything about the task, that nothing can change on the way, and that therefore those 40 points mean something. We know we can have a project “90% done” after running for 3 years without a single demo to the customer. From experience, the feedback will change everything. But if we believe the metric, we’re on track.

Burn-down charts measure how many story points we have left in our sprint. They can forecast whether we’re on track and how many story points we’ll complete. But they assume that all stories are pretty much the same size. The forecast may not tell the true story if, for example, the big story doesn’t get completed. And somehow the conclusion is that we need to improve our estimation, rather than cut the story into smaller pieces.
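
To see how that same-size assumption skews the picture, here’s a tiny sketch with hypothetical sprint numbers: the chart reads “about half done” right up until it turns out the biggest story isn’t done at all.

    # Hypothetical sprint: five stories, one much bigger than the rest.
    stories = [3, 3, 3, 5, 13]  # story points, 27 in total
    done = [3, 3, 3, 5]         # everything but the 13-pointer

    completed = sum(done)                 # 14
    remaining = sum(stories) - completed  # 13

    # The burn-down says we're 14/27 points (~52%) through the sprint...
    print(f"burned down: {completed}/{sum(stories)} points")
    # ...but the remaining points are a single story that's either done
    # or not. "90% through coding it" shows up as zero on the chart.
    print(f"remaining: {remaining} points, all in one story")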

Test coverage is another misleading metric. You can have 100% covered non-working code. You can show an increase in coverage of important, risky code alongside a drop in safe code, or the other way around, and get the same numbers, with the same misplaced confidence.
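
Here’s a minimal sketch (hypothetical function and test) of 100% covered non-working code: the test executes every line, so a coverage tool reports 100%, but it asserts nothing, so the bug sails through.

    def total_price(prices, discount_percent):
        # Bug: the discount is added instead of subtracted.
        total = sum(prices)
        return total + total * discount_percent / 100

    def test_total_price():
        # Executes every line of total_price, so coverage is 100%...
        result = total_price([10, 20], 10)
        # ...but there's no assertion, so the wrong answer
        # (33 instead of 27) never fails the test.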

These metrics, and others like them have a few things in common.

  • They are easy to measure and track
  • They already worked for someone else
  • They are simple enough to understand and misinterpret
  • Once we start measuring them, we forget the “what if” scenarios, and return to their “one true meaning”

In our never-ending search for simplicity, metrics help us condense a whole uncertain world into a number. We like that. Not only do we not need to tire our minds with different scenarios, but numbers always come with the “engineering” halo. And we can trust those engineers, right?

I recently heard that we seem to forget that the I in KPI stands for “indicator”. Meaning, it points in some direction, but it can also take us off course if we don’t look around and understand the environment.

Metrics are over-simplifications. We should use them as such.

They may have a halo, but they can also be pretty devilish.


Image source: http://www.stuffmonsterslike.com/2013/09/20/293-monsters-like-playing-the-devils-advocate/
