I Did Scrum and All I Got Was This Lousy Shirt

So this came up a few days ago: “Why Scrum Should Basically Just Die In A Fire”.

My first thought was: “Of course scrum is failing him, he’s not actually doing scrum.”

Second thought was: “He doesn’t understand agile at all, does he?”

Third thought was: “Gil, you’re an idiot.”

Because I have been saying this was going to happen. The last time was in the “why agile is declining” sequel.

No Surprises

We shouldn’t be surprised that people are struggling with scrum or agile, because change is hard.

Bad experiences, coached or self-inflicted, guide our decisions. We connect the results to the experience, and if there’s a name attached, even better. Replace scrum with kanban, SAFe, or XP, and I’m sure you’ll find people with similar experiences who won’t touch agile again.

We know they are “doing it wrong”. That if they only had the right coaching, everything would be better. It doesn’t matter.

Here’s a shocker: scrum never succeeds.

When you hear about a successful scrum implementation from the people who went through it, they no longer talk about bare-bones scrum. They talk about a working process that has similarities to scrum. The process is working for the team, and they are using “scrum” as the name they know. Scrum is a toolbox, and a small one at that. It can’t do the work on its own, and it needs to be adapted to the team, the organization. To the people.

So regardless of what coaches may tell you, scrum can’t work, because it’s only a starting point. And we don’t declare wins and losses at the beginning.

The Conflict

On one hand, we deserve this. If scrum doesn’t deliver, and blows up in our faces, regardless of where the fault lies, it’s because we, the agile community, are responsible for explaining scrum. If we communicated expectations better, and did not make it seem so simple, we wouldn’t be in this mess.

On the other hand, we, or should I say I, know that it works. Not as prescribed, but adapted specifically, gradually to how a team works.

It’s easy selling scrum as a silver bullet, because everybody is doing it. Just not the same.

For some it’s a miserable experience, and they call us on it.

The last thing we should do, is blame them for “not understanding agile”.



Image source: http://agile.logihelgu.com/category/agile/scrum/page/2/

The Measure Of Success

What makes a successful project?

Waterfall project management tells us it’s about meeting scope, time and cost goals.

Do these success metrics also hold true for agile projects?

Let’s see.

  • In an agile project we learn new information all the time. It’s likely that the scope will change over time, because we find out things we assumed the customer wanted were wrong, while features we didn’t even think of are actually needed.
  • We know that we don’t know everything when we estimate scope, time and budget. This is true for both kinds of projects, but in agile projects we admit that, and therefore do not lock those as goals.
  • The waterfall project plan is immune to feedback. In agile projects, we put feedback cycles into the plan so we will be able to introduce changes. We move from a “we know what we need to do” view to a “let’s find out if what we’re thinking is correct” one.
  • In waterfall projects, there’s an assumption of no variability, and that the plan covers any possible risk. In fact, one small shift in the plan can have disastrous (or wonderful) effects on product delivery.
  • Working from a prioritized backlog in an agile project, means the project can end “prematurely”. If we have a happy customer with half the features, why not stop there? If we deliver a smaller scope, under-budget and before the deadline, has the project actually failed?
  • Some projects are so long, that the people who did the original estimation are long gone. The assumptions they relied on are no longer true, technology has changed and the market too. Agile projects don’t plan that long into the future, and therefore cannot be measured according to the classic metrics.
  • Quality is not part of the scope, time and cost trio, and is usually not set as a goal. Quality is not easily measured, and suffers from the pressure of the other three. In agile projects quality is considered a first-class citizen, because we know it supports not only customer satisfaction, but also the ability of the team to deliver at a consistent pace.

All kinds of differences. But they don’t answer a very simple question:

What is success?

In any kind of project, success has an impact. It creates happy customers. It creates a new market. It changes how people think and feel about the company. And it also changes how people inside the company view themselves.

This impact is what makes a successful project. This is what we should be measuring.

The problem with all of those is that they cannot be measured at the delivery date, if at all. Scope, time and cost may be measurable at the delivery date, including against the initial estimation, but they are not really indicative of success. In fact, there’s a destructive force within the scope, time and cost goals: they come at the expense of others, like quality and customer satisfaction. If a deadline is important, quality suffers. We’ve all been there.

The cool thing about an agile project is that we can gain confidence we’re on the right track, if customers were part of the process, and if the people developing the product were aligned with the customers’ feedback. The feedback tells us early on if the project is going to be successful, according to real-life parameters.

And if we’re wrong, that’s good too. We can cut our losses and turn to another opportunity.

So agile is better, right?

Yes, I’m pro-agile.

No, I don’t think agile works every time.

I ask that you define your success goals for your product and market, not based on a methodology, but on what impact it will make.

Only then can you actually measure success.

Stairway to Heaven

Whenever we learn a new skill, and go deeper, it feels like we’re climbing a stairway to mastery.

A few years ago, I described how people get into unit testing, and their journey along the path, as walking up a staircase. I’ve seen many people go through this.

Our stairs are very slippery. On every step, we can slip and fall off the stairs. That’s why fewer and fewer people reach the top stairs. Persistence and discipline keep us moving up.

Most people start by hearing about unit testing. They’ve heard about it at a conference, or read about it in a blog, but never tried it before. Early adopters don’t stay on that step for long, because they are eager to try it. But most people stop there and wait until an opportunity comes up to try it. This can be a new project, or a new team that has the capability to learn. Then they jump in the pool.

When they get to the trial stage, lots of people fail. More precisely, it’s not that they fail; they conclude that unit testing is not for them. It’s easy to find excuses for this – technology, time for learning, and the fact that it’s damn hard. So many slip off the stairs at this point. They will probably not start climbing again, and will dissuade others from taking the trip.

Those who have succeeded and managed to see the value of unit testing are usually crazy about it. They bump into another problem: other people don’t understand them. It’s really hard to imagine going back to no tests, they think: “How do these people live? Why aren’t they using this simple but valuable tool?”

There’s an actual dissonance after you’ve seen the light, and it feels worse because it’s really hard to transfer this feeling to other people. When successful, unit testing becomes a team practice, sometimes even an organizational practice. Try to explain it to your friends in another organization and they will look at you funny.

The climbers (now we’re talking dwindling numbers) who actually get to learn and use TDD get to it after they have some experience in test-after unit testing. Usually you don’t get to start with TDD, and when you do, as happened to me, it’s bound to fail. Assimilating this practice takes experience and discipline.

Plain TDD is hard. If you’re working with legacy code, you have a lot of work ahead of you. Sometimes, work you’re not willing to put in. That’s OK, but that’s where the cycle of failure begins again, with the same excuses: technology, time, and it’s hard. And if you happened to succeed, urging others to join is bound to fail.

The few, the happy few, who manage to get to the top are in testing nirvana. Not only do they control how they work, they influence others to work according to what they consider best practice. Not only is it very hard to find someone like that, they aren’t really happy to leave their place.

What happens if one of this group moves to another organization, where the people don’t write tests? Even for a good reason, like training them to start moving up their own staircase?

Culture shock. It’s the same clash we saw before, but multiplied. Before, you’ve seen the light and others didn’t. Now you are holding the flashlight, and people still don’t get it.

What does it all mean?

A couple of things.

  • It is a journey. It’s long, hard and most of all, personal. Unit testing is not just a set of tools; you improve and learn, and it’s hard to convey this progress to others.
  • We tend to work with people at our level, or at least at the same level of openness. We struggle otherwise.
  • Switching between teams causes an actual system shock. There’s a gap people need to reach across, and they usually don’t.

It also means that an organizational decision of “everyone starts writing tests tomorrow” is bound to fail. Select the “ready” group first, those that are willing to try, and help them move forward. Plant the seed of awareness in other teams, so they will be ready. When they are, get mentors from the advanced (but not too advanced) teams. Grow everything slowly together.

It is a long tough process.

Get someone, like me, to help manage the process and mentor the people.

Image source: http://pixabay.com/en/dike-stone-stairway-stairs-sky-389587/

Everyday Unit Testing – New Chapter Released!

It’s the “Unit Testing Economics” chapter with an insight on how we actually make decisions in the unit testing adoption process. There’s also a bonus section on the Organization Strategy.

I’m planning to end the “free” period soon, so get it while you can, or else… pay money.

Read more about it on the “Everyday Unit Testing” blog.

How To Waste Estimations

We like numbers because of their symbolic simplicity. In other words, we get them. They give us certainty, and therefore confidence.

Which sounds more trustworthy: “It will take 5 months” or “It will take between 4 to 6 months”?

The first one sounds more confident, and therefore we trust it more. After all, why don’t you give me a simple number, if you’re not really sure about what you’re doing?

Of course, if you knew where the numbers came from, you might not be that confident.

Simple and Wrong

In our story, the 5-month estimate was not the first one given. The team deliberated and considered, and decided that in order to deliver a working, tested feature, they would need 8 months.

“But our competitors can do it in 4 months!” shouted the manager.

“Well, maybe they can. Or maybe they will do it in half the time, with half the quality,” said the team.

“It doesn’t matter, because they will get the contract!” the manager got angrier.

The team had nothing to say. They left the room for half an hour. Then the team-lead came back and said very softly: “We’ll do it in 5 months”.


The project plan said 5 months. No one remembered the elaborate discussion that went down the toilet.

Estimates are requested for making decisions. In our story, the decision was to overrule them. There isn’t really a way of knowing who was right. Many companies thrived by having a crappy v1 out, and then fixing things in v2. They might not have gotten to v2 if they had done v1 “properly”. Then again, it might be that they bled talent, because the developers were unhappy, or not willing to sacrifice quality. Who knows.

The point is: the effort put into estimation should be just enough to provide the numbers management needs. If the team had reached the “8 months” answer after half a day, rather than after dissecting the project dependencies up front for 2 weeks, they would have had more time to work on the actual project.

By the way, they might not get the go-ahead. Then they would have 2 more weeks to work on other projects. That’s good too.

Don’t waste time getting to good enough estimates. Estimate and move on to real work.

PS: Based on a true story.

Image source:  http://artoftu.deviantart.com/art/Wasteland-93333090

The Code Kidnapper

Many managers and developers think about improving productivity. As with many things, productivity is in the eye of the beholder.

We used to measure productivity in lines of code. That led very quickly to believing that the developers produce what they type. Well, what do they do the rest of the time?

In fact, if they are producing code, we can do something better: We’ll get the smart developers, and call them architects. They cost a lot of money, but it’s worth it, because they can think, and we don’t need many of those. They can just put their thoughts on paper. Now we’ll take the not-so-smart ones and hand them what the architect has thought of, and they will do the work. And get this, we’ll pay them less, because they just need to type!

I just saved the company millions!

It may sound stupid, but you'll still find people believing that.

The main bottleneck, as well as the productivity engine, of developers is their mind.

To increase our productivity, we’re looking to download what we’re thinking into the computer in a fluent way.

We call that flow. And we’d like to maintain it. When we’re in that zone, we can produce software very quickly. In fact, the wonderful productivity tools we have to automate the repeated operations we do, help us not get stuck once this engine is running.

Working with tests speeds things up because of the continuous feedback we get on our work. The more tests we have, the better and quicker the feedback, which keeps us on track.

If tests are the gas pedal, what are the brakes?

Enter the debugger.

This is one of the most sophisticated tools there are, allowing developers to find problems. As technology moved forward, debuggers became more feature-heavy and precise, and gave us all we needed to isolate and find problems. It's almost like the toolmakers are saying: “It's alright, you can make mistakes now, we'll help you solve them later”.

They hold our code hostage, and we’re giving in to these terrorists.

Because that “later” is costly. Debugging an application takes a very big chunk of a developer's time. Without tests, it is the only way to find where a bug occurred. Sure, you can print out logs, and go through them, but where’s the fun in that?

Instead, you need to wait, and repeat, and after you fix the problem, debug it again. And maybe do that again, and again, until you’re sure the problem was fixed.

Without small tests, we need to debug the entire system. With small tests, we need to debug a lot less. In fact, just by looking at the test code we might understand the scenario, and relate it to the code. It's like short-circuiting the process of "solving the problem".
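As an illustration (this example is mine, not from the post; the function and values are made up), here's how a small test's failure output alone can localize a bug, skipping the debugger entirely:

```python
def apply_discount(price, percent):
    """Return the price after deducting a percentage discount."""
    return price - price * (percent / 100)

def test_apply_discount():
    # Each assertion covers one tiny scenario. When one fails,
    # the assertion and its values point straight at the broken
    # case - no stepping through the whole application.
    assert apply_discount(100, 20) == 80
    assert apply_discount(100, 0) == 100
    assert apply_discount(0, 50) == 0

test_apply_discount()
print("all scenarios pass")  # prints "all scenarios pass"
```

If `apply_discount` had, say, divided by 10 instead of 100, the very first assertion would fail with the exact inputs that expose it, which is the short-circuit described above.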

Working with unit tests minimizes the time spent in the debugger. That's a huge saving in time, time we can direct to writing more features, and deliver them to the customer.

Plus, tearing up that ransom note feels pretty good.

Image source: http://en.wikipedia.org/wiki/Villain

The Hidden Cost Of Estimation

“Why would you want a rough estimate, when I can do a more precise one?”

And really, if we can do something better, why do it half way?

There’s a simple answer, but I’ll give it after the long detailed one.

Let’s start by asking again:

Why estimate at all?

There’s a whole #NoEstimates discussion about whether we need estimations or not. Unless your organization is mature enough to handle the truth, someone will want an estimation, believing they can do something with it: approve the task, delay it, budget for it, plan subsequent operations. That someone needs information to make decisions, and is basing them on the numbers we give.

In reality, unless there are orders of magnitude between expected results, the estimation wouldn’t matter. If we had a deadline of 6 months, and the estimation is 8 months, the project will probably be approved, knowing that we can remove scope from it. If we estimated a project will take a year, there’s going to be a 3-month buffer between it and the next one, because “we know how it works”. Things usually go forward regardless of our estimation.

If, however, we estimate we need 5 times the budget we thought we needed, this may cause the project to be canceled.

In summary, the upfront estimation serves decision-making. In fact, if you just go with the discussion, and leave the numbers out, you can reach the same decisions.

So why do we need the numbers?

Numbers are good proxies. They are simple, manageable, and we can draw wonderful graphs with them. The fact that they are wrong, or right only in a very small number of cases, is really irrelevant, because we like numbers.

Still, someone high up asked for them; shouldn’t we give them the best answer we can?

Indeed we should. But we need to define what the “best answer” is, and how we get it.

How do we estimate?

How do we get to the “it will take 3 months” answer?

We rely on past experience. We apply our own experience, or hopefully someone else’s, to compare similar projects from our past to the one we’re estimating. We may have even collected data, so our estimates are not based on our bad memory.

Software changes all the time, so even past numbers should be modified. We don’t know how to factor in the things we don’t know how to do, or the “unknown unknowns” that will bite us, so we multiply it by a factor until a consensus is reached. We forget stuff, we assume stuff, but in the end we get to the “3 months” answer we can live with. Sometimes.

How about the part we do know about? We can estimate that one more precisely. We can break it down into design, technology, and risk, and estimate it “better”.

We can. But there’s a catch.

Suppose that after we finish the project, we find that 30% of it consisted of the “unknown unknowns” stuff. We could have estimated the other 70% very precisely, but the whole estimation would still be volatile.

(I’m being very conservative here; the “unknown unknowns” at the time of estimation are what makes up most of a project.)
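To put rough numbers on this, here's a back-of-the-envelope sketch. The 30% figure comes from the post; the ±25% rough and ±5% precise error margins are my own made-up assumptions for illustration:

```python
known_fraction = 0.7     # the part we understand and can break down
unknown_fraction = 0.3   # "unknown unknowns", assumed to be wildly off (100% error)

rough_error = 0.25       # assumed +/-25% error from a half-day rough estimate
precise_error = 0.05     # assumed +/-5% error after 2 weeks of careful analysis

# Worst-case deviation of the total estimate, as a fraction of the project:
rough_total = known_fraction * rough_error + unknown_fraction * 1.0
precise_total = known_fraction * precise_error + unknown_fraction * 1.0

print(round(rough_total, 3))    # rough estimate: up to ~48% off overall
print(round(precise_total, 3))  # precise estimate: still ~34% off overall

# The unknown 30% puts a hard floor under the total error either way,
# so the two extra weeks of precision bought very little.
```

Under these assumptions, the expensive estimate is still dominated by the part nobody could foresee, which is the whole point of "good enough" estimates.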

The simple answer

So here is what we know:

  • Estimates are mostly wrong
  • People still want them
  • It takes time to estimate
  • Precise estimation costs more
  • Precise and rough estimation have the same statistical meaning because of unknowns.

That means that we need “good enough” estimates. These are the ones that cost less, and give a good enough, trusted basis for decisions to the people who ask for them.

Fredkin’s Paradox tells us that the closer the options we need to decide between, the longer it takes us to decide, while the difference in impact of choosing between the two becomes negligible. Effective estimation recognizes the paradox, and tries to fight it: because variations in the estimates have negligible impact, there’s no need to deliberate them further.

If you get to the same quality of answer, you should go for the cheaper option. Precise estimates are costly, and you won’t get a benefit from making them more precise. In fact, as a product manager, I wouldn’t ask for precise estimates, because they cost me money and time not being spent on actual delivery.

Working software over comprehensive documentation, remember?


Image source: https://openclipart.org/detail/183214/archery-target-by-algotruneman-183214
