Burning Down The House–Agile Remix

Burn-down charts are awesome. They are part of agile’s set of feedback loops. Take a look at a burn-down chart, and it will tell you whether you’re on track or not.

Here’s a simple example:

Note that I haven’t used a unit on the Y axis. It can be story points or hours, or whatever unicorn unit you estimate by and track. The burn-down itself may not be linear, but if we follow that skillfully painted green trend line to the bottom right, we’ll see whether we’re on track, ahead of schedule or behind schedule. It’s good feedback that helps us decide whether to continue, pull in more work, or stop the iteration and re-plan.

It doesn’t happen right from the beginning, but after a while, when the team knows what it’s doing, the graph should hit the bottom-right estimated point consistently. That means the team has learned its average capacity, and therefore its average velocity. On average, past velocity is a good forecaster.
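To make that feedback concrete, here’s a minimal sketch in Python (the numbers are made up, and story points are just the assumed unit): draw the straight trend line from the committed total down to zero, and compare the actual remaining work against it.

# A sketch, not from any real tool: compare actual remaining work
# against the ideal burn-down trend line. Units are whatever you
# estimate in; story points here, purely as an assumption.

def ideal_remaining(total, days, day):
    """Points left on the straight trend line at the end of `day` out of `days`."""
    return total * (1 - day / days)

def burn_down_status(total, days, actual_remaining):
    """actual_remaining[d-1] = points left at the end of day d."""
    day = len(actual_remaining)
    ideal = ideal_remaining(total, days, day)
    if actual_remaining[-1] > ideal:
        return "behind schedule"
    if actual_remaining[-1] < ideal:
        return "ahead of schedule"
    return "on track"

# Example: 30 points committed, a 10-day sprint, 4 days in.
print(burn_down_status(30, 10, [28, 25, 24, 21]))  # -> behind schedule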

Now, what happens when the burn-down chart consistently looks like this:

We can see that the team is over-committing, because the trend line never reaches its target point. That drop on the right is where the team throws stories off the side of the ship to save the sprint. Pushing the stories to the next sprint may cause that trend line to hit the target point, but everybody knows that’s putting lipstick on a pig. Right?

Let’s look at the bottom part. That’s the one with the WASTE title.

Part of this work was actually done, but the stories were not completed. You can say that work was an investment towards the next sprint, and sometimes you would be right. In many cases, though, work on these non-completed stories was done instead of helping other team members complete other stories. Instead of completing one (maybe even two) out of three stories by swarming, the team didn’t get any completed.

That’s not the only waste, though: every story that was planned for the iteration was prepared and discussed, most of them by several people. Since the stories were not completed, they are destined to be discussed again (and maybe again, depending on the team’s ability to complete them next time). All these re-digested discussions are waste. That time can be better used to work on stories that can be completed.

The Chart Speaketh The Truth

Ok, that’s reality. The chart doesn’t lie.

Can we do something better?

What we can do is plan to our capacity: we can stop planning once we’ve filled the iteration with the “going to be completed” stories. If the chart looks like the one above every iteration, we’re incurring that waste every time. Just stop planning when you’ve reached that capacity. Assuming we’re working according to priority, the top stories will be completed, and there’s probably some additional work that will get done too. Either that, or invest the slack in learning and improving, so the team can increase its capacity in the future.
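Here’s what “plan to capacity” can look like as a minimal Python sketch (the story names and estimates are hypothetical; capacity here is simply the average velocity from past iterations): walk the backlog in priority order and stop at the first story that would overflow.

# A sketch with made-up data: stop planning once the next story in
# priority order would push us past our average velocity.

def plan_to_capacity(backlog, capacity):
    """backlog: list of (story, estimate) in priority order; capacity: average velocity."""
    planned, used = [], 0
    for story, estimate in backlog:
        if used + estimate > capacity:
            break  # stop planning; the rest stays in the backlog (or becomes slack)
        planned.append(story)
        used += estimate
    return planned

backlog = [("login", 5), ("search", 8), ("export", 5), ("reports", 13)]
print(plan_to_capacity(backlog, 20))  # -> ['login', 'search', 'export']

Stopping at the first overflow, rather than skipping ahead to a smaller story, keeps us honest about working strictly by priority; whatever capacity is left over is the slack we can invest in learning.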

A burn-down chart has value beyond projecting progress against estimates. We just need to understand the story it’s trying to tell us.

And what about those estimates? Are they useful at all?

You know, that’s exactly what my talk “To Estimate or #NoEstimate” is about! And I’m giving it at next week’s Agile Practitioners 2015 conference. If you haven’t registered yet, do so now!

The New Agile – Size Matters

Coming up on the 15th year of agile, do we understand business better?

Remember that agile started in development teams? As time passes, we feel that the agile manifesto can also be applied at the product level, and maybe even at the portfolio level. There’s definitely a demand for scaling the process from the business side.

Let’s take a couple of examples of how we’ve moved on. Since we’re talking scale, it makes sense to start small.

Do you know about A3? It may sound like a great 90s boy band, but in fact it’s a paper size. We use A3 as a canvas, like the one here from Lean.Org:

What’s so special about an A3-size canvas? Believe it or not, it’s really like agile iterations.

Iterations are artificial limits in time. There’s nothing real about them, except that once we accept working with them, we suddenly have a deadline every two weeks. Our behavior changes because there’s a constraint.

A3’s size is another type of constraint. We can write very long documents, include a few presentations, and even some cool looking charts to explain what’s so great about the next year.

Or, we can use the constraint to filter all the buzz out, and create a succinct description that fits into the page’s cells. We can write those in, or we can fill the space with sticky notes. What matters is that we can’t overflow. If we do, we need to take something out.

Constraints such as this help us focus and weed out the trash, leaving us with a common basis and (hopefully) understanding.

A3s can be used for anything, and the concept of introducing constraints can be applied anywhere (like, say, backlogs. Or WIP). Here’s another example of an A3 sheet, this time for products, the product canvas by Roman Pichler:

This time the sheet is used for describing product information. And we can go on with ideas on how to use A3 for different purposes.

A3 is not the only old-new idea. Stay tuned.

The CHAOS Report and #NoEstimates

You know what’s the best way to start the year? Going over success rates of projects!

The Standish Group has been publishing the CHAOS report for more than twenty years now. I wasn’t surprised to find that they still use the same criteria for success and failure of projects. Here’s a summary of the CHAOS definition:

Resolution Type 1, or project success: The project is completed on-time and on-budget, with all features and functions as initially specified.
Resolution Type 2, or project challenged: The project is completed and operational but over-budget, over the time estimate, and offers fewer features and functions than originally specified.
Resolution Type 3, or project failed: The project is cancelled at some point during the development cycle or delivered but was not used.

As we can see, the iron triangle still rules. We’re still measuring success by how well we estimated the initial cost and duration, based on the initial scope. As half of the projects in the study fall into the challenged resolution, and about 20 percent more failed, our industry is not doing that much better than before (although the success numbers have been rising slightly and the number of failed projects has been decreasing slightly since 2004).

How is that possible when everyone’s going agile? Wasn’t agile supposed to bring the productivity and effectiveness everyone was looking for? Wasn’t agile supposed to save all the doomed projects?

Well, first of all, no. That wasn’t, and still isn’t agile’s promise. Agility means the ability to react effectively to market needs.

What is success anyway?

If that’s how we describe agility, shouldn’t we consider a project’s success based on the same market needs?

Project success means that customers use the product, and eventually money gets into the business. If that is the case, an agile project (whatever that is) changed in content, duration and cost to fit the market needs in order to be successful. If we dropped the excess features and released the product with what the customer actually needed, does that mean we’re not successful?

Or maybe it took a lot more time until there was an actual product/market fit. We ran over budget and over the time estimate, but eventually our start-up was bought by MegaCorp with a lot of fanfare and bucks. Is this a challenged project?

Finally, we’re still setting the success goals at the beginning of the project, when we know the least about it and haven’t yet identified all the known unknowns (not to mention the unknown unknowns). We put a pin on an empty map, and we get high marks for reaching it, even if it yields only moderate economic success while the big economic success could be miles away.

We start on the wrong foot, and we’re motivated to do so.

Which leads me to estimates, because that’s what we do when we don’t have all that information. We estimate time and cost, and then we make decisions: a go/no-go for the project, or delaying that feature because it will take too much time to develop (or will it? It’s still a guess).

At Agile Practitioners 2015, I’m going to give the talk “To Estimate Or #NoEstimates”. Are estimates the only way to drive decisions? Are there other alternatives? And are we destined to continue being graded on the quality of our guesses?

You don’t want to miss that one.

If you haven’t yet, register.

See you in two weeks!

Testing Economics Article on StickyMinds

My new article “Testing Economics” is now on StickyMinds.  It’s based on my Testing Economics presentation.

Now if you’ve missed it at NDC London, there’s still hope!

I’m going to give the session at the Clean Code Alliance group on Feb 5th.

Go ahead and register here.

(I’ll possibly give this talk in other venues, stay tuned).

Agile Practitioners 2015 Is Just 3 Weeks Away!


And I’m going to be there, with my To Estimate Or #NoEstimate talk. I’ll also co-facilitate the Advanced Agile Programming workshop with Lior Friedman.

Did you know that the last APIL is one of the 50 best Agile conferences in the world? Check out Vasco Duarte’s Agile conference compilation (he’s also #NoEstimates related).

We’re going to be on that list next time too, and at an even higher ranking.

You don’t want to miss it.

Register now!

Regression

When we think of regression, we think of bugs. That’s the first thing that pops into our minds. As with many other things, there’s a deeper meaning, if we just look closer.

We have two kinds of usage for the word regression. We use it to describe a test suite that we run at the end of specific functional testing, to make sure everything is still working. We also use the term, technically, to describe an event where a test (either automated or manual) breaks when it had passed before. The first usage comes from the second; because we broke so much that had worked before, we needed to build a whole suite of tests to make sure nothing slipped by before the next release.

We use the term “regression” to describe the system quality that has regressed. In short, something that worked before, now doesn’t. We have changed course, and instead of making progress, we are now regressing.

If we want to move forward, or in any direction really, we don’t want to go back. Right?

We don’t, when it comes to quality.

We do, when the change is hard.

Process regression

So many times, we forget that regression is not just about bugs. When we’re making progress, with new processes or habits, every time we stumble, or hit a wall, we start thinking: It wasn’t that bad before. Sure, we had a lot of X problems (where X was the problem of the past, like people leaving because of command-and-control), but at least we didn’t need to deal with Y (where Y is the current challenge we’re facing, like losing control to a self-organizing team).

When we do hit that wall, it’s very easy for us to regress. It makes sense, too. To make progress, we needed to get out of our comfort zone. When the going gets tough, it’s easy to go back there, because we feel comfortable there. It’s natural. It’s easier than climbing that wall.

But here’s the thing: Before you start to regress, stop for a minute. Think of why you’ve started this journey in the first place. When you did, you wanted the prize at the end.  That prize (better quality, better results, less waste, any improvement you’re chasing) is still there.

“But it’s a long, hard road!”

Yes, it is.

To make progress, you need to fight regression.

Making progress is no simple matter.

The New Agile–More, Please!

The current buzz in the agile world is scale. Now that we know agile looks golden, we want to apply it to everything.

Agile started as a development team practice. Extreme programming, Scrum, Feature Driven Development, and others all originated in software development teams. Since they were successful, it made sense to apply those successes to other teams as well.

XP, like most early methodologies, didn’t have that concept (and frankly, didn’t try to answer it either). If you wanted more capabilities, you had to have another team. The first that tried to at least start answering it was, of course, Scrum. Like many things Scrum did well, it tried to answer a business need, including the question of how to handle big projects. Scrum suggested the “scrum of scrums” idea, which didn’t happen to stick, but at least it had something.

As (a few) years went by, another question started to come up: not only do we need to manage multiple teams, but some of them are not co-located! The question did not just appear out of the blue. We had dispersed teams before, and they collaborated over email and phone. But now, in the new millennium, the internet helped with that collaboration. There were video conferencing tools, and Skype, and cell phones. We had the technology! Surely, it could help in solving team problems!

Don’t call me Shirley

There were (and still are) mixed answers. Some teams work together very well on different continents. Some teams don’t work well at all, even when placed on different floors of the same building. We find time and again that tools can be great enablers, but individuals and interactions come first.

The current view is that things can be worked out to a certain extent, and besides, moving people around the globe is not possible, even if those agile guys say it’s better for the business. So distributed teams are no longer a problem to be solved, but a situation we should learn to make the best of.

The next scaling step was how to execute big product decisions and make sure execution is in alignment with the vision. Out of all the buzzwords, alignment is the one that matters most. It’s easy (sort of) to maintain alignment in a team of 8 people. How can it be done in a team of 200? How can we maintain execution velocity over multiple teams, and still have them all move in the right direction?

Note that this is not the first time the question is asked in the business world. We just expect an “agile” answer now.

The truth is we are still learning. Whether it will be SAFe (as it is now, or a future version), something else, or nothing at all - we don’t know yet. SAFe has some good suggestions, but it is still too young to actually check its long-term effects on big organizations. We’ll see.

But wait, there’s more!

We always want more of a good thing. Feedback, for instance. More, quicker feedback is better.

Before, we had continuous integration, which gave us quick feedback on quality; now technology brings us continuous delivery. We can deliver to production at the push of a button (or an automatic trigger) and get feedback directly from the customer. If we do it right, that is. With current technology and tools, we can deliver and deploy, as well as revert and fix problems. But having the tools is one thing; we have learned that we still need to be able to use them correctly.

If we’re talking about feedback, we should mention Test Driven Development. TDD gives us feedback that our code is working, and using test-first we specify how its interface should be called. But that’s at the function level. To get more, we got Acceptance Test Driven Development. ATDD gives us feedback that our code is working for the customer, and by doing it test-first, we get that alignment we sought earlier: instead of developing things the customer doesn’t need, ATDD shows us the way, and we develop just that. Where TDD helped us avoid YAGNI code at the function level, ATDD scales that benefit to whole features.
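Here’s a tiny, made-up illustration of that function-level feedback, using Python’s unittest (the names are hypothetical): the test is written first, so it both specifies how the interface should be called and keeps telling us the code still works.

# A made-up test-first example: the test defines the interface before the
# implementation exists, then becomes the feedback loop that keeps it honest.

import unittest

def add_to_cart(cart, item, quantity=1):
    """Written only after the test below defined how it should be called."""
    cart = dict(cart)
    cart[item] = cart.get(item, 0) + quantity
    return cart

class CartTest(unittest.TestCase):
    def test_adding_an_item_sets_its_quantity(self):
        cart = add_to_cart({}, "book")
        self.assertEqual(cart["book"], 1)

if __name__ == "__main__":
    unittest.main()

An ATDD acceptance test works the same way, only one level up: it describes a whole customer-visible behavior before we build it.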

Again, if we do it correctly.

Over the last 15 years, agile did not stand still. It moved forward and sideways, tried a few things and experimented a lot. Mixing the available technologies was part of agile growing up.  In the next chapters, we’ll dive into specific areas and see what kind of progress we made.
