We’d like to know the status of the whole product, in order to make decisions about the future. When we’re coming up to a checkpoint or review in the project, what is the information we need?
If our current status report contains just feature names marked red/green/yellow (or blocked/on track/at risk), that’s a start, but definitely not enough.
What does yellow really mean? Is green really green? Even when other features are red?
If a feature is at risk, how do we mitigate it? Is there even a plan? Status reports do a great job of returning the ball to management to figure out a solution, rather than to the team, who will eventually solve the problems.
Now, if we’ve been working “properly”, we’d know where we stand. When “Done” means tested, ready to be shipped, and has some automation around it, we’d know what we have.
But if we’re doing it wrong, we don’t have the full picture. “Done” may not be “done-done”. There may be open bugs that we consider “part of the story” but haven’t gotten to yet. We may not have automation, so there’s a risk we’re going to break the working bits.
So looking at the status we collect may not tell the whole story (excuse the pun). We need to ask more questions, and that means a little bit of digging. In addition, when we’re looking at the product level, with multiple teams, the story level may be too detailed, which makes it hard to understand how much of the feature we’ve actually completed. We want to see the forest.
To create an informed picture, we asked the teams to report more than just the features. Here is what the teams report on.
For each feature (or epic):
- Which items (main flows or capabilities) are done (ready to be shipped).
- Which items are being tested.
- Which items have not started development.
- “Done” level for the entire feature – A crude estimate (in days/weeks/months) of how close we are, based on test information, bugs, automation, and gut feeling
For each item that is still in development:
- “Done” level
- Closest point for testing – An estimate of how far we are from either completion or a testing point.
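As a sketch only (the class and field names here are illustrative, not part of the original process), the feature- and item-level reports above can be captured in a small data model, which also makes the “which items are in which state” questions trivial to answer:

```python
from dataclasses import dataclass
from enum import Enum

class ItemState(Enum):
    DONE = "done"                    # ready to be shipped
    IN_TESTING = "in testing"
    IN_DEVELOPMENT = "in development"
    NOT_STARTED = "not started"

@dataclass
class Item:
    name: str
    state: ItemState
    done_estimate: str = ""          # crude: e.g. "2 weeks", gut-feeling level
    closest_testing_point: str = ""  # only meaningful while in development

@dataclass
class Feature:
    name: str
    items: list          # the feature's main flows or capabilities
    done_estimate: str = ""  # feature-level "Done" estimate

    def by_state(self, state: ItemState) -> list:
        """All items of this feature currently in the given state."""
        return [item for item in self.items if item.state == state]
```

A report line like “which items are being tested” then becomes `feature.by_state(ItemState.IN_TESTING)`.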
So far we have a view of the status of the product in terms of scope, but not of execution. We want more information, specifically about dependencies and risks. We focus on deliverables the team needs, not those it provides (the team waiting on a deliverable is the one that reports it). We’re looking at future dependencies, not ones that were already fulfilled.
For each dependency we need:
- The deliverable
- Last acceptable delivery date
- Expected delivery date (if known)
- Whether the dependency is Internal (between the teams in the same group) or External (between this group and other groups)
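For illustration only (the field names are assumptions of mine, not the original report format), a dependency record like the one above lends itself to a simple automated check: flag any dependency whose expected delivery date falls after the last acceptable date, or whose expected date is unknown:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Dependency:
    deliverable: str
    last_acceptable: date          # last acceptable delivery date
    expected: Optional[date] = None  # expected delivery date, if known
    internal: bool = True          # internal to the group vs. external

def at_risk(dep: Dependency) -> bool:
    """A dependency is at risk if its expected delivery is later than the
    last acceptable date, or if no expected date is known at all."""
    return dep.expected is None or dep.expected > dep.last_acceptable
```

Treating an unknown expected date as at-risk is a deliberate, conservative choice: “we don’t know when it will arrive” is exactly the kind of thing a review should surface.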
Now to issues and risks we’ve identified during development.
For each issue and/or risk:
- The issue
- Current known impact
- Mitigation plan (if any)
- Date of no return (if not handled by then the feature will not be released)
Finally, we’re looking at the whole product release. What do we expect to deliver?
Since the teams know best what they can do, we want their forecast of what will happen if we continue, but also, look at alternatives to the current plan. We’d like at least two alternatives (even if everything is working according to plan).
For each alternative:
- Which features (and their items) can be delivered
- Which features (and their items) will be left out
Note that we assume release-level quality for any released feature or item. A viable alternative does not include features that fall short of it.
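One hypothetical way to keep the two lists honest (the function and names are my own, not the original process) is to derive “left out” from the full planned scope and the alternative’s “delivered” set, so that together they always cover the whole scope and nothing is silently dropped:

```python
def alternative_report(full_scope, delivered):
    """Given the full planned scope (ordered list of feature names) and the
    features an alternative can deliver, return (delivered, left_out) lists
    that together partition the full scope."""
    delivered_list = [f for f in full_scope if f in delivered]
    left_out = [f for f in full_scope if f not in delivered]
    return delivered_list, left_out
```

For example, with a scope of three features and an alternative that delivers two of them, the third automatically shows up in the “left out” list.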
Now that we see the whole picture, we can make informed decisions. And that’s basically it.
Not only at review time
Now, you may be asking: Why do we need to wait for a review to ask these questions? Can’t we make this information available all the time?
The answer is: You can, and you probably should.
The more information (updated, correct, and at the right level of abstraction) is available ahead of time, the less we need to wait for checkpoints to make decisions. In fact, the checkpoints may come too late.
However, if we’re not at that level of visibility yet, collecting this information at review time is a good place to start.