Metrics

We’re currently starting to work towards the annual management review. For most of my time here at Bio-Rad I was at the other end of it. It goes like this:

  1. QA presents the various metrics and measurements collected over the year.
  2. The project manager looks surprised. (I’ve had practice, so I know.)
  3. The investigation begins. Why was this late? How come we decided to do that?
  4. Two hours later, move on to the next project.
  5. After a whole day, decide how to continue.

Last year we tried it a bit differently: the QA manager discussed the findings with the PMs prior to the meeting. That didn’t help.

In addition, the conversation got sidetracked into less important issues. For instance, based on the long tail of bugs one of the projects was accumulating, last year we gave the PM a directive: close 20% of the old bugs. Which he did. Around 190 old bugs were fixed and closed.

But…

The release went out with 700 new bugs. And with numbers this big (unfortunately, they are big numbers), I can argue that the same amount of time could have closed roughly the same number of new bugs instead. I think the customer would have been happier.
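To make that back-of-the-envelope argument concrete, here is a minimal sketch. The counts are the ones from above; the assumption that an old bug and a new bug take roughly the same average effort to fix is mine, not a measured figure.

```python
# Back-of-the-envelope comparison: where did the fixing effort go?
# Assumption (mine, not measured): fixing an old bug costs roughly the
# same average effort as fixing a new bug.

old_bugs_closed = 190    # old bugs fixed under the "close 20%" directive
new_bugs_shipped = 700   # new bugs that went out with the release

# If the same effort had gone into new bugs instead:
new_bugs_after_redirect = new_bugs_shipped - old_bugs_closed
reduction = old_bugs_closed / new_bugs_shipped

print(f"New bugs shipped: {new_bugs_shipped}")
print(f"If effort were redirected: {new_bugs_after_redirect}")
print(f"Reduction: {reduction:.0%}")   # roughly 27% fewer new bugs at release
```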

As I said in my initial post, there’s a constant struggle with the PMs over changing their ways: there’s never enough time, they’re understaffed, and so on. So no improvement is made, things get worse, and the cycle repeats.

This year I want to try something we haven’t done before. We’ll do the regular show-and-tell of collected metrics, this time with more preparation before the review. But I also want to try to predict performance (delivery delays, quality) for the upcoming releases.
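I haven’t settled on a method yet, so as a starting point something as simple as extrapolating the trend from past releases might do. A minimal sketch, assuming we have a per-release history of schedule slip and escaped bugs (the numbers below are made up for illustration, not our actual data):

```python
"""Naive forecast of the next release's slip and bug count by fitting a
straight line through past releases. The data below is invented; the
real inputs would come from the collected metrics."""
import numpy as np

# One row per past release: (schedule slip in weeks, new bugs at release)
history = [
    (2, 450),
    (3, 520),
    (5, 610),
    (6, 700),
]

releases = np.arange(1, len(history) + 1)
slips = np.array([h[0] for h in history], dtype=float)
bugs = np.array([h[1] for h in history], dtype=float)

# Fit a first-degree polynomial (a straight line) to each series
# and evaluate it at the next release index.
next_release = len(history) + 1
slip_trend = np.polyfit(releases, slips, 1)
bug_trend = np.polyfit(releases, bugs, 1)

predicted_slip = np.polyval(slip_trend, next_release)
predicted_bugs = np.polyval(bug_trend, next_release)

print(f"Predicted slip for release {next_release}: {predicted_slip:.1f} weeks")
print(f"Predicted new bugs for release {next_release}: {predicted_bugs:.0f}")
```

Crude, but if nothing changes in how the projects are run, the line keeps pointing the same way, and that is exactly the picture I want to put in front of the PMs.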

And I hope that by showing where the current practices are leading, they’ll feel compelled to make changes.

We’ll see how it goes at the end of the month.
