My Favorite Podcasts - Manager Tools

I listen to podcasts on my commute, which is currently about 1.5 hours a day, plus another 3 hours during workouts. A small calculation I did a few weeks ago: in the year and a half I've been doing this, I've learned the equivalent of a very stacked semester. That's a lot.

First and foremost is not a tech podcast at all: Manager Tools. My guiding light, no less. See, my management skills were developed on the job. Coming from a technical background makes things worse, since we geeks are infamous for lacking those very important people skills.

So it really depends on your teachers. Mark Horstman and Mike Auzenne make management (look) easy, with their experience, humor and step-by-step approach to different managerial tasks. And unlike other courses, presentations and managerial aids, it all makes sense, AND it doesn't break. Things I heard a year ago don't conflict with things I hear now. This consistency makes me trust them even more. Like Mark says: it all falls into place.

I'm not sure how I came upon this podcast. I do remember the first episode I listened to was on presentations. The first hint that I was on to something was that I came out of it thinking: I actually don't know anything about this. (Of course, I thought I did 45 minutes earlier.) Slipping down the slope was easy from there.

I'm not currently in a managerial position, but a lot of stuff sinks in, and I look at things differently. What really helped me was the discussion on communication styles. Amazing stuff.

I encourage everyone - new or experienced managers, non-managers, everyone who understands it's all about people (Mark's first law, and boy is it right) - to sign up and register (it's free). In addition to the weekly podcast, there's a monthly podcast for registered listeners.

After you start listening, like many listeners you'll ask: are they crazy, giving this stuff away for free? Thanks, M&M.

Hidden Costs

I'm finally done extracting bug information. All projects show a runaway bug train trend. Two of the teams have accumulated ~250 bugs over the last year; the third has 90. These usually carry over into subsequent releases as an extra cost to the customer and to us. And there's the find-fix-retest cost of the bugs we do solve.

We haven't done much data collection on the time spent on all the "additional tasks" that are not planned in advance. If a task takes a few days or more, it goes into the plan. But I'm talking about those smaller tasks that accumulate over time, without us knowing by how much.

Now we do. Since we started working with Project Server, we've instructed the teams to log all hours spent on unplanned tasks, as long as the task takes more than half a day (anything less feels bureaucratic, and the chance of people going along with it is very low). This doesn't count tasks that were planned but ran long, only those that were added.

So the numbers are not that accurate, but comparing the development work we planned against the unplanned tasks, we get a 5:1 ratio: for every 5 days of development we planned, there was an additional day of unplanned work.

Obviously, these tasks were always there in some form. But if you don't measure, you don't know the effect they have. With this information, we can at least plan better next time.

Why Management Support for Change Matters

We had a strategic meeting today, where I was asked to present how we can sell our software process expertise to clients within the company. This means we would consult on how to build a software development process for one of the departments in the global company.

I presented the basic agile process as a "vision", i.e. something to aspire to in the future, while starting to take steps in that direction now. But I did point to the things that hurt us today as the position to start from. In the words of our GM, I used the opportunity to push for my own goals.

It's a double-edged sword: if we promise the "right" process (read: agile) but aren't practicing that process ourselves, we'd be suggesting something we don't do - that's bad.
On the other hand, suggesting what we do today, with which we haven't had spectacular success, isn't a good idea either. What's the choice? Something in the middle: make some changes to our existing process and offer the mid-term process as the example.

The project managers, of course, went into defense mode - the "change" reflex. On any other day that would probably have meant passive-aggressive non-reception (and we're still a long way from actually making changes).

But lucky for me, I had the GM's support. She was recently in a management course and took part in a simulation of co-located teamwork. She told us the simulation showed that teams made up of mixed skills work more effectively than single-skill teams. Shows you that training for managers isn't that bad after all. It also helped that I was ready with some hard data for ammunition.

My point?

It's hard convincing people to change as it is. But without management support, you really need luck.

This Reminds Me of Me

Scott Hanselman, who hosts the excellent technology podcast HanselMinutes, wrote about meeting his idols, and how it sounds to an innocent bystander - in this case, his wife.

I remember having this feeling coming off TechEd a while back, after seeing Don Box on stage (in the great immediate post-COM days). It was easy to share the excitement with those happy few, but I bet it seemed strange to my wife. I bet it still does.

And the joke at the end cracked me up. Also works for those happy(?) few.

Tools of the trade

Oren writes about what's needed to develop a real application today, beyond using just the CLR.

Although my personal development consists of very small projects, I can't live without TestDriven.Net or Reflector. I haven't used IoC yet, but I expect it's something I'm going to use from now on.

There is something to be said about developing in a big company. As opposed to a small group that can do whatever it wants and keep the cost at a minimum, in a bigger team there's a higher marginal cost for every tool out there. If everyone uses his favorite tool, other people also need to know it in order to support it.

There should be some kind of a balance there for the optimal point.

Runaway Bug Train

Continuing with the metric fest, we calculated that over the last year one of the projects accumulated more than 250 open bugs. That is, even after subtracting from the count the new bugs that were fixed and the old bugs that were fixed, the team is far behind in catching up with the runaway train.
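
Just to make the bookkeeping concrete, here's a minimal sketch of the calculation I mean. The BugRecord class and field names are made up for illustration; our real export looks different, but the arithmetic is the same - bugs opened minus bugs closed in the period:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical record standing in for a row of the bug-tracker export.
public class BugRecord
{
    public DateTime Opened;
    public DateTime? Closed;   // null means the bug is still open
}

public static class BugMetrics
{
    // Net backlog growth over a period: bugs opened minus bugs closed in that period.
    // A positive number means the "runaway train" is pulling further ahead.
    public static int BacklogGrowth(IEnumerable<BugRecord> bugs, DateTime from, DateTime to)
    {
        int opened = 0, closed = 0;
        foreach (BugRecord bug in bugs)
        {
            if (bug.Opened >= from && bug.Opened < to)
                opened++;
            if (bug.Closed.HasValue && bug.Closed.Value >= from && bug.Closed.Value < to)
                closed++;
        }
        return opened - closed;
    }
}
```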

What's the solution?

There isn't one solution, but rather a complete mindset change, which I'm trying to push. One of the tools we should use, though, is test automation. Currently all tests are manual, and they are costly. There's also almost no use of automated unit tests.

So we're missing a couple of things. One - we haven't even started building the safety net for tomorrow. Every day new code goes into the pile, which makes the system more complex and harder to test, so more bugs creep in. Automated tests could provide that safety net, if people used them.

The other side of the coin is the cost of manual tests. Since we work with instruments, there is no way we can replace all manual tests with automated ones. But look at it this way: sanity tests (or smoke tests, choose your definition) are supposed to run on every build. What happens when they take two days to run? That's right, they run less often. So another safety-net tool for risk reduction is not used effectively.

Automation can at least provide that safety net and let the test team concentrate on the bigger issues. The cost? A mindset change, and diverting precious developer time to test automation, which of course clashes with the "we've got to ship it by X" mindset.
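
For what it's worth, the safety net I have in mind doesn't have to be fancy. Here's a minimal sketch of an automated sanity check, assuming NUnit; the ReportCalculator class is made up for illustration. The point is that checks like this run in seconds on every build, not in two days:

```csharp
using NUnit.Framework;

// A made-up calculation class standing in for real product code.
public class ReportCalculator
{
    public double Average(double[] values)
    {
        if (values == null || values.Length == 0)
            return 0.0;           // empty input should not crash the report
        double sum = 0.0;
        foreach (double v in values)
            sum += v;
        return sum / values.Length;
    }
}

// Sanity checks meant to run on every build - seconds, not days.
[TestFixture]
public class SanityTests
{
    [Test]
    public void AverageOfTypicalInputIsCorrect()
    {
        Assert.AreEqual(2.0, new ReportCalculator().Average(new double[] { 1.0, 2.0, 3.0 }), 0.0001);
    }

    [Test]
    public void EmptyInputDoesNotCrash()
    {
        Assert.AreEqual(0.0, new ReportCalculator().Average(new double[0]), 0.0001);
    }
}
```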

Change is hard. I'm still confident though.

How do they remember your software?

One of the things I did as part of the metric collection was to look at the number of open bugs in the first release of a version (which should be the Release Candidate, but unfortunately, lately it isn't) versus the final release of that version.

What causes extra releases?
  1. Bugs.
  2. Last minute changes from the customer.
  3. Bugs introduced while you are adding last minute changes from the customer.
  4. Other stuff (mostly relating to the top 3)
So it turns out that in two cases - one with 3 extra "final" releases and one with 5 - the last release looks much worse than the first. The one with 5 extra releases had 50 more open bugs.

Why is that?

Mostly, it's the fact that after you (finally) release, you stop testing, and then there are no more new bugs. But if there are additional releases, testing continues and more bugs are found. Since "we need to ship this release already", only the important ones get fixed (and if bugs were the cause of an additional release, that means the ones that triggered it). The rest go into the open-bug list in the release notes.

So let's say the customer finally got what they wanted. What would they say about version X?
  • I wasn't happy with the first release
  • It took 5 more releases to get where I wanted to be in the first place (and more money too...)
  • And the last one was even more buggy than the first one.
I haven't checked Wikipedia, but I think this comes close to customer dissatisfaction.

The Correct Thing To Do... And Costs of Change

I really like working with smart people. Especially when I hired them. And that means I like being right. Moving on...

Smart people identify bad things and propose changes. They are sometimes stubborn to a fault while doing that, but that's OK. The stubborn ones are willing to fight the system, or at least try to prove the system wrong, since they like being right too.

The story is not about an argument between two "right" people; rather, we agreed about the right thing to do. The subject is how 2-3 developers write code and integrate. The right way is to do it together and resolve conflicts as soon as they come up. In ClearCase this means everyone works on the same stream with dynamic views, so when code is checked in, everyone gets the updated bits immediately. Working with a single solution means the next compilation will make you aware of any conflict.

The cost I refer to is what I'll have to answer when asked: "Why isn't everyone working like that? This is not the way we develop." And to remind you, I am actually in charge of the "way we develop", and I can testify that this isn't written in the methodology.

If I had my way, everyone would be working like this. But in order to do that successfully in the larger teams, a few things need to happen:
  • In larger teams there is no single solution, so the feedback cycle should be closed by an automated build and continuous integration. This is missing in the larger projects (which need it more than the smaller ones...).
  • In our ClearCase methodology, everyone has his own environment so as not to interfere with others (yes, it's the complete opposite of common sense). When people work separately like that, they only check in their code right before integration (which could be days to weeks away). So this has to change too.
  • Finally, there is the less-than-affectionate relationship with ClearCase, due to the painful implementation over the last few years. Although the developers work with it, some are still allergic and need a "motivation injection" to work with the tool even more.
So like all evolutions, we'll start with a single team, go step by step, take the hits, try to convince, and some other clichés I can't think of right now. But I'm working with smart people, so there's a chance of success.

The Remote Safety Net Works to a Limit

Since last time, I've continued updating the Excel template that supports the metric collection. What I thought was a bug was not really a bug: I added a few test cases, but they kept passing without any code change. Imagine the disappointment...

I still had a feeling something was wrong - that I was not covering everything. I changed the system-under-test code to make more sense, and the tests still passed.

Then I went back to Excel to "paste special" the formulas. This means writing the functions in Excel formula code, which, from today's experience, is very error-prone. I think I'm finished, but I'll need some kind of code review - me and my boss going over each formula to make sure it's correct. It's so easy to make a mistake.

And since I've grown addicted to the immediate feedback loop (a.k.a. automated tests), this feels like coding in Notepad - you hope it will compile, but you can't really prove it.

TDD and the Remote Safety Net

We're working on collecting metrics for the upcoming Management Review, and I'm currently collecting bug information for the different project releases.

The data is exported from the database to Excel, where I do most of the math. At some point I was sure I was making mistakes, because the formulas became complicated, and if you know Excel formulas, you know they are not that readable.

So I figured this would be a great time to try the Excel template for VS2005. It's actually nice, but when errors occur, the information is not that clear. Anyway, after experimenting with some Excel integration, I coded the different calculations TDD-style on a helper object.

After completing most of the calculations, we looked at the time - the remaining coding was Excel integration, which can be a bit painful. I decided to give Excel formulas a second chance, only this time breaking them down into manageable pieces (i.e. columns).

Which works, but with 7,000 rows it's not testable - I can only check a few rows, and if they look fine, that's it. But I do have a remote safety net: I translated each C# function into a formula. After reviewing that the simple formulas are a complete translation of the C# code, I'm more relaxed - I have tests on the C# side to prove it.

And then I found a bug - a case I didn't test for in C#. I found it through Excel, as one of the rows gave a strange result. That was lucky, because like I said, I can only check a few results. So I've added a test case in C#, and once the code is ready, I'll translate it to Excel.
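
To give a feel for what "translating a C# function to a formula" means in practice, here's a minimal sketch. The function, its tests, and the column layout are made-up stand-ins for the real calculations, but the shape is the same - a small, tested C# function on one side, and its hand-translated Excel formula on the other:

```csharp
using System;
using NUnit.Framework;

// A made-up helper standing in for the real metric calculations.
public static class BugMath
{
    // Days a bug has been open; still-open bugs are counted up to the report date.
    public static int DaysOpen(DateTime opened, DateTime? closed, DateTime reportDate)
    {
        DateTime end = closed.HasValue ? closed.Value : reportDate;
        return (end - opened).Days;
    }
}

[TestFixture]
public class BugMathTests
{
    [Test]
    public void OpenBugIsCountedUpToTheReportDate()
    {
        Assert.AreEqual(10, BugMath.DaysOpen(
            new DateTime(2007, 1, 1), null, new DateTime(2007, 1, 11)));
    }

    [Test]
    public void ClosedBugIsCountedUpToItsCloseDate()
    {
        Assert.AreEqual(5, BugMath.DaysOpen(
            new DateTime(2007, 1, 1), new DateTime(2007, 1, 6), new DateTime(2007, 1, 11)));
    }
}
```

On the Excel side, each such function becomes one column formula, something like =IF(ISBLANK(C2), $F$1-B2, C2-B2) - assuming, purely for illustration, the opened date in column B, the close date in column C, and the report date in cell $F$1. Comparing that against the tested C# version is what the "remote safety net" review amounts to.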

So conclusions from today:
  • Tests should be correctly named and readable - it helped me a lot. Even two days later, I had already forgotten what I was thinking.
  • Factor the code into small functions - it's easy to translate...
  • Safety nets come in all sorts of shapes.
  • TDD doesn't catch bugs if you didn't think of a test for them.
Eventually I want to make this as automatic as possible, so I think the VS Excel template is the way to go.

Finally moving to Firefox

It actually happened today and I'm sold.

After installing the monthly Microsoft patches, IE decided to close whenever I opened it. A few hours later I isolated which patch caused it, removed it, and IE came back to life.

But it was too late.

In the meantime, I switched to Firefox and installed a few extensions (especially the cool CoolIris) and GreaseMonkey. I also went to LifeHacker for some info on GreaseMonkey scripts and installed a few.

I'm staying here.

Knowledge Storage

So where do you put all the knowledge you collect?

The easiest place is documents. The problem with those is that the more knowledge you document, the less likely people are to read it.

Today, (another) argument I was involved in was about the use of dynamic vs. snapshot views in ClearCase. Our methodology (due to change if I get the chance) prescribes working with dynamic views. That document was written 3 years ago; since then we've learned that .Net doesn't like it this way, so over the last year the .Net developers have started using snapshot views.

Where is this information stored? Currently in my head, and I pass it on to developers who start using ClearCase with .Net. The thinking is that the more people know about it, the more the knowledge gets "stored" as group knowledge.

The argument was, "Then why isn't that documented?" Contrary to dark-side Gil, I contended that it would be a waste of time to document something no one actually reads. And then the fighters left the ring, each to his own corner.

If knowledge is never accessed, how do you know if it is really there?

Joining the dark side

No, it's not about Microsoft.

Today I found myself making arguments that are "organizational" and "administrative", although with some technical merit. The discussion was with one of the developers about piloting version 7 of ClearCase, rather than using the 6.15 we have installed. (Compassionate readers may leave condolences in the comments section.)

I'm not the most tech-savvy regarding IBM tools, but over the last year I've learned some things, and basically, it's brittle. So although upgrading should be a breeze, the company wouldn't allow the pilot, for fear of putting at risk the other developers who work on the same server.

In the past, I was that developer: laugh in the face of danger, let me do the pilot, it won't hurt anyone. But now I'm the naysayer, guarding the organization rather than allowing innovation and risk-taking.

So have I grown up, or just taken the blue pill?

Improve your development process starting tomorrow

Roy wrote about tips for a successful team lead. These are things you can start implementing tomorrow morning.

I think the demo is a great way to create a constraint for the team. James Shore talked about that in the pre-publication draft of his agile book.

I would also add time-boxing with short iterations. When the time to release is 6 months, for the first 2 months everyone is still recovering from the last release's overtime, and the last 2 months are full of integration pains and more overtime. Rinse, repeat.
Short iterations force you to focus and not put things off (known as the Student Syndrome).

One thing I wouldn't do as the FIRST thing (today) is set an overtime limit. Although it will work, it needs the right political climate to gain approval from higher ranks. You can do this after you have gained credibility and shown results through other methods. After that, you'll be able to do whatever you want.

Metrics

We're currently starting to work towards the annual management review. For most of my time here at Bio-Rad, I was on the other end. It goes like this:

  1. QA shows different metrics and measurements collected recently, relating to the whole year.
  2. Project manager looks surprised. (I've practiced, so I know).
  3. Investigation begins. Why was this late? How come we decided to do that?
  4. 2 hours later, go to the next project.
  5. After a whole day, decide on follow-up actions.

Last year we tried it a bit differently - the QA manager discussed the findings with the PMs prior to the meeting. That didn't help.

In addition, the conversation got sidetracked into less important issues. For instance, based on the big tail of bugs one of the projects was accumulating, last year we gave a directive to the PM: close 20% of the old bugs. Which he did - around 190 old bugs were fixed and closed.

But...

The release went out with 700 new bugs. And given the size of the numbers (unfortunately, they are big numbers), I can argue that we could have spent the same amount of time closing the same number of new bugs instead. I think the customer would have been happier.

As I said in my initial post, there's a constant struggle with the PMs over changing their ways, since there's never enough time, they're understaffed, etc. So no improvement is made, things get worse, and so on.

This year I want to try something that hasn't been done before. We'll do the regular show-and-tell about collected metrics, this time preparing more before the review. But I also want to try to predict the performance (delivery delays, quality) of the upcoming releases.

And I hope that by showing where the current practices are leading, they'll have to make changes.

We'll see how it goes at the end of the month.

DLR and Simplicity

The managed runtime for dynamic languages looks cool. I've been reading up on Ruby lately, and its simplicity can take you a long way. However, it could be a curse too. I was at TechEd Europe a few years back and went to a talk on Lotus Notes and Exchange integration. Yes, it was very exciting for the 5 people there...

Anyway, the lecturer said something I haven't forgotten to this day (and in the last few months I've done some Notes programming, so I can attest to his words). He said that the biggest advantage of Notes is also its enemy: it is very easy to build stuff that works immediately. But because it's so easy, no one maintains the code, you wind up with multiple similar implementations, and it goes downhill from there.

For me, simplicity is key to maintainability. I've architected and built very ingenious stuff, but it was so generic and open that it then had to be modified and debugged to fit specific implementations, and the results were not pretty.

I shudder when I think about multiple scripting languages making up an application that's easy to build. Sure - you can do it, but should you?

So assuming the DLR and the tools make it easy to build great working applications, does that mean they will be also maintainable? We'll see.

Silverlight and Content War

The news from MIX keeps coming, and it's looking very interesting. Scoble says "Microsoft rebooted the Web".

On the one hand, until a few weeks ago I was sure Silverlight was just another attempt by Microsoft to take over someone else's turf, with not-too-promising odds. Going after Adobe seemed futile.

But now Silverlight is starting to look like a platform. For me, a platform starts with a set of good tools that work together for building on it. With the Expression tools, VS for building and debugging, the DLR, and now the ability to run almost anywhere (read: Windows 2000 and up and Mac, though Linux is still missing), the odds are looking good. One question I have is how affordable the tools will be for amateurs, since that would make penetration easier.

So the tools are there, the demos look great, and the marketing machine is starting to roll. The small runtime raises the odds of adoption. My bet is that Microsoft is going to win this one - not at first, but slowly and gradually, up to a tipping point. The open-source crowd (and the anti-MS zealots) will go with Flex, but big business will go with whatever is easy to roll out.

Focus on the Important Stuff

Everyone agrees that code review is good for you. We are tracking both how many code reviews were done, and how many bugs we've uncovered during the sessions.

We've been at it since December 2006, and in April I did a summary to see where we are. The more reviews we do, the more bugs we discover - no surprise there. The summary makes a compelling argument for doing more code reviews. In March, for instance, 17 sessions found 13 bugs. That's a good ratio, I think. So it would be a great way to convey the message: do more, it helps.

And then something comes up. It seems that a refactoring we did in one of the code reviews introduced a bug. (This would be a good place to point out that this is VB6 code - so refactoring is done by hand, and there are no automated tests to run.)

So the discussion now is what to do in order to prevent this. In this case, the programmer sat with 2 people (me included) and made the changes with us. Since neither of the reviewers knows the code (spreading that knowledge is one of the goals of code reviews), we couldn't spot the problem. Actually, code reviews are the way to prevent this: with more of them, teammates who know the code could catch it. But I digress.

I'm willing to accept one introduced bug against 30 found. I think it's worth the risk. And it's not as if changing code during a code review is any different from changing code at other times - the risk is always there. Still, the discussion continues.

I find it irritating when people don't focus on the important stuff and instead enjoy a lively debate about side issues.

Hello world!

This would be the first entry, so this is where I have to define where I'm going with this blog. Not that I really know yet.

I'm currently in an interesting position. After 13 years of software development, architecture, and managing software projects, I've wound up in a place where I should be able to influence the process (read: QA). However, there's a constant battle with the software project managers, whom I pester with my little set of annoyances - do that, document this, did you actually test it before you shipped?

Therefore, I will concentrate on software development practices. I will also touch on some workplace politics, or "the dark side".

Most of my experience is with Microsoft environments and tools, so my posts will be in that context. I'm trying to stay open to the other small planets around the sun (Google, for instance - a newly discovered shiny one).

I also see this blog as an opportunity to enhance my writing skills, so I would like to see some improvement over time.

Oh yes, and if I wind up on the top 100 blogger list, that's not bad either.

So here's to a long and hopefully interesting ride!