Jeremy Miller on Designing for Testability

All I have to say is Amen to this.

TDD may be new-ish, but software engineering practices aren't. We just need the discipline.

And do yourself a favor. Read Jeremy's blog regularly. And memorize everything.





Is software engineering really "engineering"?

This is cool.

In one corner it's Steve McConnell, author of Code Complete and a regular god in the software field. In the other corner it's Eric Wise, one of the good bloggers on CodeBetter.

Steve says absolutely Yes. Eric said No. Then little ol' me linked Eric's post in Steve's comments and presto - a conversation erupts. The topic is a bit old, but it's the different opinions that make it interesting.

I love the internet.

Scott Bellware on Dependency Patterns

Scott is very smart, and can sometimes be very expressive (read: extreme) in his views. His blog is on CodeBetter, where other great bloggers abound.

He has a post on dependency patterns. A primer. Very readable, and accessible to a newbie. It's a good read.

Read it here.
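
To make "dependency patterns" concrete, here's a minimal sketch of the constructor-injection flavor (a toy example of my own, not taken from Scott's post): the report doesn't create its data source, it receives it, so a fake can be swapped in for tests.

    class SalesReport:
        def __init__(self, repository):
            # The dependency is injected, not created here
            self._repository = repository

        def total(self):
            return sum(self._repository.fetch_amounts())

    class FakeRepository:
        """A stand-in used for testing; production code would pass a real data source."""
        def fetch_amounts(self):
            return [10.0, 20.0, 12.5]

    report = SalesReport(FakeRepository())
    print(report.total())  # 42.5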

Relying on Others

Check out my collection of ClearCase posts and feel my pain.

Continuing our latest adventures with ClearCase (I haven't worked on anything else since Wednesday), today we reviewed the situation. We had to roll back the changes made on Thursday, due to side effects that appeared. (Basically square one). Currently there's no real solution in sight, but this is the least of our problems.

The problem is the confidence we're losing in the contractor with each such side effect. Today we even mentioned the unmentionable: drop ClearCase and look somewhere else. (Relax, I don't see us dropping it anytime soon). I'm not sure it's only the tool, but the care it needs is at a level I never knew a tool could demand.

So we huffed and puffed, and the contractors apologized and sent someone (at the end of the day), but they are currently stumped. They sent the info we collected to Rational Holland to continue the investigation, and will try to fix what they can. But...

I'm not sure that when they tell us they're done, I'll believe it.

This task is not supposed to be in-house knowledge. As a client, we hold them responsible for seeing these things through completely. We rely on them to take the right steps and test the solution completely. And "completely" would be by their standard, since we don't know what "completely" means.

Plus, we also pay them a lot. I thought I'd mention that too.

And to add to that, we talked to IBM today, in case they have an idea. The guy from IBM, while not going into details, suggested a few solutions. One of them contradicted the currently implemented solution in some way. Do they know how to make their clients happy or what?

So we're having a session tomorrow with both teams to wrestle it out while we watch and cry. Not sure about the outcome. I know there's a way out of this. I'm sure it's not instantaneous, I know it's not cheap, and I cannot see how we keep our reputation for ClearCase prowess high after this.

And I'm in the middle.

One day into the new role

So a minute after I'm tossed into the ClearCase mix, here's what happens: We are opening a channel to our partners in the company's headquarters to work on the server. ClearCase being what it is, it looked like a small task but grew into a large one.

Most of the changes were done by TesCom, our implementation company. During this implementation, they uncovered a problem that had to be solved in Holland, by the European Rational support, since it involved a problem on the remote side. Although that's not important for the story, it represents another bump on the ClearCase highway.

So last Sunday, our IT manager thought this would be completed by Tuesday, when the implementor would be on site. He then communicated this to our GM, who in turn communicated it to her boss in headquarters. Note that this would end a few months' discussion, so everybody would breathe easier when it was completed. We also had some reputation to uphold, having had ClearCase on site for 3 years.

However, the implementor was not on site Tuesday. So the IT manager went to work, with his limited knowledge, and tested all he could on a test server. By Tuesday evening testing was completed, and on Wednesday we'd have lift-off.

I'm thrown into this on Wednesday, by our GM. So I'm starting to get to know the issue. Our IT manager (who is very smart, by the way) did a couple of tests on the production environment to see if it actually works. But then he discovers something he missed on Tuesday evening. Lucky for us. But this is Wednesday evening, and now we know we can't launch today.

[Note: The IT manager actually discovered the problem, which was the wrong order of scripts. Chalk up 1 for us, 0 for the implementation firm]

Thursday morning, the implementor is in, and she confirms the discovery. It takes until the evening, since we have a few "unforeseeable" ClearCase problems, but the two of them complete the mission.

[Note 2: As you can see from the story, I wasn't really a player here, just a supervisor]

So what did we learn?

  1. We haven't done much testing by ourselves, and in hindsight, we were not actually able to complete it all by ourselves, to the level we expected. Actually, we couldn't really define the test suite we needed to run there. On Thursday we relied on the implementor's skills and experience.
  2. We failed with our implementors once again. I know anyone can replace a couple of lines, but it's another small failure in the chain.
  3. This would be listed as another capability we don't have on site. We are limited by our in-house knowledge, and we'll have to rely on contractors to do similar things.
  4. I really missed the action of producing something, even though I wasn't part of it.

And in the meantime, some more responsibilities

I'm not that crazy about this, but I'll play along for a while.

Our flop of a ClearCase implementation has left us with an addiction to the implementation company. In the past I've reduced the dependency by transferring knowledge to IT. This worked, and now we want to get rid of the on-site support completely.

This requires more transfer of knowledge. And I would be some kind of coordinator. Technical coordinator. Part of this is to actually define which types of knowledge should reside in which part of the organization. This would also require me to add to my technical skills in ClearCase.

Why don't I like it?

I think the dev team lacks the required knowledge for a tool they use every day. Up until now, with the implementation company representative on site, it was easy to have their problems solved for them. We're trying to alleviate this, but we're sending the message that it's now QA's responsibility. So although I would try to get each team on track towards applicable knowledge, it's still the wrong message.

This is a dev tool, and it is the dev's responsibility.

Management Review - Take 2

So now we're past the second (and final) part of the management review.

The second meeting was different from the first one. In the first, the PMs were active (in a defensive way, mind you). This time they were quieter.

I know the feeling from past meetings like this: I want out, the sooner the better. I'm sure at least one of the PMs must have thought: Gee, I can raise an issue or two to discuss, but then it's gonna cost me. Since solving problems requires effort, usually by the PM or team members, it will probably come at the expense of "real work".

So most of the discussion was between the QA team, the general manager, and the R&D manager. And here's another thing: The PMs are the direct reports of the R&D manager. So they would probably like to be aligned with their boss's views.

We discussed this situation with the GM today, and she suggested we should do separate meetings with the PMs, no bosses. They would probably be more open to discussion. It's a possibility. I suggested doing mini-reviews quarterly, which could bring more "engagement" as things would be more relevant, and may raise flags before things happen, rather than retroactively.

There is some drift between QA and R&D. I think some of it is the legacy perception I used to have when I was a PM: QA distracts me from my real job. I still think that in order to move things, we need the collaboration of the PMs to make changes; if this message does not come from above, it will not be effective.

Open Non-Reproducible Bugs - Should we set a goal?

This number surprised me. In all 3 projects (each with a different size, complexity, and team), 75% of the non-reproducible bugs remain open.

What does this mean?

I know we're not doing enough to reproduce bugs. I take that as a given. I also think that since it's easier to fix reproducible bugs, the focus is more on them.

But what we're seeing is that only 1 of 4 non-reproducible bugs actually gets fixed. So the accumulating number of open bugs includes the other 3.

Unless they actually get reproduced at a customer's site (apparently, the ones that do are not high-severity, and not that traumatic), they get "stale" and are not discussed regularly.

So in reality, we do treat non-reproducible bugs differently. Should we set a target to lower them?
Let's say we set a goal: no more than 25% open non-reproducible bugs. What can happen?

  • Testers will not report them (gaming the system).
  • Testing effort will concentrate on reproduction, instead of on other facets of the application. I expect that at this point the PMs will not like it, and we'll eventually revert to the current status.

My favorite option goes back to the developers. If they produce less buggy code, testers will have to concentrate more on those bugs, since there will not be much other stuff to work on. So I don't think we'll settle for a non-reproducible goal now. We need to concentrate more on developer goals.
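
For illustration, here's a minimal sketch of how the open rate among non-reproducible bugs could be computed from a tracker export. The records and field names are hypothetical, not our actual tracker schema.

    bugs = [
        {"id": 101, "reproducible": False, "status": "open"},
        {"id": 102, "reproducible": False, "status": "closed"},
        {"id": 103, "reproducible": True,  "status": "open"},
        {"id": 104, "reproducible": False, "status": "open"},
        {"id": 105, "reproducible": False, "status": "open"},
    ]

    # Of the non-reproducible bugs, how many are still open?
    non_repro = [b for b in bugs if not b["reproducible"]]
    open_non_repro = [b for b in non_repro if b["status"] == "open"]

    print(f"Open non-reproducible rate: {len(open_non_repro) / len(non_repro):.0%}")  # 75% with this sample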

Best Practices

I almost cried when I read this. It's via Scott Hanselman's blog.

Patrick Cauldwell wrote down a list of commandments for the developer.

While it is a bit .NET-centric, it can be modified to suit everyone. As long as they're into TDD, continuous integration, refactoring, and patterns.

Manual Testing Does Not Scale

During the management review, we discussed the fact that although there's extensive testing, critical bugs slip through. One of the PMs, who is a big proponent of exploratory testing, said we're concentrating too much on writing test procedures, and not enough on actual testing.

I did actually produce the numbers, thanks to the newly implemented MS Project server. Although there's no clear pattern yet, since it's very new, the last few months show there's some basis to his argument.

So what happens here? Are we writing too much? Currently, I think we've bumped into a scaling issue. The systems are complex, and in order to cover many (but not even most) test cases, we have to invest a lot of time in writing the procedures. This doesn't scale well, as the complexity grows exponentially. Even with enough time, there's no way to cover all possibilities, and this is where bugs slip by.

Is it possible to do some planning ahead to catch those bugs? The scaling issue rears its ugly head again: Even with enough time, how much of what can go wrong is it possible to foresee? Not a lot, so the procedures do not find the bugs.

We are currently doing only manual testing, which takes a lot of time on one hand, and is not repeated, due to lack of time, on the other. I believe test automation can improve quality by freeing up time for manual exploratory testing. While it does take an effort to put the automation in place, it's worth it in terms of quality.
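
As a minimal sketch of the kind of automated check I mean (the function and expected values here are hypothetical, not from our product), a small regression test that runs on every build without any manual effort:

    import unittest

    def calculate_discount(order_total, customer_tier):
        """Hypothetical stand-in for a real product function."""
        rates = {"gold": 0.10, "silver": 0.05}
        return order_total * rates.get(customer_tier, 0.0)

    class DiscountRegressionTests(unittest.TestCase):
        def test_gold_customer_gets_ten_percent(self):
            self.assertAlmostEqual(calculate_discount(200.0, "gold"), 20.0)

        def test_unknown_tier_gets_no_discount(self):
            self.assertAlmostEqual(calculate_discount(200.0, "bronze"), 0.0)

    if __name__ == "__main__":
        unittest.main()

Once a suite like this is wired into the build, reruns cost nothing, and tester time goes to exploratory sessions instead of re-walking written procedures.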

The Management Review Fallout

No, it's not over yet, and we are in preparations for the follow-up meeting. We've compiled a few more charts, fixed a few mistakes, and will be ready to show those, as well as discuss the topics we didn't cover last time.

We can dig through these numbers a lot, and pull out a lot of findings – some of which will probably be interesting. But I feel we're doing this just to prove to the PMs that we (QA) are right, namely, that the picture is as grim as we painted it. My boss shares this feeling too.

For instance: The charts we produced did not show how many non-reproducible bugs the projects had. In actuality, here's what happens most of the time: The tester sees abnormal behavior. She repeats the scenario, and it doesn't recur. Due to time constraints the bug is reported as non-reproducible.

The PMs thought that since the numbers presented at release time do not exclude the non-reproducible bugs, they make the total quality of the release look worse than it would if inspected through "reproducible bugs" glasses.

Obviously, this is wrong (and there is an abstract agreement not to dismiss them). Plus, you know that the bugs you can't reproduce here will be reproduced by your customer.

However, the numbers showed there aren't a lot of these non-reproducible bugs. So back to square one: We proved that the numbers still show the correct picture, but we invested time in proving this. I'm not sure what would happen if the results were different, but we're not there.

We feel we're thrashing, instead of working on some real improvement. We churn more data, but no action is taken.

We'll see how this turns out in the next meeting.

Management Review Time!

And I presented the different statistics, and everyone asked for forgiveness and changed their ways immediately.

Not. (Did you expect otherwise?)

There's a general feeling that things are not going well (discussing the results with the PMs prior to the meeting helped alleviate "unpleasant" surprises, and everyone knew what was going to be displayed). The mood of this year's meeting was different from the former ones - less bringing up memories (read: excuses). The meeting finally finished (after 5 great fun hours) before we reached any conclusive actions.

And that would be the next step. There was some additional information people requested, which we'll collect and then decide what we'll do differently. Here's where the challenge lies - again this would be between me and the R&D Manager (round 23).

I've already suggested the "extreme" idea that we limit the number of open bugs per person. Anyone going over the limit will have to close bugs before moving on to new features. People liked it (although the general feeling is that developers wouldn't like it - I say: tough), and it was even backed up by one PM's experience at a former company.
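
To show what that gate could look like in practice, here's a minimal sketch; the limit, names, and counts are made up, and a real version would pull the counts from the bug tracker.

    OPEN_BUG_LIMIT = 10  # hypothetical threshold

    open_bugs_per_dev = {"alice": 12, "bob": 7, "carol": 15}  # made-up counts

    for dev, count in sorted(open_bugs_per_dev.items()):
        if count > OPEN_BUG_LIMIT:
            print(f"{dev}: {count} open bugs - close {count - OPEN_BUG_LIMIT} before taking new features")
        else:
            print(f"{dev}: {count} open bugs - OK to take new features")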

Another PM liked the idea of automated testing as a safety net, which just proves that if you repeat your message enough times, it finally sinks in.

I'm still optimistic about making change. But, darn it, it takes a long time!

The Lightness of Raising a Web 2.0 Site

Guy Kawasaki, of former Apple fame and currently a VC, launched a website called Truemors. The site gets people to post and discuss rumors about everything. But that's not important.

The important stuff is how he did it. And how much it cost - $12K. With all the differences from what we do in our company, it's still amazing what you can do with a small sum. It helps if you're famous, too.

With today's technology and skills, it takes a lot less than what you needed before. You just need to have a great idea.

I recommend Guy's blog for entrepreneurs and wannabes. It's very interesting to see how e-business is done today.

My Favorite Podcasts - HanselMinutes

I guess this would be my second favorite, following Manager-Tools. How do I rate them? I guess it's by how long I wait from when an episode is published to when I listen.

Scott Hanselman is a Boy Wonder. He's the chief architect at his company, works with Microsoft .NET technologies, but dabbles with many other technologies on the side. His podcasts (which usually come out weekly) are about half an hour long and are interviews - either with him or by him. They cover technologies from Microsoft and others. Although there's a Microsoft affinity, there's no sucking up there. If MS did something wrong, he'll tell you.

Apart from Scott's blog, which is full of great information, there's also Scott's annual tool list. This guy makes productivity a holy word, and so he dedicates his time to exploring new ways to be more effective. The list of tools (some paid, some free) is updated yearly, and I regularly go back to it to see if I can fit some tools into my belt.