Let me see - I developed (or was part of a team that developed):
1. Utility (those were the days...)
2. Application (this was a bigger project, which got bigger)
3. Framework (I should have made a U-turn here)
4. Platform (I designed and made parts of it. Thought they would live forever...)
And I certainly can see where the second coming of the Utility can make life simpler. Again...
There's the main integration branch where people check in fixes. But some fixes are planned for a later build, so they are not checked in. In a recent discussion, we also decided that fixes are marked as done in the bug tracking system only when they are integrated into the main branch.
(Consider a "private fix" that is based on another fix. The base fix is delivered into the main branch, but the private one is not. So the private fix is never really tested in the real environment, which could change within a few builds. And I don't even want to think about what happens if the two share a file.)
A staged delivery method creates opportunities for failure in the future (or even now).
I think the main point I got from the post is the cost calculation: integrating the private fixes versus fixing the integration method. While I'd estimate much lower numbers here, fixing the method pays off, at least in the long run (and once I check, I'm sure it will turn out true in the short term as well). The trouble is that there's never enough time to do the right thing.
By the way, Johanna's blog is a great resource for agile development and management. Also, on teamAgile.com, Roy Osherove has collected and conducted interviews, three of which are with Johanna. Check it out.
Danko compares real-life estimation (where normal people take abnormal occurrences into account) to project estimation (where normal project managers don't). This is so true on so many levels. Been there, and unfortunately done it.
I'm waiting for the follow-up post. This Fibonacci series estimation looks mighty interesting.
Newbies should get feedback and be open to learning. Actually, experts should too. Most of them became experts this way.
Learning TDD, like everything, is a process. Pick up the good habits, one or a few at a time, and do not try everything at once - small victories.
And as for some comments on the original blog post - I believe the post should be read as an opportunity for improvement, not as disparaging the newbie's level of knowledge.
The way I react today to untestable code makes me realize how far I've come as a developer. A long way from the "just-make-it-work" manager I used to be.
You need a translator. If you're lucky, it's an automaton. Sure, you need to build it, but it will do most of the work for you. But if your luck's cup is not as full, then it's a human translator.
We have two systems - one for bug tracking, and the other for test management. Although they both reside on a Domino server, and both are custom built, they do not know how to communicate. So in order for the testers to create a test plan (i.e. which bugs to verify with which procedure they run), they have to do some Excel data crunching.
Today, after adopting the new process one of the test team leads proposed, I made some modifications to the database in order to save some of the output there. But removing the manual translation entirely will take a lot more (changes to the new DBs, then more scripts).
Could this have been foreseen as a possible workflow years and years ago, when we built the database? Maybe, but I doubt it. If and when we go forward with replacing the database with more current tools, we know integration must be part of it.
But for now, we keep the human in the loop. The cost is not too big, but it's still a cost. And less automation means more room for errors. One more opportunity for improvement.
Scott describes himself as a geek. When he saw a CD, he wanted to see how it works (a quote from the last podcast, "Interns"). And you can read in his various blog posts the passion he has for technology.
Passion is very important. I think it's the greatest motivator. Sure, you can have external motivation - money, dollars, shekels (I don't care, just gimme). But it's the internal engine fire that keeps you going, and helps you jump through fiery hoops.
I like learning. A lot. I like to absorb information and apply it. It's a great feeling to say "I learned that all by myself. Then I used it to build that and it actually works!". Later it was achievements of my team I enjoyed. Today I also take pride when people I hired as teammates get promoted and achieve their goals. This is like saying "despite all I've done to this guy, he's succeeding!". Actually more like "So I didn't mess up too bad with him/her!". I spread my passion over different areas. But I still come back to my knowledge thirst.
Like Scott described in Blue Badge, the best is to do the things you care about the most, combined with what you do best, and get paid for it. This takes planning and alignment. And I'm working on those.
The auditor looked at them and asked, "Why aren't there names on the forms?". I said proudly (as it was my idea), "I want people to buy into the process, and if we want them to report bugs and other findings in their code, I'd rather let them stay anonymous and report than not report at all".
"Well, if you told them to do it - they would" he said. Yeah, right. I know my method actually worked, and it was based on showing the developers the value they get from the process, rather than just telling them.
Although you should ask people from different fields to get different views, remember to set your expectations accordingly.
At the end of the session, I was reminded of a question that came up during the last ISO audit. The auditor, knowing I used to be a developer and PM, asked me something like "you're not doing any development now, are you?" (read: if you do, you probably mix things that should remain separate). Of course, I said "not anymore". (I'm not the boat rocker I used to be.)
I'm tempted to use "separation of concerns" here. Although I'm humble most of the time (OK, only while I'm sleeping), I see this opportunity to help with my experience, and it makes me feel good that people still recognize this. This is not detrimental to the development process. (OK, there was some boat-rocking involved). But mainly, it's using the resources to make a better product.
So let that be a lesson to you. Use whatever resources you have to solve the problem. Ask people you know, and maybe from different fields – you'll get a better answer.
I don't think time tracking enforces anything, since it measures the time you were at work, not the time you were actually working, and not how well you manage your time.
In some cases, the time log showed things the PMs already knew - because of flex time, people would come in late and work late, not using team collaboration effectively. The PMs did not use this information to give feedback to these people, and accepted it. So no use there.
Since PMs don't really care how you spend your time, as long as you keep your commitments, it doesn't matter to them how much time you actually spent. It could be said that if you went over your committed plan, then you don't spend your time effectively, but there's no direct correlation, and again, what matters more is keeping your commitments.
When I looked into the project plans and compared the big variations, it was always the integrations that exceeded their planned time a lot (I read it as bad estimation). This is where Project comes in handy, as it shows you more detailed information.
So apart from tracking missing days, which are needed for different finance and HR calculations, I see the cancellation as a good thing - less maintenance and related costs of time log activities. Not to mention that it sends a message of trust to the employees, while not losing anything.
I have a small task scheduled to run on a server to collect bug information. It ran every day at 3am, until a month ago. Since it's not on my radar, it took me 2 weeks to notice. It took a while, but then I remembered: The network policy made me change my password, which should also be updated in the Task Scheduler. I fixed that.
Then last Thursday I saw it had stopped working again. I had already found a way to revert to my old password but, obviously, did not remember to reset it in the task as well. So I fixed that.
But that did not fix the problem. After thinking about it for an hour I made a decision: It's time to take a look at the logs. Of course the answer was there. (For those in the know, it's the equivalent of RTFM).
Apparently, something happened when we published a new version of the Notes database. This something was my responsibility - I should have checked that the needed view was there and had not been erased. Which, of course, it had. I set everything back in place, and I hope the collection will resume tonight.
- Even after unit testing and system testing, the environment can still change the behavior of the application. And you may not even know how.
- Make sure you have error logging in the application.
- Use the logs.
- Stop using "something apparently happened". It sounds even dumber when you say it yourself.
So for instance, if you wanted feedback on a feature, you would either communicate it verbally, send screen shots, or wait for the release.
Then the managers rebelled. We saw the obvious problem here - our customers needed to actually use the software to give feedback, sooner rather than later.
So we came up with the concept of a prototype. In early stages it would still be screenshots, then a mock-up application, and later the application itself, but with not everything in place.
And since we (I'm still in the PM role here) thought feedback was the most important issue, apart from showing the customer we actually made progress, we insisted on some flexibility. Less process, we said - wouldn't it be nice if there weren't a full release process?
And here we are today (I'm switching my hat now...)
One of the teams released a prototype, due to pressure by the customers to see progress. The decision to release was between the project manager and the product manager. The R&D manager was not in the meeting (although in the know), not to mention QA.
We do not have a process for releasing prototypes. It remained vague and open to interpretation. But instead of talking about bureaucracy and process, let's talk essence: I (QA) want to make sure that what we're releasing will not end up thrown back at us. We've been there too many times, when a release under external pressure came back to haunt us.
The build passed the sanity test, and the new features looked OK, but were not tested. There are bug fixes for high severity bugs, but some were not verified by the testers.
So on one hand, it looks OK for its purpose - getting feedback on the new features. On the other hand, this is a very risky build, as it was not tested enough.
We need to formalize preliminary releases in some way and provide some kind of bar to measure releasability (I'm not sure it's a real word, but I like it). I guess it would be just as useful to measure releasability for full releases... The problem, of course, is that no matter how high we set the bar, external pressure will always override it.
At least I made a few people think about the lightness of releasing.
It's very interesting, and I need to imagine how you'd TDD this thing. It is a leap outside the OO world, as some OO rules are broken for the benefit of readability.
[via Jeremy Miller]
A bug that was discovered was the trigger, and the three team members involved were asked to analyze what had happened, why, and how it can be prevented in the future. The task was given to them only, but since I had already intervened by giving them the draft form I prepared, one of them asked me to participate.
The first thing was to focus - they weren't sure how to start. I started asking questions. I already had the background for the defect, and my goal was to stay as far away from the technical details as possible.
So starting at the beginning: Was the requirement clear? All agreed that it was. So we moved on to design processes and communication. All participants were sincere and open. So finally we came to the following points:
- The feature implementation was based on an engine developed in a former release. So the solution had to fit the former implementation. Ironically, the engine was generic in its design, but did not anticipate this usage scenario.
- The feature did not have a complete design, and the interfaces and content between the different components were not completely defined. So an impedance mismatch emerged, which caused the bug to appear.
- Although planned originally, the different members did not develop in parallel, but in different time segments. Although there was a feature lead (the one in charge of integrating the feature), there was no team integration - just the lead, with multi-tasked help from the others, who were working on other things at the time.
- In order to make sure the feature was implemented correctly, one of the team members needed to run a test procedure for the feature, but ran just parts of it.
- The current design is not documented anywhere. But we can do brain surgery to remove it from the people's heads.
- Wrong reuse of component - The engine was not modified to enable this feature, and the solution was patched around it.
- The design was not complete at the "last responsible moment". No BDUF, but when needed, the people continued based on assumptions, rather than agreement.
- No focused development. No "done"-ness
- Current Knowledge is not being documented.
The funny thing is that the conclusions point in managerial directions. Apart from the responsibility for running the test procedure, everything else - reuse methodology, planning, focused communication and knowledge acquisition - is the responsibility of the team manager.
I'm going to fill in the form tomorrow and see how it looks. But apart from the documentation, I cannot see any action items - just suggestions to change the process. It will be the project manager's decision whether and how to implement them.
I agree with most of the things he said. It is important to choose the correct tool for the correct job. In our projects we have C#, C++, VB (6), SQL, HTML and VBScript. Usually, the developers we got on board were one of two kinds: C++/C# or VB/HTML/T-SQL. The distinction was their purpose in the team. The GUI side, where RAD was needed, came from the VB camp, while algorithmic and hardware-driver implementers came from the C++ camp. So basically, the makeup of the team is divided along those skill lines.
Now we have a framework of tasks based on those skills. But the people coming from the RAD side of the road have less OO skill and low-level API knowledge. So that dictates not only their current tasks, but also their next ones. So now we have two career paths inside the development organization.
And this is OK, to a point. I think the problem starts with maintenance. We chose a technology and we need to stick with it. And finding people today who know VB6 gets harder. Not to mention that moving away from VB6 is hard because of the large code base, and VB is currently not so tempting for potential employees.
So my point is this: yes, it's OK to use the most fitting technology (and while we're at it, I mostly agree with the "just-in-time optimization" idea, although I've pre-selected C++ over VB for driver communication). But there's also a maintenance cost.
Today (if pressed really hard) I'd write VB components with interfaces, using factories, so that a component could later be unplugged and replaced with a C# component. I'd use a C++ COM object, rather than regular static function calls, in order to be able to replace it later.
I'd rather pick the right people with the right skills to learn and prepare for new technologies - using TDD, patterns, etc. - than arbitrarily dump a task on their desk and let them go ahead with it just because they are C++ or VB people. This way I'll try to keep the future maintenance cost lower.
One of my greatest decisions ever (in my view, anyway) was on a project I managed a few years back.
Under pressure, and with no time to waste - sounds familiar, I know - all we could do was produce more bugs. I asked the test team lead to assign testers to developers, to sit with them at their development machines and go over the different scenarios in the software. This was not even on a proper build - just to see that, at least at a high level, things worked as expected.
Soon enough builds got stable, bug count lowered, and the developers got out of the "babysitting" mentality into "we're doing a better job" mentality.
I can't take full credit for this (I was more of a persistent catalyst), but this method is currently being employed in another project by another team. Same pressure (maybe bigger), but at least they are starting to stabilize.
This is painful at the beginning. Apart from having a babysitter (which sends a message of "I don't trust you" if miscommunicated, which I've done), the process raises bugs early, in front of the developer's face. So if not done properly, it can feel like blame instead of help.
But more communication is better, and flagging issues sooner is better. It will work for them, as long as they keep going like this. Good luck!
[via Brian Harry]
One of the things Tim talked about is why working on email on a plane is more productive than doing it in the office. There are two reasons: there are no replies coming in to sidetrack you, and the environment confines you - you cannot do anything else, so you focus on your work.
I see the resemblance to a couple of agile methods here - setting up a fixed iteration confines you and makes you focus on the iteration goals. Also, in pair programming, the pairing "confines" you and focuses both people on the work.