From Legacy Code To Testable Code – Introduction

The word "legacy" has a lot of connotations, mostly bad ones. We seem to forget that our beautiful code reaches "legacy" status three days after we write it. Michael Feathers, in his wonderful book "Working Effectively With Legacy Code", defined legacy code as code that doesn't have tests. There is truth in that, although it took me a while to fully understand it.

Code that doesn't have tests rots. It rots because we don't feel confident enough to touch it; we're afraid to break the "working" parts. Rotting code doesn't change, staying the way we first wrote it. I'll be the first to admit that whenever I write code, it comes out in its ugly form. It may not look ugly immediately after I write it, but if I wait a couple of days (or a couple of hours), I know I will find many ways to improve it. Without tests I can rely either on the automatic capabilities of refactoring tools, or on pure guts (read: stupidity).

Most code doesn't look nice after writing it. But nice doesn't matter.

Because code costs money to maintain, we'd like it to be easy to understand and to minimize debugging time. Refactoring is essential to lowering maintenance costs, and therefore tests are essential.

And this is where you start paying

The problem, of course, is that writing tests for legacy code is hard. The code is convoluted, full of dependencies both near and far, and without proper tests it's risky to modify. On the other hand, legacy code is the code that needs tests the most. It is also the most common code out there - most of the time we don't write new code, we add it to an existing code base.

In most cases, we will need to change the code in order to test it. Here are some examples of why:

  • We can't create an instance of the tested object.
  • We can't decouple it from its dependencies.
  • Singletons are created once and impact the different scenarios.
  • Algorithms are not exposed through a public interface.
  • There are dependencies in the base classes of the tested code.
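As an illustration, here's a hypothetical class (all names invented) that exhibits two of the obstacles above - a singleton that is created once, and a dependency reached through a static call:

```java
// AuditLog is a singleton: created once, shared by every test that runs.
class AuditLog {
    private static final AuditLog INSTANCE = new AuditLog();
    private final StringBuilder entries = new StringBuilder();

    private AuditLog() {}

    static AuditLog get() { return INSTANCE; }

    void record(String entry) { entries.append(entry).append('\n'); }
    String dump() { return entries.toString(); }
}

class OrderProcessor {
    boolean process(String orderId) {
        // The static call hard-wires the dependency: a test cannot
        // substitute a fresh or fake AuditLog for this scenario.
        AuditLog.get().record("processed " + orderId);
        return true;
    }
}

public class Main {
    public static void main(String[] args) {
        new OrderProcessor().process("A-1");
        new OrderProcessor().process("A-2");
        // State accumulates across calls - and would leak across tests.
        System.out.print(AuditLog.get().dump());
    }
}
```

Because the singleton's state survives between scenarios, a second test sees whatever the first one recorded - exactly the "impact the different scenarios" problem.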

Some tools, like PowerMockito in Java or Typemock Isolator for C#, allow us to bypass some of these problems, although they too come with a price: lower speed and code lock-down. The lower speed comes from the way these tools work, which makes them slower than other mocking tools. The code lock-down comes as a side effect of extra coupling to the tested code - the more we use the power tools' capabilities, the more they know about the implementation. This leads to coupling between the tests and the code, and therefore makes the tests more fragile. Fragile tests carry a bigger maintenance cost, and therefore people try not to change them, or the code. While this looks like a technology barrier, it can be overcome by process and leadership (e.g., once we have the tests, encourage the developers to improve the code and the tests).

Even with the power tools, we'll be left with some work. We might even want to do some work up front: some tweaks to the tested code before we write the tests (as long as they are not risky) can simplify the tests. Unless the code was written simple and readable the first time. Yeah, right.

We’ll need to do some of the following:

  • Expose interfaces
  • Derive and implement classes
  • Change accessibility
  • Use dependency injection
  • Add accessors
  • Rename
  • Extract method
  • Extract class
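As a sketch of the "derive and implement classes" option (class and method names are invented for illustration), a protected method can act as a seam that a test subclass overrides:

```java
class PriceService {
    // The markup logic we want to test.
    int quote(String sku) {
        return fetchBasePrice(sku) * 2;
    }

    // Protected so a subclass can replace it - this is the seam.
    protected int fetchBasePrice(String sku) {
        // Imagine a slow network call here in the real class.
        return 100;
    }
}

// In the test code, derive and override the dependency:
class TestablePriceService extends PriceService {
    @Override
    protected int fetchBasePrice(String sku) { return 10; }
}

public class Main {
    public static void main(String[] args) {
        // The override lets the test control the input to the markup logic.
        System.out.println(new TestablePriceService().quote("any-sku"));
    }
}
```

The production class is barely changed (one method made protected), yet the test now controls the dependency without any mocking tool.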

Some of these changes introduce "seams" into the code. Through these seams, we can insert probes to check the impact of our changes. Other changes just help us make sense of the code. We can argue whether these are refactoring patterns or not. If we apply them wisely, and more importantly SAFELY, we can prepare the code to be tested more easily and make the tests more robust.

In the upcoming posts I'll look into these in more detail.


Testability != Good Design

It's a funny thing, testability. It's not really defined, or rather, it is defined poorly.

If testable code is code we can test, then all code is testable. We can test it through unit tests. If that's hard, we can move to functional tests. And if all else fails, we can do manual testing. Even performance testing exercises the code. There might be code that tests cannot exercise, but then why did we write it in the first place?

When we talk about testability we usually mean "hard to test". That is a whole discussion by itself, because "hard to test" is also subjective. If we follow the theme of testing as an investment to minimize future maintenance costs, then "hard to test" translates to "costly to test" or "risky to test".

So here's another fun fact: well-factored code requires minimal changes, if any, for it to be tested. It's no surprise that focusing on SRP (the Single Responsibility Principle) makes code testable.

Because well-designed code is testable, we tend to correlate the two. Hard to test code is usually not factored well, and the two seem to go together. Moreover, we may infer that testable code leads to good design. It can in many cases, but not always.

Here's an example. We have a Customer class with a method that gets the balance of an Account from the Bank through a static call:

public boolean isOverdrawn(String name, int limit) {
    return Bank.getAccount(name).getBalance() > limit;
}

This is a very straightforward, readable method. Its use of a static method (getAccount) makes it "untestable" in Java and other languages. Again, by "untestable" we really mean "hard to test", which then translates to "hard to mock". In our case, using regular mocking tools, it will be hard to mock the static method and control the input.

If we rule out the use of PowerMockito, we need to modify our code to make it "testable". We can refactor it to pass the Account as a parameter. Once the Account is passed as an argument (really as an interface), we can mock the IAccount interface and pass it in. We now have testable code.

public boolean isOverdrawn(int limit, IAccount account) {
    return account.getBalance() > limit;
}
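In fact, with the account as a parameter, a test doesn't even need a mocking framework. A minimal sketch (assuming IAccount has a single getBalance method, so a lambda or hand-rolled stub can stand in for a mock):

```java
interface IAccount {
    int getBalance();
}

class Customer {
    public boolean isOverdrawn(int limit, IAccount account) {
        return account.getBalance() > limit;
    }
}

public class Main {
    public static void main(String[] args) {
        Customer customer = new Customer();

        // Stubs let the test control the balance directly.
        IAccount highBalance = () -> 500;
        IAccount lowBalance = () -> 50;

        System.out.println(customer.isOverdrawn(100, highBalance)); // true
        System.out.println(customer.isOverdrawn(100, lowBalance));  // false
    }
}
```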

But has the design improved?

The method is as readable as before. We exposed the type of the account, although I'm not sure we even needed to know about it. We no longer have the name parameter; instead, we need the caller to extract the account before the method call, while originally it did not need to bother with the Bank at all.

The design has changed, but maybe not for the better. It definitely complicated the calling code.
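To see the extra burden, compare the two call sites side by side (the Bank stub here is a hypothetical stand-in for the real class):

```java
interface IAccount {
    int getBalance();
}

// Hypothetical stand-in for the real Bank from the example.
class Bank {
    static IAccount getAccount(String name) { return () -> 500; }
}

class Customer {
    public boolean isOverdrawn(int limit, IAccount account) {
        return account.getBalance() > limit;
    }
}

public class Main {
    public static void main(String[] args) {
        Customer customer = new Customer();

        // Before the refactoring, the caller only needed a name and a limit:
        //   customer.isOverdrawn("alice", 100);

        // After it, the caller must fetch the account itself:
        IAccount account = Bank.getAccount("alice");
        System.out.println(customer.isOverdrawn(100, account)); // true: 500 > 100
    }
}
```

The knowledge of the Bank moved from the tested method into every caller.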

Now let's try another design change for the sake of testability. This time, instead of extracting a parameter, we'll inject it with a dependency injection container (I'll use Guice). For that we need to modify the Customer class, and add:

private IAccount account;

@Inject
public void setAccount(IAccount account) {
    this.account = account;
}

public boolean isOverdrawn(int limit) {
    return account.getBalance() > limit;
}

Has its design improved now?

For the Customer class, we've added an unnecessary public setter, and a field we didn't need before. If the calling code just used the setter, we'd be in a similar condition to the last example, but using a DI framework makes the calling code more complicated still, adding wiring and configuration on top. (By the way, in .NET it looks a bit better, but not by much.)

You may argue that sacrificing the simplicity of the calling code in order to improve the design of the tested object is OK. But if you're going to test the calling code, you'll need to use the same tricks, and if you're not, well, you just made it more complex and susceptible to bugs. You should test it.

Testable code is not inherently designed better

Sometimes the changes are risky and costly. We need to balance the need for testing with the risk, and how the tools we use impact the design.

And let's remember: this code was not "untestable". We set a constraint not to use PowerMockito, and tried to work around it. We could easily have tested that code as-is.

For years I've heard that tools like PowerMockito and Typemock Isolator encourage bad design, because they allow us to test badly designed code. It sounds bad, but it may be a better solution than making risky changes just so you can test. Sometimes the changes are not even risky, but will create more complex code where it should be simple.

Testing and design are broad skills every developer should have.

As long as you’re making a knowledgeable decision, not based on popular slogans, you’ll be fine.


I Did Scrum and All I Got Was This Lousy Shirt

So this came up a few days ago: “Why Scrum Should Basically Just Die In A Fire”.

My first thought was: “Of course scrum is failing him, he’s not actually doing scrum.”

Second thought was: “He doesn’t understand agile at all, does he?”

Third thought was: "Gil, you're an idiot."

Because I have been saying this was going to happen. The last time was in the "why agile is declining" sequel.

No Surprises

We shouldn’t be surprised that people are struggling with scrum or agile, because change is hard.

Bad experiences, coached or self-inflicted, guide our decisions. We connect the results to the experience, and if there's a name on it, even better. Replace scrum with kanban, SAFe or XP, and I'm sure you'll find people with similar experiences, who won't touch agile again.

We know they are "doing it wrong". That if only they had the right coaching, everything would be better. It doesn't matter.

Here’s a shocker: scrum never succeeds.

When you hear about a successful scrum implementation from the people who went through it, they no longer talk about bare-bones scrum. They talk about a working process that has similarities to scrum. The process is working for the team, and they are using "scrum" as the name they know. Scrum is a toolbox, and a small one at that. It can't do the work by itself, and it needs to be adapted to the team, the organization. To the people.

So regardless of what coaches may tell you, scrum can't work, because it's only a starting point. And we don't declare wins and losses at the beginning.

The Conflict

On one hand, we deserve this. If scrum doesn't deliver, and blows up in our face, regardless of where the fault lies, it's because we, the agile community, are responsible for explaining scrum. If we communicated expectations better, and did not make it seem so simple, we wouldn't be in this mess.

On the other hand, we, or should I say I, know that it works. Not as prescribed, but adapted specifically and gradually to how a team works.

It's easy to sell scrum as a silver bullet, because everybody is doing it. Just not in the same way.

For some it’s a miserable experience, and they call us on it.

The last thing we should do, is blame them for “not understanding agile”.




The Measure Of Success

What makes a successful project?

Waterfall project management tells us it’s about meeting scope, time and cost goals.

Do these success metrics also hold true for agile projects?

Let’s see.

  • In an agile project we learn new information all the time. It’s likely that the scope will change over time, because we find out things we assumed the customer wanted were wrong, while features we didn’t even think of are actually needed.
  • We know that we don’t know everything when we estimate scope, time and budget. This is true for both kinds of projects, but in agile projects we admit that, and therefore do not lock those as goals.
  • The waterfall project plan is immune to feedback. In agile projects, we put feedback cycles into the plan so we will be able to introduce changes. We move from a "we know what we need to do" view to a "let's find out if what we're thinking is correct" view.
  • In waterfall projects, there’s an assumption of no variability, and that the plan covers any possible risk. In fact, one small shift in the plan can have disastrous (or wonderful) effects on product delivery.
  • Working from a prioritized backlog in an agile project means the project can end "prematurely". If we have a happy customer with half the features, why not stop there? If we deliver a smaller scope, under budget and before the deadline, has the project actually failed?
  • Some projects are so long that the people who did the original estimation are long gone. The assumptions they relied on are no longer true; technology has changed, and so has the market. Agile projects don't plan that long into the future, and therefore cannot be measured according to the classic metrics.
  • Quality is not part of the scope, time and cost trio, and is usually not set as a goal. Quality is not easily measured, and it suffers from the pressure of the other three. In agile projects quality is considered a first-class citizen, because we know it supports not only customer satisfaction, but also the ability of the team to deliver at a consistent pace.

All kinds of differences. But they don’t answer a very simple question:

What is success?

In any kind of project, success has an impact. It creates happy customers. It creates a new market. It changes how people think and feel about the company. And it also changes how people inside the company view themselves.

This impact is what makes a successful project. This is what we should be measuring.

The problem with all of those is that they cannot be measured at the delivery date, if at all. Scope, time, and cost may be measurable at the delivery date, including against the initial estimation, but they are not really indicative of success. In fact, there's a destructive force within the scope, time and cost goals: they come at the expense of others, like quality and customer satisfaction. If a deadline is important, quality suffers. We've all been there.

The cool thing about an agile project is that we can gain confidence that we're on the right track, if customers were part of the process, and if the people developing the product were aligned with the customers' feedback. The feedback tells us early on whether the project is going to be successful, according to real-life parameters.

And if we’re wrong, that’s good too. We can cut our losses and turn to another opportunity.

So agile is better, right?

Yes, I’m pro-agile.

No, I don’t think agile works every time.

I ask that you define your success goals for your product and market, not based on a methodology, but on what impact it will make.

Only then can you actually measure success.

Stairway to Heaven

Whenever we learn a new skill, and go deeper, it feels like we’re climbing a stairway to mastery.

A few years ago, I described how people get into unit testing, and their journey along the path, as walking up a staircase. I’ve seen many people go through this.

Our stairs are very slippery. On every step, we can slip and fall off. That's why we get fewer and fewer people on the top stairs. Persistence and discipline keep us moving up.

Most people start by hearing about unit testing. They've heard about it at a conference, or read about it in a blog, but have never tried it. Early adopters don't stay on that step for long, because they are eager to try it. But most people stop there and wait until an opportunity comes up to try it. This can be a new project, or a new team that has the capability to learn. Then they jump in the pool.

When they get to the trial stage, lots of people fail. More precisely, it's not that they fail, but that they conclude that unit testing is not for them. It's easy to find excuses for this - technology, time for learning, and the fact that it's damn hard. So many slip off the stairs at this point. They will probably not start climbing back again, and will dissuade others from taking the trip.

Those who have succeeded, and managed to see the value of unit testing, are usually crazy about it. They bump into another problem: other people don't understand them. It's really hard to imagine going back to no tests, and they think: "How do these people live? Why aren't they using this simple but valuable tool?"

There's an actual dissonance after you've seen the light, and it feels worse because it's really hard to transfer this feeling to other people. When successful, unit testing becomes a team practice, sometimes even an organization practice. Try to explain it to your friends in another organization and they will look at you funny.

The climbers (now we're talking dwindling numbers) who actually get to learn and use TDD get to it after they have some experience in test-after unit testing. Usually you don't get to start with TDD, and when you do, as it happened to me, it's bound to fail. Assimilating this practice takes experience and discipline.

Plain TDD is hard. If you're working with legacy code, you have a lot of work ahead of you. Sometimes, work you're not willing to put in. It's OK, but that's where the cycle of failure begins again. With the same excuses: technology, time, and it's hard. If you happen to succeed, urging others to join is bound to fail.

The few, the happy few, who manage to get to the top, are in testing nirvana. Not only do they control how they work, they influence others to work according to what they consider best practice. Not only is it very hard to find someone like that, they aren't really happy to leave their place.

What happens if one of this group moves to another organization, where the people don't write tests? Even for a good reason, like training them to start moving up their own staircase?

Culture shock. It’s the same clash we saw before, but multiplied. Before, you’ve seen the light and others didn’t. Now you are holding the flashlight, and people still don’t get it.

What does it all mean?

A couple of things.

  • It is a journey. It's long, hard and, most of all, personal. Unit testing is not just a set of tools; you improve and learn, and it's hard to convey this progress to others.
  • We tend to work with people at our level, or at least at the same openness level. We struggle otherwise.
  • Switching between teams causes an actual system shock. There’s a gap people need to reach over and they usually don’t.

It also means that an organizational decision of “everyone starts writing tests tomorrow” is bound to fail. Select the “ready” group first, those that are willing to try, and help them move forward. Plant the seed of awareness in other teams, so they will be ready. When they are, get mentors from the advanced (but not too advanced) teams. Grow everything slowly together.

It is a long tough process.

Get someone, like me, to help manage the process and mentor the people.


Everyday Unit Testing – New Chapter Released!

It's the "Unit Testing Economics" chapter, with insights into how we actually make decisions in the unit testing adoption process. There's also a bonus section on the Organization Strategy.

I'm planning to end the "free" period soon, so get it while you can, or else… pay money.

Read more about it on the "Everyday Unit Testing" blog.

How To Waste Estimations

We like numbers because of their symbolic simplicity. In other words, we get them. They give us certainty, and therefore confidence.

Which sounds more trustworthy: “It will take 5 months” or “It will take between 4 to 6 months”?

The first one sounds more confident, and therefore we trust it more. After all, why wouldn't you give me a simple number, unless you're not really sure about what you're doing?

Of course, if you knew where the numbers came from, you might not be that confident.

Simple and Wrong

In our story, the 5-month estimate was not the first one given. The team deliberated and considered, and decided that in order to deliver a working, tested feature, they would need 8 months.

"But our competitors can do it in 4 months," shouted the manager.

"Well, maybe they can. Or maybe they will do it in half the time, with half the quality," said the team.

"It doesn't matter, because they will get the contract!" The manager got angrier.

The team had nothing to say. They left the room for half an hour. Then the team lead came back and said, very softly: "We'll do it in 5 months".


The project plan said 5 months. No one remembered the elaborate discussion that went down the toilet.

Estimates are requested for making decisions. In our story, the decision was to overrule them. There isn't really a way of knowing who was right. Many companies have thrived by getting a crappy v1 out, and then fixing things in v2. They might not have gotten to v2 if they had done v1 "properly". Then again, it might be that they bled talent, because the developers were unhappy, or not willing to sacrifice quality. Who knows.

The point is: the effort put into estimation should be just small enough to provide the numbers to management. If the team had reached the "8 months" answer after half a day, rather than after dissecting the project dependencies up front for 2 weeks, they would have had more time to work on the actual project.

By the way, they might not have gotten the go-ahead. And then they would have had 2 more weeks to work on other projects. That's good too.

Don't waste time going beyond good enough estimates. Estimate and move on to real work.

PS: Based on a true story.

