Superman VS Batman: The Agile Version

I used to be Superman.

I could do anything I wanted, and no one would tell me I’m wrong. For good reason: I usually wasn’t wrong.

I wasn’t born Superman. I worked hard at it. I learned a lot. I was leading by example. And when I was the smartest guy around, who actually accomplished things, I became Superman.

Let me tell you, it feels great.

But is it good for business?

We all know about the bus factor: the number of people who, if hit by a bus, would halt the project. Super-people tend to hold knowledge that other people don’t have. Maybe it’s ego or competition, or maybe it’s just that others don’t want that knowledge. They feel safe that if something happens, Superman will swoop in and save the day. Sometimes, Superman is not around. Then there’s trouble.

It gets worse, though.

Superman can be wrong. And when Superman makes a mistake, it can be a crucial mistake for the organization.

If our Superman is an architect, and he makes a bad architectural decision for the entire project, it can cost millions. Or, if she’s a team lead and decides on a new process for the team, a bad process can cause the team to slow down, and sometimes break up.

Holy sidekick, Batman!

None of us are mistake-proof. Even if we, or others, tell ourselves that.

Agile talks about short feedback loops. Feedback is a good start, but it’s not enough.

People get to the Superman throne because of their expertise. We tend to appreciate expertise in a knowledge-based environment. In order to discuss something with Superman, we need to get close enough to his level, or at least high enough that he notices us and takes our advice to heart.

In short, we don’t need Superman. We need Batman and Robin.

That’s not easy, from an organizational point of view. Imagine how long it took to get one superhero, and bring him to that level. Now we need two?

This is just like any other system with a single point of failure. It’s pure risk management. If you’re aware of the risk, you can take action. Organizations that manage their risks well not only know who their supermen are. They also put strong, smart people next to them, to encourage discussion and to counter their superpowers.

Superman may not like it, and that’s ok.

After all, you need Batman.

Image source: http://moviepilot.com/posts/2014/03/17/batman-vs-superman-insider-scoop-on-batfleck-1280126

Why Testers Are Losing The ISO 29119 Battle

ISO 29119 is making waves in the testing community over its content and its necessity. Its focus on test planning and documentation has modern testers asking why.

I’ve worked in an ISO-certified company, so maybe I can shed some light from my past experience.

  • ISO standards are written by committees, which are made of people. We assume that these people are experts, but mostly they are not. Some of them are academics who have never experienced real-life work.
  • There’s a big hidden assumption that adhering to the standards ensures quality and meets stakeholder requirements. This is why management gives them the go-ahead, apart from the certificate on the wall.
  • ISO certifications are for organizations, not people. Most of the work in handling the acceptance, checks and certification is not done by the practitioners. For example, in my former company, the person in charge of adherence to the ISO standards in R&D was not a developer or tester, and sat outside of R&D. She was very good at what she did, but did not have any product development experience.
  • There’s a great fear of failing the ISO inspections, so organizations would rather follow a standard to the letter, one that spells out exactly what they need to do, if such a standard exists.
  • ISO is really simple: Define your process, prove you adhere to it, get certified. In my former life, all development procedures were NOT dictated by ISO, but by the organization internally. They were derived indirectly from ISO 9000. To get the annual certificate we needed to show proof that we actually did what the procedures said.
  • As non-practitioners lead the process, there’s a need to create common ground in terms of process and language. How would a general QA manager (a non-tester) know that the development team is doing code reviews? The practitioners need to document it. In the ISO 29119 case, that means documenting test planning and test results. These are the easiest things to put in process form; real testing skills are hard to put into a process, so this is the easy way out. Don’t worry, it’s not just testing: every high-skill profession suffers the same fate.
  • Of course, test plans and test result documentation do not guarantee any quality, or guide people to test better.
  • Finally, the environment plays a part too: even if you’re not in a regulated industry that is required to meet standards, you’ll find a competitor who does get the certificate, which will push your organization to take it on too. The ISO institute gets money from the inspections, so it has an interest in pushing the standards to be adopted by many.

Let’s summarize:

An organization wants the certification because it needs it, or believes it will improve its quality. It looks for the best, simplest, and least risky way to get it. The ISO organization provides the simplest common ground, one that works with the “documentation as proof” concept. Everybody’s happy.

Except the practitioners.

The uproar against traditional ISO standards is not new. When we decided that code reviews were needed, we had to document them, just so we could meet the standard. I needed to sneak in “documentable” features and tweak the form to pass an inspection. And all I wanted was an abstract framework for doing code reviews, where people pair and inspect the code for problems, not the location of braces. But that’s life. I had to compromise.

There’s a whole system out there that works fine for most people. A vocal minority won’t change that.

The antidote does not come from regulation and standards. It comes from the responsibility of the organization to learn and improve.

Do you really want to be just a standard organization, or a bit more?

Everyday Unit Testing: New Version!

To know what’s in it, and what’s coming next, read the “Everyday Unit Testing” blog.

If you haven’t gotten your copy yet, go get it, while it’s still free.

Feedback is welcome!

Image source: http://pixabay.com/en/new-red-computer-icon-white-round-36994/

What Is An MVP?

We describe an MVP as a minimum viable product, and sometimes we turn the definition into a minimum marketable product. Regardless of how you look at it, it’s a product that gets attention, if not money.

The MVP can be a first stop in the life of a product. It can also be the last one – if there’s no interest in it, why bother developing it further?

An MVP can sell, if it’s interesting enough. Or it may provide feedback that requires us to pivot and create a new, different MVP.

The thing is, we need to define what we’ll consider a success or failure, in order to move forward.

Last week, I promoted the “Everyday Unit Testing” book on Twitter:

I got this response from Peter Kofler:

At the time, the book included just the tools section. It’s just one chapter.

Is it an MVP?

Peter’s opinion is that a book about unit testing, with just a tool section, is not an MVP. He expects more from a book on such a large scope.

For me, one chapter was enough to see if there’s enough interest for me to continue writing the book. In that context, the experiment is a success. In fact, although the book is free, some customers have already paid for it.

So is it, or is it not an MVP?

The viability of the product is really contextual. It depends on the market, the timing of its release, the competition, and many other things. I believe Peter (having met him) is part of my market for the book, but for him, minimum means more.

What you need to do is define your expectations of your product: what the words Minimum, Viable and Product mean to you. This is your experiment, and only you can check whether it has succeeded or failed, and decide what to do next. Once it’s out there, you get feedback. It may not be what you expected, maybe because the customers expected something else.

And then it’s your turn again.

In my case, the next chapter of “Everyday Unit Testing” will be out in a few days.

Image source: http://professionalgaminglife.com/pro-gamers/gamers-team-directory/teams-directory-m/most-valuable-player/

Test Attribute #10 – Isolation

This is the last, final, and 10th entry in the ten commandments of test attributes that started here. And you should read all of them.

We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies, and allow us to test the code in isolation.

That’s code isolation. But test isolation is different.

An isolated test can run alone, in a suite, in any order, independent of the other tests, and give consistent results. We've already identified, in Footprint, the different environment dependencies that can affect the result, and of course, the tested code has something to do with it.

Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.

To give an example, I summon the witness for the prosecution: The Singleton.
Here’s some basic code using a singleton:

public class Counter
{
    private static Counter instance;
    private int count = 0;

    public static void Init()
    {
        instance = new Counter();
    }

    public static Counter GetInstance()
    {
        return instance;
    }

    public int GetValue()
    {
        return count++;
    }
}

Pretty simple: The static instance is initialized in a call to Init. We can write these tests:

[TestMethod]
public void CounterInitialized_WorksInIsolation()
{
    Counter.Init();
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(0, result);
}

[TestMethod]
public void CounterNotInitialized_ThrowsInIsolation()
{
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(1, result);
}

Note that the second test passes when running after the first. But if you run it alone, it crashes, because the instance is not initialized. Of course, that’s the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case.

By the way, we’re not just relying on the order of the tests – we’re relying on the way the test runner runs them. It could be in the order we've written them, but not necessarily.

While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run.
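As a hypothetical illustration (the class and test names here are made up for the example), this is how shared state inside the test class itself creates an order dependency:

// Hypothetical example: state shared between tests through a static field.
[TestClass]
public class OrderDependentTests
{
    private static List<string> names = new List<string>();

    [TestMethod]
    public void AddFirstName_CountIsOne()
    {
        names.Add("first");
        Assert.AreEqual(1, names.Count);   // passes only if this test runs first
    }

    [TestMethod]
    public void AddSecondName_CountIsTwo()
    {
        names.Add("second");
        Assert.AreEqual(2, names.Count);   // relies on the previous test having run
    }
}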

Do you know this trick?

public class MyTests : BaseTest
{
    // ...
}

Why not put all common code in a base class, then derive the test class from it?

Well, apart from making readability suffer and debugging excruciating, we now have all kinds of test setup and behavior located in another, shared place. It may be that the test itself does not suffer interference from other tests, but we’re introducing that risk by putting shared code in the base class. Plus, you’ll need to know more about initialization order. And what if the base class is using a singleton? Antics ensue.

Test isolation issues show themselves very easily: once the tests are out of order (ha-ha), you’ll get the red light. The hard part is identifying the cause, because it may look like an “irreproducible problem”.

In order to avoid isolation problems:
  • Check the code. If you can identify usage patterns like a singleton, be aware of it and put that knowledge to use: either initialize the singleton once before the whole run, or reset it before every test (see the sketch after this list).

  • Rearrange. If there are additional dependencies (like our counter’s increment), start thinking about rearranging the tests. Because of the way the code is written, you’re starting to test more than just small operations.

  • Don’t inherit. Test base classes create interdependence and hurt isolation.

  • Mocking. Use mocking to control any shared dependency.

  • Clean up. Make sure that tests clean up after themselves. Or, instead, set up a clean state before every test.
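
Here’s that sketch for the singleton case: resetting Counter before every test with MSTest’s TestInitialize. This is one way to do it, shown as an illustration:

// Reset the singleton before every test, so no test depends on a previous one.
[TestClass]
public class CounterTests
{
    [TestInitialize]
    public void ResetCounter()
    {
        Counter.Init();   // every test starts with a fresh, initialized instance
    }

    [TestMethod]
    public void CounterInitialized_ReturnsZero()
    {
        var result = Counter.GetInstance().GetValue();
        Assert.AreEqual(0, result);
    }
}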
Isolation issues in tests are very annoying because, especially in unit tests, they can easily be avoided. Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.

Test Attribute #9 – Deterministic

This is the 9th post in the Test Attribute series that started here. To learn more about testing, contact me.

I keep hammering on trust and how it’s crucial that we trust our tests. If a test is deterministic, it raises the level of our trust. If it isn’t, we may question its result, and then we’ll question other tests as well.

Let’s start with a simple example. What’s wrong with this picture?

public class CreditCard
{
    DateTime expirationDate = new DateTime(2017, 12, 15);

    public bool IsExpired()
    {
        return (DateTime.Now > expirationDate);
    }
}

This is the tested code. So we can write this test:

[TestMethod]
public void IsExpired_Today_False()
{
    var card = new CreditCard();
    Assert.IsFalse(card.IsExpired());
}

Which passes. Until a specific day arrives, and then it fails.
Or this code:

public class Settings
{
    public string GetFirstValue()
    {
        var lines = File.ReadLines("config.sys");
        return lines.First();
    }
}

And its test:

[TestMethod]
public void FirstLine_OfConfigSys_ContainsDevice()
{
    var settings = new Settings();
    var result = settings.GetFirstValue().Contains("Device");
    Assert.IsTrue(result);
}

That passes, as long as it finds the file where it expects it, and nobody edits it “the wrong way”.
These may look like dependencies we can’t trust all the time. It goes deeper than that.

Never Assume

When we write code, we have many assumptions. We usually code the “happy path” first, assuming nothing will go wrong. The other paths are the ones where something might go wrong. Unfortunately, we can code and test only the things we think of. Actually, writing tests helps us think of other cases. The downside is that if we use a library we didn’t write, we think less about its side effects.

To remove uncertainty from our tests, we need to remove dependencies on:
  • Geographical location
  • Time
  • Hardware speed, CPU, memory
  • Files and data, both existing and missing
The obvious solution is to mock the dependencies, or set up the system to be independent. By mocking the date we can check the “before” and “after” cases, or “plant” the file and simulate reading it.
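
For the date case, here’s a minimal sketch of making CreditCard independent of the real clock. The IClock interface is an assumption for the example, not part of the original code:

// A hypothetical IClock abstraction lets the test control "now".
public interface IClock
{
    DateTime Now { get; }
}

public class CreditCard
{
    private readonly IClock clock;
    private DateTime expirationDate = new DateTime(2017, 12, 15);

    public CreditCard(IClock clock)
    {
        this.clock = clock;
    }

    public bool IsExpired()
    {
        return (clock.Now > expirationDate);
    }
}

[TestMethod]
public void IsExpired_DayAfterExpiration_True()
{
    var fakeClock = A.Fake<IClock>();
    A.CallTo(() => fakeClock.Now).Returns(new DateTime(2017, 12, 16));

    var card = new CreditCard(fakeClock);

    Assert.IsTrue(card.IsExpired());
}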

However, mocking comes with its own drawbacks, so you should consider the tradeoff.

And sometimes, mocking isn’t an option. We had a performance test that was supposed to run under a certain time limit. It started failing and passing intermittently. Instead of investigating the problem, we said: “Oh, leave it, this test is just like that”. And we missed opportunities to fix real problems.

To handle the root cause – assumptions – we go back to our friends, reviews. Review the tests and the code, and try to either validate the assumptions or invalidate them (and in that case, remove the code and the test). Unverified assumptions cause more than bugs. The code you’ve added will make it harder to add new code in the future. Use code that not only works, but is also valid.

Next time: Isolation.

Test Attribute #8 – Truthiness

This is the 8th post, soon to be the 8th wonder of the world, in the Test Attribute series that started here. To learn more about testing, contact me.

I want to thank Steven Colbert for coining a word I can use in my title. Without him, all this would still be possible, had I not given up looking for a better word after a few minutes.

Tests are about trust. We expect them to be reliable. Reliable tests tell us everything is ok when they pass, and that something is wrong when they fail.

The problem is that life is not black and white, and tests are not just green and red. Tests can give false positive (fail when they shouldn’t) or false negative (pass when they shouldn’t) results. We’ve encountered the false positive ones before – these are the fragile, dependent tests.

The ones that pass, instead of failing, are the problematic ones. They hide the real picture from us, and erode our trust, not just in those tests, but also in others. After all, when we find a problematic test, who can say the others we wrote are not problematic as well?

Truthiness (how much we feel the tests are reliable) comes into play.

Dependency Injection Example

Or rather, injecting an example of a dependency.

Let’s say we have a service (or 3rd party library) our tested code uses. It’s slow and communication is unreliable. All the things that give services a bad name. Our natural tendency is to mock the service in the test. By mocking the service, we can test our code in isolation.

So, in our case, our tested Hotel class uses a Service:
public class Hotel
{
    public string GetServiceName(Service service)
    {
        var result = service.GetName();
        return "Name: " + result;
    }
}

To know if the method works correctly, we’ll write this test:

[TestMethod]
public void GetServiceName_RoomService_NameIsRoom()
{
    var fakeService = A.Fake<Service>();
    A.CallTo(() => fakeService.GetName()).Returns("Room");

    var hotel = new Hotel();

    Assert.AreEqual("Name: Room", hotel.GetServiceName(fakeService));
}

And everything is groovy.
Until, in production, the service gets disconnected and throws an exception. And our test says “B-b-b-but, I’m still passing!”.

The Truthiness Is Out There

Mocking is one example of how prescriptive tests obstruct the real behavior, but it’s just an example. The same happens when we test a few cases but don’t cover others.

Here’s one of my favorite examples. What’s the hidden test case here?

public int Increment()
{
    return counter++;
}

Tests are code examples. They work to the extent of our imagination of “what can go wrong?” Like overflow, in the last case.

Much like differentiation, truthiness cannot be examined by looking at a single test. The example works, but it hides a case we need another test for. We need to look at the collection of test cases and see if we covered everything.
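
For instance, here’s what the missing overflow test might look like. This is a hypothetical sketch: it assumes the Increment method lives in a class called Tally with a constructor that seeds the internal counter, neither of which appears in the snippet above.

// Hypothetical: assumes a Tally class wrapping Increment(), with a seeding constructor.
[TestMethod]
public void Increment_PastIntMaxValue_WrapsAround()
{
    var tally = new Tally(int.MaxValue);

    tally.Increment();   // returns int.MaxValue; the counter silently wraps around

    // With C#'s default unchecked arithmetic, the next value is int.MinValue.
    Assert.AreEqual(int.MinValue, tally.Increment());
}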

The solution doesn’t have to be a test of the same type. We can have a unit test for the service happy path, and an end-to-end test to cover the disconnection case. Of course, if you can think of other cases in the first place, why not unit test them?
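
If we do unit test the disconnection case, a minimal sketch might look like this, assuming we decide that GetServiceName should let the service’s exception bubble up:

// A sketch only: assumes GetServiceName lets the service's exception propagate.
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void GetServiceName_ServiceDisconnected_Throws()
{
    var fakeService = A.Fake<Service>();
    A.CallTo(() => fakeService.GetName())
        .Throws(new InvalidOperationException("Service disconnected"));

    var hotel = new Hotel();
    hotel.GetServiceName(fakeService);
}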

So to level up your truthiness:
  • Ideate. Before writing the tests (and, if you’re doing TDD, the code), write a list of test cases. In a notebook, on a whiteboard, or my favorite: empty tests (see the sketch after this list).

  • Reflect. Often, when we write a few tests, new test cases come to mind. Having a visual image of the code can help us think of other cases.

  • Beware the mock. We use mocks to prescribe dependency behavior in specific cases. Every mock you make can be a potential failure point, so think about other cases to mock.

  • Review. Do it in pairs. Four eyes are better than two.
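
Here’s that sketch of empty tests as a to-do list of cases (the test names are hypothetical placeholders, using MSTest):

// Empty tests as a list of cases we still plan to cover.
[TestMethod]
public void GetServiceName_ServiceReturnsNull_ReturnsNamePrefixOnly()
{
    Assert.Inconclusive("Not written yet");
}

[TestMethod]
public void GetServiceName_ServiceTimesOut_Throws()
{
    Assert.Inconclusive("Not written yet");
}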
Aim for higher truthiness. Higher trust in your tests will help you sleep better.

Next time: Deterministic.
For training and coaching on topics like this, contact me.

Image source: https://www.flickr.com/photos/yodelanecdotal/3611536356/