What Is An MVP?

We expand MVP as "minimum viable product", and sometimes turn the definition into "minimum marketable product". Regardless of how you look at it, it's a product that gets attention, if not money.

The MVP can be a first stop in the life of a product. It can also be the last one – if there’s no interest in it, why bother developing it further?

An MVP can sell, if it’s interesting enough. Or it may provide feedback that requires us to pivot and create a new different MVP.

The thing is, we need to define what we’ll consider a success or failure, in order to move forward.

Last week, I promoted the “Everyday Unit Testing” book on Twitter:

I got this response from Peter Kofler:

At the time, the book included just the tools section – a single chapter.

Is it an MVP?

Peter’s opinion is that a book about unit testing, with just a tool section, is not an MVP. He expects more from a book on such a large scope.

For me, one chapter was enough to see if there's enough interest for me to continue writing the book. In that context, the experiment is successful. In fact, although the book is free, there are already some customers who have paid for it.

So is it, or is it not an MVP?

The viability of a product is really contextual. It depends on the market, the time it comes out, the competition, and many other factors. I believe Peter (having met him) is part of my market for the book, but for him, minimum means more.

What you need to do is define your expectations of your product: what the words Minimum, Viable and Product mean for you. This is your experiment, and only you can check whether it has succeeded or failed, and decide what to do next. Once it's out there, you get feedback. It may not be what you expected, maybe because the customers expected something else.

And then it’s your turn again.

In my case, the next chapter of "Everyday Unit Testing" will be out in a few days.

Image source: http://professionalgaminglife.com/pro-gamers/gamers-team-directory/teams-directory-m/most-valuable-player/

Test Attribute #10 – Isolation

This is the 10th and final entry in the ten commandments of test attributes that started here. And you should read all of them.

We usually talk about isolation in terms of mocking. Meaning, when we want to test our code and the code has dependencies, we use mocking to fake those dependencies, allowing us to test the code in isolation.

That’s code isolation. But test isolation is different.

An isolated test can run alone, in a suite, in any order, independent of the other tests, and give consistent results. We've already identified, in the Footprint post, the different environment dependencies that can affect the result, and of course, the tested code has something to do with it.

Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.

To give an example, I summon the witness for the prosecution: The Singleton.
Here’s some basic code using a singleton:

public class Counter
{
    private static Counter instance;
    private int count = 0;

    public static void Init()
    {
        instance = new Counter();
    }

    public static Counter GetInstance()
    {
        return instance;
    }

    public int GetValue()
    {
        return count++;
    }
}

Pretty simple: The static instance is initialized in a call to Init. We can write these tests:

[TestMethod]
public void CounterInitialized_WorksInIsolation()
{
    Counter.Init();
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(0, result);
}

[TestMethod]
public void CounterNotInitialized_ThrowsInIsolation()
{
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(1, result);
}

Note that the second test passes when run after the first. But run alone, it crashes, because the instance is not initialized. Of course, that's the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case.
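One way out, as a sketch (re-initializing per test is my choice here, not the only option), is to reset the singleton before every test, so no test cares what ran before it:

```csharp
[TestClass]
public class CounterTests
{
    // Runs before every test in the class: the singleton always
    // starts fresh, so run order stops mattering.
    [TestInitialize]
    public void ResetCounter()
    {
        Counter.Init();
    }

    [TestMethod]
    public void CounterInitialized_FirstValueIsZero()
    {
        Assert.AreEqual(0, Counter.GetInstance().GetValue());
    }

    [TestMethod]
    public void CounterInitialized_SecondValueIsOne()
    {
        Counter.GetInstance().GetValue();
        Assert.AreEqual(1, Counter.GetInstance().GetValue());
    }
}
```

The "not initialized" case now needs its own explicit arrangement (some way to clear the instance), which is exactly the kind of hoop the singleton makes us jump through.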

By the way, we're not just relying on the order of the tests – we're relying on the way the test runner runs them. It could be the order in which we've written them, but not necessarily.

While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run.

Do you know this trick?

public class MyTests : BaseTest
{
    // ...

Why not put all common code in a base class, then derive the test class from it?

Well, apart from making readability suffer and debugging excruciating, we now have all kinds of test setup and behavior located in another, shared place. The test itself may not suffer interference from other tests, but we're introducing that risk by putting shared code in the base class. Plus, you'll need to know more about initialization order. And what if the base class uses a singleton? Antics ensue.

Test isolation issues show themselves very easily, because once the tests run out of order (ha-ha), you get the red light. The hard part is identifying the cause, because the failure may look like an "irreproducible problem".

In order to avoid isolation problems:
  • Check the code. If you can identify usage patterns like a singleton, be aware of them and put that knowledge to use: either initialize the singleton before the whole run, or reset it before every test.

  • Rearrange. If there are additional dependencies (like our increasing counter), start thinking about rearranging the tests. The way the code is written, you're starting to test more than just small operations.

  • Don’t inherit. Test base classes create interdependence and hurt isolation.

  • Mocking. Use mocking to control any shared dependency.

  • Clean up. Make sure tests clean up after themselves – or instead, clean up before every run.

Isolation issues in tests are very annoying because, especially in unit tests, they can easily be avoided. Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.

Test Attribute #9 – Deterministic

This is the 9th post in the Test Attribute series that started here. To learn more about testing, contact me.

I keep hammering on trust and how it’s crucial that we trust our tests. If a test is deterministic, it raises the level of our trust.  If it isn’t, we may question its result, which will be followed by questioning other tests as well.

Let’s start with a simple example. What’s wrong with this picture?

public class CreditCard
{
    DateTime expirationDate = new DateTime(2017, 12, 15);

    public bool IsExpired()
    {
        return (DateTime.Now > expirationDate);
    }
}

This is the tested code. So we can write this test:

[TestMethod]
public void IsExpired_Today_False()
{
    var card = new CreditCard();
    Assert.IsFalse(card.IsExpired());
}

Which passes – until a specific day arrives, and then it fails.

Or this code:

public class Settings
{
    public string GetFirstValue()
    {
        var lines = File.ReadLines("config.sys");
        return lines.First();
    }
}

And its test:

[TestMethod]
public void FirstLine_OfConfigSys_ContainsDevice()
{
    var settings = new Settings();
    var result = settings.GetFirstValue().Contains("Device");
    Assert.IsTrue(result);
}

That passes as long as it finds the file where it expects it, and nobody edits it "the wrong way".

These may look like dependencies we simply can't always trust. But it goes deeper than that.
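One way to tame the file dependency, as a sketch (creating the file from the test is one option; abstracting the file system behind an interface is another), is to have the test plant the exact file it expects:

```csharp
[TestClass]
public class SettingsTests
{
    [TestInitialize]
    public void PlantConfigFile()
    {
        // The test controls the content, so the assertion below can't
        // be broken by a missing file or by somebody editing it.
        File.WriteAllLines("config.sys", new[] { "Device=XYZ" });
    }

    [TestMethod]
    public void FirstLine_OfConfigSys_ContainsDevice()
    {
        var settings = new Settings();
        Assert.IsTrue(settings.GetFirstValue().Contains("Device"));
    }

    [TestCleanup]
    public void DeleteConfigFile()
    {
        // Leave no footprint for the next test or the next run.
        File.Delete("config.sys");
    }
}
```

The "Device=XYZ" line is made up, of course; the point is that the test owns it.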

Never Assume

When we write code, we make many assumptions. We usually code the "happy path" first, assuming nothing will go wrong. The other paths are the ones where something might go wrong. Unfortunately, we can code and test only the things we think of. Writing actually helps us think of other cases. The downside is that when we use a library we didn't write, we think less about its side effects.

To remove uncertainty from our tests, we need to remove dependencies on:
  • Geographical location
  • Time
  • Hardware speed, CPU, memory
  • Files and data, both existing and missing
The obvious solution is to mock the dependencies, or set up the system to be independent. By mocking the date we can check the "before" and "after" cases, or "plant" the file and simulate reading it.
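For the date case, here's a minimal sketch of such a setup, assuming we're free to change CreditCard's constructor: inject a clock that defaults to the real time, so a test can pin "now" to any date:

```csharp
using System;

public class CreditCard
{
    private readonly DateTime expirationDate = new DateTime(2017, 12, 15);
    private readonly Func<DateTime> clock;

    // The injected clock is an illustration: production code passes
    // nothing and gets DateTime.Now; tests pass a fixed date.
    public CreditCard(Func<DateTime> clock = null)
    {
        this.clock = clock ?? (() => DateTime.Now);
    }

    public bool IsExpired()
    {
        return clock() > expirationDate;
    }
}
```

A test can now construct the card with () => new DateTime(2017, 12, 14) for the "before" case and () => new DateTime(2018, 1, 1) for the "after" case, and both stay deterministic forever.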

However, mocking comes with its own drawbacks, so you should consider the tradeoff.

And sometimes, mocking isn't an option. We had a performance test that was supposed to run under a certain time limit. It started failing and passing intermittently. Instead of investigating, we said: "Oh, leave it, this test is just like that". And we missed opportunities to fix real problems.

To handle the root cause – assumptions – we go back to our friends: review the tests and the code, and try to either validate the assumptions or invalidate them (and in the latter case, remove the code and the test). Unverified assumptions cause more than bugs. The code you've added will make it harder to add new code in the future. Use code that not only works, but is also valid.

Next time: Isolation.

Test Attribute #8 – Truthiness

This is the 8th post, soon to be the 8th wonder of the world, in the Test Attribute series that started here. To learn more about testing, contact me.

I want to thank Steven Colbert for coining a word I can use in my title. Without him, all this would still be possible, had I not given up looking for a better word after a few minutes.

Tests are about trust. We expect them to be reliable. Reliable tests tell us everything is ok when they pass, and that something is wrong when they fail.

The problem is that life is not black and white, and tests are not just green and red. Tests can give false positive (fail when they shouldn’t) or false negative (pass when they shouldn’t) results. We’ve encountered the false positive ones before – these are the fragile, dependent tests.

The ones that pass, instead of failing, are the problematic ones. They hide the real picture from us and erode our trust, not just in them, but in other tests as well. After all, when we find one problematic test, who can say the others we wrote are not problematic too?

Truthiness (how much we feel the tests are reliable) comes into play.

Dependency Injection Example

Or rather, injecting an example of a dependency.

Let’s say we have a service (or 3rd party library) our tested code uses. It’s slow and communication is unreliable. All the things that give services a bad name. Our natural tendency is to mock the service in the test. By mocking the service, we can test our code in isolation.

So, in our case, our tested Hotel class uses a Service:
public class Hotel
{
    public string GetServiceName(Service service)
    {
        var result = service.GetName();
        return "Name: " + result;
    }
}

To know if the method works correctly, we’ll write this test:

[TestMethod]
public void GetServiceName_RoomService_NameIsRoom()
{
    var fakeService = A.Fake<Service>();
    A.CallTo(() => fakeService.GetName()).Returns("Room");
    var hotel = new Hotel();
    Assert.AreEqual("Name: Room", hotel.GetServiceName(fakeService));
}

And everything is groovy.
Until, in production, the service gets disconnected and throws an exception. And our test says “B-b-b-but, I’m still passing!”.

The Truthiness Is Out There

Mocking is one example of obscuring the real behavior with prescriptive tests, but it's just that – an example. The same thing happens when we test a few cases but don't cover others.

Here’s one of my favorite examples. What’s the hidden test case here?

public int Increment()
{
    return counter++;
}

Tests are code examples. They work to the extent of our imagination of “what can go wrong?” Like overflow, in the last case.

Much like differentiation, truthiness cannot be examined by itself. The example works, but it hides a case we need another test for. We need to look at the collection of test cases and see if we've covered everything.

The solution doesn’t have to be a test of the same type. We can have a unit test for the service happy path, and an end-to-end test to cover the disconnection case. Of course, if you can think of other cases in the first place, why not unit test them?
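If we do choose to unit test the disconnection, here's a sketch (the exception type, and the decision that it should propagate out of GetServiceName, are my assumptions about the desired behavior):

```csharp
[TestMethod]
public void GetServiceName_ServiceDisconnected_Throws()
{
    var fakeService = A.Fake<Service>();
    // Simulate the disconnection we saw in production.
    A.CallTo(() => fakeService.GetName()).Throws<InvalidOperationException>();
    var hotel = new Hotel();

    Assert.ThrowsException<InvalidOperationException>(
        () => hotel.GetServiceName(fakeService));
}
```

Whether GetServiceName should let the exception propagate, swallow it, or return a default is a design decision – but now there's a test that states it.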

So to level up your truthiness:
  • Ideate. Before writing the tests, and if you’re doing TDD – the code, write a list of test cases. On a notebook, a whiteboard, or my favorite: empty tests.

  • Reflect. Often, when we write a few tests, new test cases come to mind. Having a visual image of the code can help us think of other cases.

  • Beware the mock. We use mocks to prescribe dependency behavior in specific cases. Every mock you make can be a potential failure point, so think about other cases to mock.

  • Review. Do it in pairs. Four eyes are better than two.

Aim for higher truthiness. Higher trust in your tests will help you sleep better.

Next time: Deterministic.
For training and coaching on topics like this, contact me.

Image source: https://www.flickr.com/photos/yodelanecdotal/3611536356/

Everyday Unit Testing

It’s a book!

I've toyed with the idea for a few years, but only in the last few months have I given it a real push.

"Everyday Unit Testing" is a collection of ideas, examples, experiences, techniques and other things that I've collected and tried over the years with people who started testing. Most people think that unit testing starts and ends with a test framework. Some are not aware of all the things that relate to unit testing: skills like refactoring and naming, keeping tests readable, knowing when and when not to use mocking. A bunch of stuff that rarely appears in one place, if at all. That's where Everyday Unit Testing comes in.

But above all, it’s a beginning of an adventure for me.

The book is an agile product. It will be published incrementally, and I want your feedback to help me change it for the better. In fact, your feedback will tell me if this book is really needed as I think it is.

I’ve just published one chapter (Tools), along with a skeleton table-of-contents. The image is intentionally temporary. Hell, the book site is minimal. 

Like a true MVP.

For the time being, the book is free. I need help, and I hope you can aid me by reading what I write, and spare no criticism.

Even good criticism.

I’m going to continue blogging about all this stuff, and some of it will go into the book. In the everydayunittesting.com site, I’ll blog about the progress of the book, and book related stuff.

I’m excited. Can you tell?

Test Attribute #7 – Footprint

This is the 7th post about Test Attributes, a series that started off as one half of a power couple with the "How to test your tests" post. If you need training on testing, contact me.

When we talk footprint, we’re really talking about isolation. Isolation is key to trust.

Wait, What?

The "checking" part of testing is really about trust. We check because we want to make sure our system works as we anticipated. Therefore, we build a suite of tests that confirm our assumptions about the system. And every time we look at the test results, we want to be 100% sure these tests are not lying to us.

We need to trust our tests, so we won't need to recheck everything every time. We'll know a failure points at a real problem, and that the mass of tests we've accumulated over the years was not an utter waste of our time.

We need to know that no matter:
  • Where in the world the test runs
  • When the test runs
  • On which kind of machine the test runs
  • Who runs the test
  • How many times we run it
  • In what order we run it, if run alone or in sequence
  • Under which environmental conditions we run it
The result will not be affected.

Isolation means we can put a good chunk of trust in our tests, because we eliminate the effect of outside interference.

If we ensure total isolation, we'll know that not only does Test XYZ have reliable results, it also doesn't affect the results of any other test.

There's only one small problem.

We cannot ensure total isolation!

Is the memory status the same every time we run the test?
Did our browser leave temporary files around the last time that might impact how full the disk is?
Did the almighty garbage collector clear all the unused objects?
Was it the same length of time since system reboot?
We don’t know.

Usually these things don't matter. Like in real life, we're good at filtering out the un-risky stuff that can have an effect, but usually doesn't.

So we need good-enough isolation. And that means minimal controllable footprint.
  • Any memory allocated by the test should be freed.
  • Every file the test created should be deleted.
  • Every file the test deleted should be restored.
  • Every changed registry key, environment variable, log entry, etc. should be reverted.
I’m using Test, but I actually mean Test AND Code. So if the tested code does something that requires rollback, the test needs to do it as well.
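As a sketch of what that rollback can look like (the APP_MODE variable is made up for illustration), MSTest's [TestCleanup] runs whether the test passed or failed, which makes it a natural place to restore state:

```csharp
using System;

[TestClass]
public class ModeDependentTests
{
    private string savedMode;

    [TestInitialize]
    public void ChangeEnvironment()
    {
        // Remember the state we're about to change...
        savedMode = Environment.GetEnvironmentVariable("APP_MODE");
        Environment.SetEnvironmentVariable("APP_MODE", "test");
    }

    [TestCleanup]
    public void RestoreEnvironment()
    {
        // ...and roll it back even when the test fails.
        Environment.SetEnvironmentVariable("APP_MODE", savedMode);
    }
}
```

The same save-and-restore pattern applies to files, registry keys, and anything else the test (or the tested code) touches.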

Mister, You Are A Terrible Isolationist!

It’s not the first time I’ve been called that.

“"Sounds a bit extreme, isn’t it? I mean, if I test against a “dirty” database, and don’t rely on any previous state, am I doing things wrong? Do I need to start always from the same database?
Well, yes and no.

If you've analyzed the situation and written a test that doesn't rely on previous state, you've already taken isolation into account. A suite of tests that piles data on the database and doesn't clean it up is operating in a context that doesn't care about footprint.

The question is – what if a test fails? Since you've allowed the tests to mark their territory, you now have failures that are hard to reproduce. That will cost you in debugging, and maybe even in resolving the problem.

As always, it's an ROI balance of risk analysis and mitigation. The thing is, you need to be aware of the balance when making the decision.

Next time: Truthiness.

For training and coaching on topics like this, contact me.
Image source: https://www.flickr.com/photos/scarto/3911280619/

Test Attribute #6 - Maintenance

This is the 6th post about Test Attributes, a series that started off with the supermodel of posts, "How to test your tests". If you need training on testing, contact me.

I always hated the word "maintainability" in the context of tests. Tests, like any other code, are maintainable. Unless there comes a time when we decide we can't take it anymore and the code needs a rewrite, the code is maintainable. We can go and change it, edit it or replace it.

The same goes for tests. Once we’ve written them, they are maintainable.

So why are we talking about maintainable tests?

The trouble with tests is that they are not considered “real” code. They are not production code.

Developers, starting out on the road to better quality, seem to regard tests not just as extra work, but also as second-class work. All activities not directed at running code on a production server or a client computer are regarded as "actors in supporting roles".

Obviously writing the tests has an associated future cost. It’s a cost on supporting work, which is considered less valuable.

One of the reasons developers are afraid to start writing tests is the accumulated multiplier effect: “Ok, I’m willing to write the tests, which doubles my work load. I know that this code is going to change in the future, and therefore I’ll have to do double the work, many times in the future. Is it worth it?”

Test maintenance IS costly

But not necessarily because of that.

The first change we need to make is a mental one. We need to understand that all our activities, including the "supporting" ones, are first-class. That includes future test modifications: after all, if we're going to change the code to support a requirement, that change will require tests too.

The trick is to minimize the effort. And we can, because some of that future effort is waste we're creating now. The waste happens when the requirements don't change, but the tests fail – and not because of a bug. We then need to fix the test, although there wasn't a real problem. Re-work.

Here’s a very simple example, taken from the Accuracy attribute post:

[Test]
public void AddTwoPositiveNumbers_GetResult()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
}

What happens if we decide to rename PositiveCalculator to Calculator? The test will not compile, and we'll need to modify it in order to pass.

Renaming doesn't seem like much trouble, though – we rely on modern tools to replace the different occurrences. However, this is very dependent on tools and technology. If we do this in C# or Java, there is not only automation, but also a quick feedback mechanism that catches the error, and we don't even feel we're maintaining the tests.

Imagine getting the compilation error only after 2 hours of compiling, rather than immediately after making the change. Or only after the automated build cycle. The further we get from automation and quick feedback, the more maintenance looks like a monster.

Lowering maintenance costs

The general advice is: “Don’t couple your tests to your code”.

There's a reason I chose this example: tests are always coupled to the code. The level of coupling, and the feedback mechanisms we use, affect how big these "maintenance" tasks are going to be. Here are some tips for lowering the cost of test maintenance.
  • Check outputs, not algorithms. Because tests are coupled to the code, the less implementation details the test knows about, the better. Robust tests do not rely on specific method calls inside the code. Instead, they treat the tested system as a black box, even though they may know how it’s internally built. These tests, by the way, are also more readable.

  • Work against a public interface. Test from the outside and avoid testing internal methods. We want to keep the internal method list (and signature) inside our black box. If you feel that’s unavoidable, consider extracting the internal method to a new public object.

  • Use the minimal amount of asserts. Being too specific in our assert criteria, especially when verifying method calls on dependencies, can lead to breaking tests without any benefit. Do we need to know a method was called 5 times, or just that it was called at least once? When it was called, do we need to know the exact value of its argument, or does a range suffice? With every layer of specificity, we add opportunities for breaking the test. Remember that on failure, we want information that helps solve the problem. If we don't gain additional information from these asserts, lower the criteria.

  • Use good refactoring tools. And a good IDE. And work with languages that support these. Otherwise, we’re delaying the feedback on errors, and causing the cost of maintenance to rise.

  • Use less mocking. Using mocks is like using x-rays. They are very good at what they do, but over-exposure is bad. Mocks couple the code to the test even more. They allow us to specify internal implementation of the code in the test. We’re now relying on the internal algorithm, which can change. And then our test will need some fixing.

  • Avoid hand-written mocks. The hand-written ones are the worst, because unless they are very simple, it is very easy to copy the behavior of the tested code into the mocks. Frameworks encourage setting the behavior through the interface.

There's a saying: code is a liability, not an asset. Tests are the same – maintenance will not go away completely, but we can lower its cost if we stick to these guidelines.
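The "minimal amount of asserts" advice, sketched with the Hotel fake from the Truthiness post (FakeItEasy syntax; the exact verification API varies between versions):

```csharp
[TestMethod]
public void GetServiceName_UsesTheService()
{
    var fakeService = A.Fake<Service>();
    A.CallTo(() => fakeService.GetName()).Returns("Room");

    new Hotel().GetServiceName(fakeService);

    // Usually enough: the dependency was used at all.
    A.CallTo(() => fakeService.GetName()).MustHaveHappened();

    // Over-specified alternative - an exact call count breaks on
    // harmless refactoring without telling us anything new:
    // A.CallTo(() => fakeService.GetName()).MustHaveHappened(1, Times.Exactly);
}
```

The loose check survives refactorings that change how many times the service is consulted; the strict one is justified only when the count itself is the requirement.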

Next up: Footprint.

For training and coaching on testing and agile, contact me.

Image source: http://www.business2community.com/content-marketing/how-super-mario-would-market-his-plumbing-business-in-2013-0423630#!bnPrC4