dimanche 21 décembre 2014

Your tests take too long to run

On a big application with lots of acceptance tests, integration tests, unit tests, performance tests, name-your-kind-of tests, you can sometimes wait more than an hour to get your results.

YOU SHOULDN'T WAIT

There is no excuse. Whatever your change is, it doesn't justify wasting one hour of your precious time.
Sometimes people say, "During this time, I can use my brain to think about the next tasks." You know what? If I have to queue for one hour at the checkout, even if the cashier allows me to think about what I can do when I get back home, I just don't care, and I want to burn the checkout, the cashier and the shop too.

SO BURN YOUR TESTS

Agree? You don't want that anymore? So, what's next?

1. Refactor your tests

Profiling tools are not only for your running application. For instance, on my current project I saved 50% of the run time by changing a simple configuration in jBehave's steps retrieval. It wouldn't have happened without profiling.


2. Use crowd testing !!!


You are not alone on your journey; there are people who can help you: your colleagues and your continuous integration server.

Sounds weird? Let me explain:
  • 90% of the time you know which tests may have been impacted by your change. (You might want to jump to the next chapter at this point, but don't do it unless you want to come back here later with everything scrambled in your mind.)
  • Run those tests on your machine. (It should take less than 5 minutes.)
  • Run your new crazy maven/make/ant/gradle/yourOwnStuffThatIsSoCool goal that runs only a chunk of the tests. (If you are 7 in your team, run only 1/5 of all tests, for instance.)
  • Ask all your colleagues to do the same.
  • Grab the results, fix your tests.

How to do that? That's up to you, but here are some ideas:

Randomly chosen chunk. If you and your colleagues run tests quite often, it may be reasonable to pick your tests randomly and hope to catch a failure quite soon.

Here are some examples with different chunk sizes, showing the chance that a given test has been run at least once after a number of builds.


Chunk size 0.1

Number of builds         0    5   10   15   20   30
Chance a test is run     0%  41%  65%  79%  88%  96%

By running only 1/10 of your tests, you have to run your build 50 times to make sure (>99%) all tests are run. If your team of 7 runs the tests 5 times a day, you get a >96% chance that the failing test is run within one day.
On top of that, your tests now virtually take 5 minutes to run.


Chunk size 0.2

Number of builds         0    5   10   15   20   30
Chance a test is run     0%  67%  89%  96%  99%  100%

By running only 1/5 of your tests, you have to run your build 20 times to make sure (>99%) all tests are run.
On top of that, your tests now virtually take 10 minutes to run.

Chunk size 0.5

Number of builds         0    5   10   15   20   30
Chance a test is run     0%  97%  100%  100%  100%  100%

By running 50% of your tests, you only have to run your build 7 times to make sure (>99%) all tests are run.
On top of that, your tests now virtually take 30 minutes to run.
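The percentages in these tables all come from the same formula: with a chunk size p, the chance that a given test has run at least once after n builds is 1 - (1 - p)^n. A quick sketch to reproduce them:

```java
// Chance that a given test runs at least once after n builds,
// when each build executes a random fraction p of the suite.
public class ChunkCoverage {
    static double chanceTestRun(double p, int builds) {
        return 1.0 - Math.pow(1.0 - p, builds);
    }

    public static void main(String[] args) {
        // reproduces the chunk-size-0.1 row: 41% 65% 79% 88% 96%
        for (int n : new int[]{5, 10, 15, 20, 30}) {
            System.out.printf("%.0f%% ", 100 * chanceTestRun(0.1, n));
        }
        System.out.println();
    }
}
```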


The good point of this strategy is that, whatever frameworks you use, it should be easy to implement.
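As an illustration, here is a minimal, framework-agnostic sketch of the random selection itself (the class and method names are mine, not from any framework): draw a per-build seed and keep roughly the requested fraction of the test names.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal random-chunk selector: each build draws its own seed and keeps
// roughly `fraction` of the tests. Feed the resulting list to your runner.
public class RandomChunk {
    static List<String> pick(List<String> allTests, double fraction, long buildSeed) {
        Random r = new Random(buildSeed);
        List<String> chunk = new ArrayList<>();
        for (String test : allTests) {
            if (r.nextDouble() < fraction) { // keep ~fraction of the suite
                chunk.add(test);
            }
        }
        return chunk;
    }
}
```

Because the selection is driven by the seed, the same seed reproduces the same chunk, which helps when a colleague needs to rerun your failing subset.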


Round Robin chunk

Better, but less easy to put in place: choose your chunk with an incremental scheme, so that the tests in the chunk you run are all different from those of the next build. It is not always easy to implement since it has to be stateful. (Some ideas later, but you have to keep reading.)


Almost Round Robin chunk
 
Each user of the system builds a predefined chunk of tests, but here the idea is to ensure that only x% of the team is required to run all the tests. (Yes, some people dare to be sick in my team.)

It means that each of us runs a bit more than our own chunk. For instance, with a team of 5, if I want all tests to run even when 1 person is absent, I have to run 1/5 of the tests + 1/4 of the tests to cover the others' chunks = 45% of all tests.

Here is a reminder table of the required per-person chunk size, by team size and by number of colleagues potentially dead:

                                        Team size
                                 4    5    6    7    8    9   10
  Number of colleagues      1   58%  45%  37%  31%  27%  24%  21%
  potentially dead          2   92%  70%  57%  48%  41%  36%  32%
                            3        95%  77%  64%  55%  49%  43%
                            4             97%  81%  70%  61%  54%
                            5                  98%  84%  74%  66%
                            6                       98%  86%  77%
                            7                            99%  88%
                            8                                 99%

So it becomes quite interesting and worth investigating with a large team.
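The table values follow from a simple formula (my reading of the scheme above): your own 1/n share of the suite, plus d/(n-1) to back up your colleagues' chunks, where n is the team size and d the number of absences tolerated.

```java
// Per-person chunk size so that the suite survives `absentTolerated` absences
// in a team of `teamSize`: own share 1/n, plus d/(n-1) of backup coverage.
public class ChunkSize {
    static double required(int teamSize, int absentTolerated) {
        return 1.0 / teamSize + (double) absentTolerated / (teamSize - 1);
    }

    public static void main(String[] args) {
        // the 45% of the 5-person, 1-absence example
        System.out.printf("%.0f%%%n", 100 * required(5, 1));
    }
}
```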

One thing to understand: you have to assign an order to each member of your team and store it somewhere (as a bash variable, for instance).




Continuous distributed testing

We can also think about having a daemon on each machine that runs a chunk of tests and communicates with the others to achieve full test coverage in an acceptable time.

You can even think about having a predefined testing time and time-limited chunks. (A chunk should take less than 5 minutes.)

Most of the time you can even use your own VCS and commit one single little file that tells the others what needs to be tested in the next build.
Also, using your VCS, you can easily verify that all the tests have been run for a given commit.

Although, even if it's clearly cool, this solution is not easy to implement if you want the test processes to run in parallel.

3. Test in priority what needs to be tested

Challenge your project: is it really useful to always run all the tests? Is it worth it? What are the benefits vs. the costs? Don't be dogmatic.

If you want to segregate your tests, there are multiple strategies:
- Isolate business domains in your tests (annotations or other language artifacts may help you)
- Create separate goals in your build tool to run only subsets of tests
- Link your test coverage tool with your build and VCS tools. Unfortunately, it's your job to do that here; the idea is that if you modify one piece of code, it should, most of the time, only impact newly created tests or tests that previously covered the chunk of code you modified.
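For the first strategy, a home-made annotation is often enough (JUnit categories or TestNG groups do the same job); in this sketch the annotation, class and domain names are my own invention:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class DomainFilter {
    // Tag a test class with the business domain it covers.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Domain { String value(); }

    @Domain("billing")
    static class InvoiceTest { /* tests... */ }

    // A build goal can use this check to run only one domain's tests.
    static boolean inDomain(Class<?> testClass, String domain) {
        Domain d = testClass.getAnnotation(Domain.class);
        return d != null && d.value().equals(domain);
    }

    public static void main(String[] args) {
        System.out.println(inDomain(InvoiceTest.class, "billing")); // true
        System.out.println(inDomain(InvoiceTest.class, "trading")); // false
    }
}
```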




samedi 20 décembre 2014

My java 8 presentation

I did this presentation at Swissquote

That was great. It was the first time I considered using GIMP to make a presentation. It was a great idea.

download the pdf






jeudi 1 mai 2014

You say that a value is not used, prove it

Introduction 


Sometimes we write tests considering that some values are not relevant for the test, so we put in default values. But sometimes we actually rely on those default values to make our tests work. This can lead to hidden knowledge and falsely passing tests.

For instance, say I create a class OldWomen. For my test instances, it's reasonable to put 70 as the default age. It's an age that anyone (at least any developer) will consider as old.

Let's say I have a medicine dispenser to test. I can have a story like this:

Given an old woman with a cancer
when the medicine dispenser is turned on
then the old woman stays alive


But for some reason, we decide that women older than 80 always forget to plug in the medicine dispenser.

...our test will continue to pass



but in fact, if the old woman is older than 80, she will die. (Indeed, it's a simplified world.)

Your test is not relevant and can make old women die.


Improving your default values


  1. We are all using continuous integration; the same test in a normal team will probably be run more than 10 times a day (most of the time just to say, "Hello, I'm here and I'm working fucking well").
  2. We cannot test all cases.
  3. Our brains aren't big enough to foresee all the impacts of our changes (if we could, we basically wouldn't write any tests).

So let's make the machine do the job. Let's implement a default value generator!!!

By randomly generating default values, you will probably fail one day (and probably the first day) if your code isn't safe for all values.

Obviously this approach has some limitations, and some corner cases will not be covered.

Some ideas to implement those generators:

  • Create a generic class RandomValueGenerator<T>. You will be able to handle many cases this way.
  • When you create String values, take care of alphanumeric/ASCII characters.
  • Generate null values if it's relevant (for example, 50% of the values returned will be null). You can create a decorator to do that:
public static <U> RandomValueGenerator<U> nullable(final RandomValueGenerator<U> randomValueGenerator) {
  • Generate zero values if it's relevant (for example, 20% of the values returned will be 0); you can also create a decorator to handle that.
  • Support enums, and allow restricting to a limited set of values: RandomValueGenerator.fromPossibleValues(E... values)
  • Use it everywhere in your unit tests and acceptance tests. (How many entity builders do you have in your code?)
  • It's difficult to find good ranges for values (is it useful to generate amounts equal to 100000000000000 $?). But don't assert too much.
  • You can also combine some random generators: with a method RandomValueGenerator.combine(RandomValueGenerator... randomGenerator) you can have
RandomValueGenerator.combine(
 RandomValueGenerator.fromPossibleValues(1., 0., NaN), RandomValueGenerator.doubleValue())

Here it will provide you:
  • half of the time, a value that is known to be error prone
  • half of the time, a value that is actually random
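To make these ideas concrete, here is a minimal sketch of such a generator. The method names come from the list above; the interface shape, the fixed 50% null ratio and the Demo class are my own assumptions.

```java
import java.util.Random;

// Minimal sketch of the generator interface described above.
interface RandomValueGenerator<T> {
    T next(Random random);

    static RandomValueGenerator<Double> doubleValue() {
        return Random::nextDouble;
    }

    static <T> RandomValueGenerator<T> fromPossibleValues(T... values) {
        return r -> values[r.nextInt(values.length)];
    }

    // Decorator: half of the time, return null instead of a generated value.
    static <U> RandomValueGenerator<U> nullable(RandomValueGenerator<U> gen) {
        return r -> r.nextBoolean() ? null : gen.next(r);
    }

    // Pick one of the wrapped generators at random for each value.
    static <T> RandomValueGenerator<T> combine(RandomValueGenerator<T>... generators) {
        return r -> generators[r.nextInt(generators.length)].next(r);
    }
}

class Demo {
    public static void main(String[] args) {
        Random r = new Random();
        RandomValueGenerator<Double> gen = RandomValueGenerator.combine(
                RandomValueGenerator.fromPossibleValues(1., 0., Double.NaN),
                RandomValueGenerator.doubleValue());
        // sometimes a known error-prone value, sometimes a truly random one
        System.out.println(gen.next(r));
    }
}
```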

Reproduce your tests

A good test should be reproducible. Fortunately, most random generators are initialized with a seed, and given that seed, all values are predictable. So it's easy to have two modes:
  1. By default, the seed is chosen randomly, and we reinitialize the random generator to this seed at the beginning of every test.
  2. In order to reproduce a failing test, if a property is set with the seed, we use that seed.
Here is an example in Java:
private static final Logger LOGGER = Logger.getLogger("RandomTest");
private static final long seed;
private static final Random random;

static {
    if (System.getProperty("test.random.seed") == null) {
        seed = new Random().nextLong();
    } else {
        seed = Long.parseLong(System.getProperty("test.random.seed"));
    }
    random = new Random(seed);
}

public static void newTest() {
    LOGGER.info("running test with random generator initialized with seed " + seed);
    random.setSeed(seed);
}
You need to find a good way to call this newTest method on every test. That part is your job!!!

Non regression testing

At this point we have tested that unexpected changes in GIVEN values have no impact on THEN values, but we also want to be sure that only the expected values have changed in a test. Let's take our old example again:

Given an old woman aged 90 with a cancer
when the medicine dispenser is turned on
then the old woman will die

But the old woman dying is not the only thing that changed; there is also good news: the electricity bill will not increase dramatically before people discover the old woman.

When you code, do you always have this kind of change in mind? That's why non-regression testing is so important. Still, I don't want to change my test just to handle that case (it's probably not worth it), but I do want to acknowledge that this change in behavior is OK.


There is no easy solution to this problem, but here are some ideas:
- You need to compare the state of the system at the end of the test with the expected state. It can be difficult or simple depending on your system. If you are in a DB-oriented project, you may store the resulting database after the test.
- For the first run, you cannot test anything (it's non-regression testing...), but you should store the results of this first run in the source base of the project.
- For non-regression testing, always use the same seeds. You can use lots of different seeds, but for each seed you should expect the same result.
- Reuse the seeds that have broken your tests in the past.
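The first two ideas can be sketched with a small golden-file helper (the class and method names here are illustrative): on the first run it records the observed state, and on later runs it compares against the stored expectation.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class GoldenState {
    // Returns true when the state matches the stored expectation.
    // On the very first run nothing is stored yet, so we record and accept.
    static boolean checkOrRecord(Path expectedFile, String actualState) throws IOException {
        if (!Files.exists(expectedFile)) {
            Files.write(expectedFile, actualState.getBytes(StandardCharsets.UTF_8));
            return true; // first run: expectation recorded, commit it to your VCS
        }
        String expected = new String(Files.readAllBytes(expectedFile), StandardCharsets.UTF_8);
        return expected.equals(actualState);
    }
}
```

When the behavior change is intentional, acknowledging it is just deleting or regenerating the stored file and committing the new expectation.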