Adopt OpenJDK & Java community: how can you help Java!

Introduction

I want to take the opportunity to show what we have been doing over the last year and what we have done so far as members of the community. Unlike other years, I have decided to keep this post less technical compared to past years and to the other posts on Java Advent this year.

In the beginning

This year marks the fourth year since the first OpenJDK hackday was held in London (supported by the LJC and its members) and since the Adopt OpenJDK program was started. Four years is a small number in the face of 20 years of Java, and the same goes for the size of the Adopt OpenJDK community, which forms a small part of the Java community (9+ million users). Although this post is non-technical in nature, the message herein is fairly important for the future growth and progress of our community and the next generation of developers.

Creations of the community

Creations from the community

Over the past months a number of members of our community have contributed and passed on their good work to us. In no specific order, I have listed these from memory. I know there are more to name, and you can help us by sharing those with us (we will list them here). So here are some of the ones we can talk about and be proud of, and thank those who were involved:

  • Getting Started page – created to enable two-way communication with the members of the community; this includes a mailing list, an IRC channel, a weekly newsletter, a Twitter handle, and other social media channels and collaboration tools.
  • Adopt OpenJDK project: jitwatch – a great tool created by Chris Newland, one of a kind, ever growing with features and helping developers fine-tune the performance of their Java/JVM applications.
  • Adopt OpenJDK: GSK – a community effort gathering knowledge and experience from hackday attendees and OpenJDK developers on how to work with OpenJDK, from building it to creating your own version of the JDK. Many JUG members have been involved in the process, and this is now an e-book available in many languages (5 languages, with 2 to 3 more in progress).
  • Adopt OpenJDK vagrant scripts – a collection of Vagrant scripts initially created by John Patrick from the LJC, later improved by community members who added more scripts and refactored existing ones. These scripts help build OpenJDK projects in a virtualised environment (i.e. VirtualBox), making building and testing OpenJDK, as well as running and testing Java/JVM applications, much easier and more reliable in an isolated environment.
  • Adopt OpenJDK docker scripts – a collection of Docker scripts created with the help of the community; this is now also receiving contributions from a number of members like Richard Kolb (SA JUG). Just like the Vagrant scripts mentioned above, the Docker scripts have similar goals, and need your DevOps foo!
  • Adopt OpenJDK project: mjprof – mjprof is a monadic jstack analysis toolset. That is a fancy way to say it analyzes jstack output using a series of simple composable building blocks (monads). Many thanks to Haim Yadid for donating it to the community.
  • Adopt OpenJDK project: jcountdown – built by the community, it mimics the spirit of ie6countdown.net: that is, to encourage users to move to the latest and greatest Java! Many thanks to all those involved, as you can see from the commit history.
  • Adopt OpenJDK CloudBees Build Farm – thanks to the folks at CloudBees for helping us host our build farm on their CI/CD servers. This one was initially started by Martijn Verburg and later, with the help of a number of JUG members, has come to the point that major Java projects are built against different versions of the JDK. These projects include building the JDKs themselves (versions 1.7, 1.8, 1.9, Jigsaw and Shenandoah). This project has also helped support the Testing Java Early project and the Quality Outreach program.

These are just a handful of such creations and contributions from the members of the community, and some of these projects would certainly welcome your help. As a community, one more thing we could do well is celebrate our victories and successes, and especially credit those who have been involved, whether as individuals or as a community, so that our next generation of contributors feels inspired and encouraged to do more good work and share it with us.

Contributions from the community

We want to contribute

In a recent tweet and in posts to various Java/JVM and developer mailing lists, I asked the community to come forward and share their contribution stories, or those of others, with our community. The purpose was twofold: to share them with the community, and to write this post (which in turn is shared with the community). I was happy to see a handful of messages sent to me and the mailing lists by a number of community members. I'll share some of these with you (in the order I received them).

Sebastian Daschner:

I don’t know if that counts as contribution but I’ve hacked on the
OpenJDK compiler for fun several times. For example I added a new
thought up ‘maybe’ keyword which produces randomly executed code:
https://blog.sebastian-daschner.com/entries/maybe_keyword_in_java

Thomas Modeneis:

Thanks for writing, I like your initiative, its really good to show how people are doing and what they have been focusing on. Great idea.
From my part, I can tell about the DevoxxMA last month, I did a talk on the Hacker Space about the Adopt the OpenJDK and it was really great. We had about 30 or more attendees, it was in a open space so everyone that was going to any talk was passing and being grabbed to have a look about the topic, it was really challenging because I had no mic. but I managed to speak out loud and be listen, and I got great feedback after the session. I’m going to work over the weekend to upload the presentation and the recorded video and I will be posting here as soon as I have it done! 🙂

Martijn Verburg:

Good initiative.  So the major items I participated in were Date and Time and Lambdas Hackdays (reporting several bugs), submitted some warnings cleanups for OpenJDK.  Gave ~10 pages of feedback for jshell and generally tried to encourage people more capable than me to contribute :-).

Andrii Rodionov:

Olena Syrota and Oleg Tsal-Tsalko from Ukraine JUG: Contributing to JSR 367 test code-base (https://github.com/olegts/jsonb-spec), promoting ‘Adopt a JSR’ and JSON-B spec at JUG UA meetings (http://jug.ua/2015/04/json-binding/) and also at JavaDay Lviv conference (http://www.slideshare.net/olegtsaltsalko9/jsonb-spec).

Contributors

Contributors gathering together

As you have seen, out of a community of 9+ million users, only a handful came forward to share their stories. I can, however, point you to another list of contributors who have been paramount with their contributions to the Adopt OpenJDK GitBook; for example, take a look at the list of contributors and also the committers on the git repo. They have not just contributed to the book but to Java and the OpenJDK community, especially those who have helped translate the book into multiple languages. And then there are a number of them who haven't come forward to add their names to the list, even though they have made valuable contributions.
Super heroes together

From this I can say contributors can be like unsung heroes, either due to their shy or low-profile nature or because they just don't get noticed by us. So it would only be fair to encourage them to come forward or to share their contributions with the community, however simple or small those may be. In addition to the above list I would like to add a number of them (again, apologies if I have missed your name or not mentioned you or all your contributions). These names are in no particular order, but as they come to mind, as their contributions have been invaluable:

  • Dalibor Topic (OpenJDK Project Lead) & the OpenJDK team
  • Mario Torre & the RedHat OpenJDK team
  • Tori Wieldt (Java Community manager) and her team
  • Heather Vancura & the JCP team
  • NightHacking, vJUG and RebelLabs (and the great people behind them)
  • Nicolaas & the team at Cloudbees
  • Chris Newland (JitWatch developer)
  • Lucy Carey, Ellie & Mark Hazell (Devoxx UK & Voxxed)
  • Richard Kolb (JUG South Africa)
  • Daniel Bryant, Richard Warburton, Ben Evans, and a number of others from LJC
  • Members of SouJava (Otavio, Thomas, Bruno, and others)
  • Members of Bulgarian JUG (Ivan, Martin, Mitri) and neighbours
  • Oti, Ludovic & Patrick Reinhart
  • and a number of other contributors who for some reason I can’t remember…

I have named them for their contributions to the community: helping organise hackdays during the week and at weekends, workshops and hands-on sessions at conferences, giving lightning talks, speaking at conferences, allowing us to host our CI and build farm servers, travelling to different parts of the world holding the Java community flag, writing books, giving Java and advanced-level training, giving feedback on new technologies and features, and innumerable other activities that support and push forward the Java/JVM platform.

How can you make a difference? And why?

Make a difference

You can make a difference by doing something as simple as clicking the like button (on Twitter, LinkedIn, Facebook, etc.) or responding to a message on a mailing list by expressing your opinion about something you see or read about: why you think about it that way, or how it could be different.

The answer to the question "And why?" is simple: because you are part of a community, you care, and you want to share your knowledge and experience with others, just like the others above who have spared free moments of their valuable time for us.

Is it hard to do? Where to start? What needs the most attention?

The answer is that it's not hard to do: if so many have done it, you can do it as well. Where to start and what can you do? I have written a page on this topic, and it's worth reading before going any further.

There is a dynamic list of topics that is worth considering when thinking of contributing to OpenJDK and Java. But recently I have filtered this list down to a few topics (in order of precedence):

We need you!

With that I would like to close by saying:


Not just “I”, but we as a community need you.

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Introduction To JUnit Theories

Have you ever read a mathematical theory?

It typically reads something like this:

For all a, b > 0  the following is true: a+b > a and a+b > b

It's just that the statements are typically more difficult to understand.

There is something interesting about this kind of statement: It holds for EVERY element (or combination of elements) of a rather large (infinite in this case) set.

Compare that to the statement a typical test makes:


@Test
public void a_plus_b_is_greater_than_a_and_greater_than_b() {
    int a = 2;
    int b = 3;
    assertTrue(a + b > a);
    assertTrue(a + b > b);
}

This is just a statement about a single element of the large set we talked about. Not very impressive. Of course we can fix that somewhat by looping over the test (or using parameterized tests):


@Test
public void a_plus_b_is_greater_than_a_and_greater_than_b_multiple_values() {
    List<Integer> values = Arrays.asList(1, 2, 300, 400000);
    for (Integer a : values)
        for (Integer b : values) {
            assertTrue(a + b > a);
            assertTrue(a + b > b);
        }
}

Of course this still only tests a few values, but it also became pretty ugly. We are using 9 lines of code to test what a mathematician writes in a single line! And the main point, that this relationship should hold for any values a and b, is completely lost in translation.

But there is hope: JUnit Theories. Let's see what the test looks like with that nifty tool:

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(Theories.class)
public class AdditionWithTheoriesTest {

    @DataPoints
    public static int[] positiveIntegers() {
        return new int[]{1, 10, 1234567};
    }

    @Theory
    public void a_plus_b_is_greater_than_a_and_greater_than_b(Integer a, Integer b) {
        assertTrue(a + b > a);
        assertTrue(a + b > b);
    }
}

With JUnit Theories the test gets split into two separate parts: a method providing data points, i.e. values to be used for tests, and the theory itself. The theory looks almost like a test, but it has a different annotation (@Theory) and it takes parameters. The theories in a class get executed with every possible combination of data points.

This means that if we have more than one theory about our test subject, we only have to declare the data points once. So let's add another theory that should be true for addition, commutativity: a + b = b + a. We add the following theory to our class:
@Theory
public void addition_is_commutative(Integer a, Integer b) {
    assertTrue(a + b == b + a);
}
This works like a charm, and one can start to see that this actually saves some code as well, because we don't duplicate the data points. But we only test with positive integers, while the commutative property should hold for all integers! Of course, our first theory still only holds for positive numbers.

There is a solution for this as well: Assume. With assume you can check preconditions for your theory. If the precondition isn't true for a given parameter set, the theory gets skipped for that parameter set. So our test now looks like this:


@RunWith(Theories.class)
public class AdditionWithTheoriesTest {

    @DataPoints
    public static int[] integers() {
        return new int[]{-1, -10, -1234567, 1, 10, 1234567};
    }

    @Theory
    public void a_plus_b_is_greater_than_a_and_greater_than_b(Integer a, Integer b) {
        Assume.assumeTrue(a > 0 && b > 0);
        assertTrue(a + b > a);
        assertTrue(a + b > b);
    }

    @Theory
    public void addition_is_commutative(Integer a, Integer b) {
        assertTrue(a + b == b + a);
    }
}

This makes the tests nicely expressive.

The separation of test data from test/theory implementation can have another positive effect apart from brevity: you might start to think about your test data independently of the actual code under test.

Let's do just that. If you want to test a method that takes an integer argument, what integers would be likely to cause problems? This is my proposal:


@DataPoints
public static int[] integers() {
    return new int[]{0, -1, -10, -1234567, 1, 10, 1234567, Integer.MAX_VALUE, Integer.MIN_VALUE};
}

This of course causes a test failure in our example: if you add a positive integer to Integer.MAX_VALUE you get an overflow! So we just learned that our theory in its current form is wrong. Yes, I know this is obvious, but have a look at the tests in your current project. Do all the tests that use integers test with MIN_VALUE, MAX_VALUE, 0, a positive and a negative value? Yeah, thought so.

What about more complex objects? Strings? Dates? Collections? Or domain objects? With JUnit Theories you can set up test data generators once that create all the scenarios that are prone to cause problems and then reuse those in all your tests using theories. It will make your tests more expressive and improve the probability of finding bugs.
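For example, here is a minimal sketch of what such a reusable set of "problematic" data points for Strings could look like (the class name, values and theory are illustrative, not from the original post):

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(Theories.class)
public class StringTheoriesTest {

    // strings that frequently expose bugs: empty, blank, unicode, very long
    @DataPoints
    public static String[] problematicStrings() {
        return new String[]{"", " ", "\t\n", "a", "äöü€",
                new String(new char[10000]).replace('\0', 'x')};
    }

    @Theory
    public void trimming_never_makes_a_string_longer(String s) {
        assertTrue(s.trim().length() <= s.length());
    }
}

Once such a generator exists, every new theory about String-handling code gets all of these awkward inputs for free.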

Run, JUnit! Run!!!

JUnit, together with JavaScript and SVN, is one of those technologies that programmers often start using without reading a single blog post, let alone a book. Maybe this is a good thing, since they look simple and understandable enough that we can use them right away without any manuals, but it also means that they are underused. In this article we will go through some features of JUnit that I consider very useful.

Parameterized tests 

Sometimes we need to run the same method or functionality with many different inputs and different expected results. One way to do this would be to create separate tests for each of the cases, or you can use a loop, but then it would be harder to track down the origin of a possible test failure.

For example, if we have the following value object representing rational numbers:


public class RationalNumber {

    private final long numerator;
    private final long denominator;

    public RationalNumber(long numerator, long denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    public long getNumerator() {
        return numerator;
    }

    public long getDenominator() {
        return denominator;
    }

    @Override
    public String toString() {
        return String.format("%d/%d", numerator, denominator);
    }
}

And we have a service class called App with a method convert that converts the rational number to a value rounded to 5 decimal places:

public class App {

    /**
     * THE Logic
     *
     * @param number some rational number
     * @return BigDecimal rounded to 5 decimal points
     */
    public static BigDecimal convert(RationalNumber number) {
        BigDecimal numerator = new BigDecimal(number.getNumerator())
                .setScale(5, RoundingMode.HALF_UP);

        BigDecimal result = numerator.divide(
                new BigDecimal(number.getDenominator()),
                RoundingMode.HALF_UP);

        return result;
    }
}

And for the actual AppTest class we have

@RunWith(Parameterized.class)
public class AppTest {

    private RationalNumber input;
    private BigDecimal expected;

    public AppTest(RationalNumber input, BigDecimal expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameterized.Parameters(name = "{index}: number[{0}]= {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {new RationalNumber(1, 2), new BigDecimal("0.50000")},
                {new RationalNumber(1, 1), new BigDecimal("1.00000")},
                {new RationalNumber(1, 3), new BigDecimal("0.33333")},
                {new RationalNumber(1, 5), new BigDecimal("0.20000")},
                {new RationalNumber(10000, 3), new BigDecimal("3333.33333")}
        });
    }

    @Test
    public void testApp() {
        //given the test data
        //when
        BigDecimal out = App.convert(input);
        //then
        Assert.assertThat(out, is(equalTo(expected)));
    }
}

The Parameterized runner, @RunWith(Parameterized.class), enables the "parameterization", or in other words the injection of the collection of values annotated with @Parameterized.Parameters into the test constructor, where each sublist is a parameter list. This means that each of the RationalNumber objects in the data() method will be injected into the input variable, and each of the BigDecimal values will be the expected value, so in our example we have 5 tests.

There is also an optional custom naming of the generated tests added in the annotation, so "{index}: number[{0}]= {1}" will be replaced with the appropriate parameters defined in the data() method, and the "{index}" placeholder will be the test case index, as in the following image.

Running the parameterized tests in IntelliJ IDEA

JUnit rules

The simplest definition of JUnit rules is that they are, in a sense, interceptors, very similar to Spring aspect-oriented programming or the Java EE interceptors API. Basically, you can do useful things before and after the test execution.
OK so let’s start with some of the built in test rules. One of them is ExternalResource  where the idea is that we setup an external resource and after the teardown garteet the resource was freed up. A classic example of such test is a creation of file, so for that purpose we have a built in class TemporaryFolder but we can also create our own ones for other resources :


public class TheRuleTest {

    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void someTest() throws IOException {
        //given
        final File tempFile = folder.newFile("thefile.txt");
        //when
        tempFile.setExecutable(true);
        //then
        assertThat(tempFile.canExecute(), is(true));
    }
}

We could have done this in @Before and @After blocks using Java temp files, but it is easy to forget something and leave some of the files behind in scenarios where a test fails.
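For comparison, here is a hedged sketch of roughly the same bookkeeping done by hand (illustrative; this is the boilerplate TemporaryFolder takes care of for you):

private File tempFile;

@Before
public void createTempFile() throws IOException {
    tempFile = File.createTempFile("thefile", ".txt");
}

@After
public void deleteTempFile() {
    // easy to forget, and skipped entirely if the test JVM dies mid-run
    if (tempFile != null) {
        tempFile.delete();
    }
}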

There is also a Timeout rule for methods: if the execution is not finished within the given time limit, the test will fail with a timeout exception. For example, to limit the running time to 20 milliseconds:
   


@Rule
public MethodRule globalTimeout = new Timeout(20);


We can implement our own rules that enforce a policy or make various project-specific changes. The only thing that needs to be done is to implement the TestRule interface.
A simple scenario to explain the behaviour is to add a rule that prints something before and after each test.


import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class MyTestRule implements TestRule {

    public class MyStatement extends Statement {

        private final Statement statement;

        public MyStatement(Statement statement) {
            this.statement = statement;
        }

        @Override
        public void evaluate() throws Throwable {
            System.out.println("before statement");
            statement.evaluate();
            System.out.println("after statement");
        }
    }

    @Override
    public Statement apply(Statement statement, Description description) {
        System.out.println("apply rule");
        return new MyStatement(statement);
    }
}


So now that we have our rule we can use it in tests, where the tests will just print out different values:


public class SomeTest {

    @Rule
    public MyTestRule folder = new MyTestRule();

    @Test
    public void testA() {
        System.out.println("A");
    }

    @Test
    public void testB() {
        System.out.println("B");
    }
}

When we run the tests, the following output will appear on the console:


apply rule
before statement
A
after statement
apply rule
before statement
B
after statement

Among the built-in rules there is also one called ExpectedException that can be very useful when testing for errors. Additionally, there is an option to chain rules, which can be useful in many scenarios.
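A minimal sketch of ExpectedException in use (the exception and message here are just an illustration):

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class ExpectedExceptionTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void divisionByZeroIsReported() {
        // the rule fails the test if the expected exception does NOT occur
        thrown.expect(ArithmeticException.class);
        thrown.expectMessage("/ by zero");

        int divisor = 0;
        int ignored = 42 / divisor;
    }
}

As for chaining, org.junit.rules.RuleChain lets you control the order in which rules wrap each other, e.g. RuleChain.outerRule(outer).around(inner), where the outer rule is set up first and torn down last.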

To sum up

If you want to say that Spock or TestNG or some library built on top of JUnit has more features than JUnit, then that is probably true.
But you know what? We don't always have those on our classpath, and chances are that JUnit is there and already used all over the place. Then why not use its full potential?

Useful links

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Using Matchers in Tests

Gone are the days when we were forced to write way too many assertion lines in our testing code. There is a new sheriff in town: assertThat, and his deputy: the matchers. Well, not that new, but anyway I'd like to briefly present how matchers are used, and after that an extension to the matcher concept that I found to be very useful when developing unit tests for my code.

First of all I’ll present the basic use of the matchers. Of course you can have a complete presentation of hamcrest matchers capabilities directly from its authors: https://code.google.com/p/hamcrest/wiki/Tutorial.

Basically, a matcher is an object that defines when two objects match. The first question usually is: why wouldn't you use equals? Well, sometimes you don't want to match two objects on all their fields, just on some of them, and if you work with legacy code you'll find that the equals implementation is not present or is not what you would have expected. Another reason is the fact that using assertThat gives you a more consistent way of "asserting the assertions" and arguably more readable code. So, for example, instead of writing:


int expected, actual;
assertEquals(expected, actual);

you will write

assertThat(actual, is(expected));

where “is” is the statically imported org.hamcrest.core.Is.is
Not that much of a difference… yet. But Hamcrest offers you a lot of very useful matchers (a quick sketch of a few of them in action follows the list):
  • For arrays and maps: hasItem, hasKey, hasValue
  • Numbers: closeTo – a way to specify equality with a margin of error, greaterThan, lessThan…
  • Objects: nullValue, sameInstance
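A minimal sketch of a few of these in use (the values are illustrative):

import static org.junit.Assert.assertThat;
import static org.hamcrest.Matchers.closeTo;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.notNullValue;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.Test;

public class BuiltInMatchersTest {

    @Test
    public void builtInMatchersInAction() {
        List<String> names = Arrays.asList("Ana", "Bob");
        assertThat(names, hasItem("Ana"));

        Map<String, Integer> ages = new HashMap<>();
        ages.put("Ana", 30);
        assertThat(ages, hasKey("Ana"));

        // equality within a margin of error
        assertThat(0.1 + 0.2, closeTo(0.3, 1e-9));

        assertThat("anything", notNullValue());
    }
}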
Now we’re making progress… still the power of Hamcrest matchers is that you have the possibility to write your own matchers for your objects. You just have to extend BaseMatcher<T> class. Here is an example of a simple custom matcher:

public class OrderMatcher extends BaseMatcher<Order> {

    private final Order expected;
    private final StringBuilder errors = new StringBuilder();

    private OrderMatcher(Order expected) {
        this.expected = expected;
    }

    @Override
    public boolean matches(Object item) {
        if (!(item instanceof Order)) {
            errors.append("received item is not of Order type");
            return false;
        }
        Order actual = (Order) item;
        if (actual.getQuantity() != expected.getQuantity()) {
            errors.append("received item had quantity ").append(actual.getQuantity())
                  .append(". Expected ").append(expected.getQuantity());
            return false;
        }
        return true;
    }

    @Override
    public void describeTo(Description description) {
        description.appendText(errors.toString());
    }

    @Factory
    public static OrderMatcher isOrder(Order expected) {
        return new OrderMatcher(expected);
    }
}

This is a completely new league compared to the old assertion methods.

So this is, in a nutshell, the usage of Hamcrest's matchers.
But when I started using them in real life, especially when working with legacy code, I realized that there is more to the story. Here are some issues that I encountered when using matchers:
  1. Matcher construction can be very repetitive and boring. I needed a way to apply the DRY principle to matcher code.
  2. I needed a unified way to access the matchers. The correct matcher should be chosen by the framework by default.
  3. I needed to compare objects that had references to other objects, which should themselves be compared with matchers (the object referencing can go as deep as you want).
  4. I needed to check a collection of objects using matchers without iterating that collection (doable also with the array matchers… but I wanted more).
  5. I needed to have a more flexible matcher. For example, for the same object I needed to check one set of fields in one case, but another set in another case. The out-of-the-box solution is to have a matcher for each case. I didn't like that.
I overcame these issues using a matcher hierarchy that follows some conventions and knows which matcher to apply and which fields to compare or ignore. At the root of this hierarchy is RootMatcher<T>, which extends BaseMatcher<T>.

To deal with issue #1 (repetitive code), the RootMatcher class contains the common code for all the matchers, like methods for checking whether the actual object is null, whether it has the same type as the expected object, or even whether they are the same instance:


public boolean checkIdentityType(Object received) {
    if (received == expected) {
        return true;
    }
    if (received == null || expected == null) {
        return false;
    }
    if (!checkType(received)) {
        return false;
    }
    return true;
}

private boolean checkType(Object received) {
    if (checkType && !getClass(received).equals(getClass(expected))) {
        error.append("Expected ").append(expected.getClass()).append(" Received : ").append(received.getClass());
        return false;
    }
    return true;
}

This simplifies the way the matchers are written: I don't have to take into account null or identity corner cases; it's all been taken care of in the root class.

Also the expected object and the errors reside in the root class:

public abstract class RootMatcher<T> extends BaseMatcher<T> {

    protected T expected;
    protected StringBuilder error = new StringBuilder("[Matcher : " + this.getClass().getName() + "] ");

This allows you to get straight to the matches method implementation as soon as you extend RootMatcher, and for errors you just put the messages in the StringBuilder; RootMatcher will handle sending them to the JUnit framework to be presented to the user.
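For illustration, here is a hedged sketch of how the earlier OrderMatcher might look rebuilt on top of RootMatcher (the constructor wiring of expected and the reuse of the helpers shown above are assumptions based on these snippets, not the complete original code):

public class OrderMatcher extends RootMatcher<Order> {

    public OrderMatcher(Order expected) {
        this.expected = expected;               // held by RootMatcher
    }

    @Override
    public boolean matches(Object item) {
        // null, identity and type corner cases are handled by the root class
        if (!checkIdentityType(item)) {
            return false;
        }
        Order actual = (Order) item;
        if (actual.getQuantity() != expected.getQuantity()) {
            error.append("received quantity ").append(actual.getQuantity())
                 .append(", expected ").append(expected.getQuantity());
            return false;
        }
        return true;
    }
}

Note that no describeTo is needed here: the error StringBuilder in the root class carries the failure message.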

For issue #2 (automatic matcher finding) the solution was in its factory method:

@Factory
public static Matcher is(Object expected) {
    return getMatcher(expected, true);
}

public static RootMatcher getMatcher(Object expected, boolean checkType) {
    try {
        Class matcherClass = Class.forName(expected.getClass().getName() + "Matcher");
        Constructor constructor = matcherClass.getConstructor(expected.getClass());
        return (RootMatcher) constructor.newInstance(expected);
    } catch (ClassNotFoundException | NoSuchMethodException | InvocationTargetException | InstantiationException | IllegalAccessException e) {
    }
    return (RootMatcher) new EqualMatcher(expected);
}

As you can see, the factory method tries to find out which matcher it should return by using two conventions:
  1. The matcher for an object has the name of the object + the string Matcher
  2. The matcher is in the same package as the object to be matched (it is recommended that it be in the same package, but in the test directory)

Using this strategy I succeeded in using a single entry point, RootMatcher.is, that will provide me with the exact matcher that I need.

And to solve the recursive nature of the object relations (issue #3), when checking object fields I used an equality-checking method from RootMatcher that itself uses matchers:

public boolean checkEquality(Object expected, Object received) {
    String result = checkEqualityAndReturnError(expected, received);
    return result == null || result.trim().isEmpty();
}

public String checkEqualityAndReturnError(Object expected, Object received) {
    if (isIgnoreObject(expected)) {
        return null;
    }
    if (expected == null && received == null) {
        return null;
    }
    if (expected == null || received == null) {
        return "Expected or received is null and the other is not: expected " + expected + " received " + received;
    }
    RootMatcher matcher = getMatcher(expected);
    boolean result = matcher.matches(received);
    if (result) {
        return null;
    } else {
        StringBuilder sb = new StringBuilder();
        matcher.describeTo(sb);
        return sb.toString();
    }
}

But what about collections (issue #4)? To solve that, all you have to do is implement matchers for collections that extend RootMatcher.

So the only remaining issue is #5: making the matcher more flexible, being able to tell the matcher which fields it should ignore and which it should take into account. For this I introduced the concept of an "ignore object". This is an object that the matcher will ignore when it finds a reference to it in a template (expected object). How does it work? First of all, in RootMatcher I offer methods to return the ignore object for any Java type:

private final static Map ignorable = new HashMap();

static {
    ignorable.put(String.class, "%%%%IGNORE_ME%%%%");
    ignorable.put(Integer.class, new Integer(Integer.MAX_VALUE - 1));
    ignorable.put(Long.class, new Long(Long.MAX_VALUE - 1));
    ignorable.put(Float.class, new Float(Float.MAX_VALUE - 1));
}

/**
 * we will ignore mock objects in matchers
 */
private boolean isIgnoreObject(Object object) {
    if (object == null) {
        return false;
    }
    Object ignObject = ignorable.get(object.getClass());
    if (ignObject != null) {
        return ignObject.equals(object);
    }
    return Mockito.mockingDetails(object).isMock();
}

@SuppressWarnings("unchecked")
public static <M> M getIgnoreObject(Class<M> clazz) {
    Object obj = ignorable.get(clazz);
    if (obj != null) {
        return (M) obj;
    }
    return (M) Mockito.mock(clazz);
}

@SuppressWarnings("unchecked")
public static <M> M getIgnoreObject(Object obj) {
    return (M) getIgnoreObject(obj.getClass());
}

As you can see, the ignored object will be the one which is mocked. But for classes that cannot be mocked (final classes) I provided some arbitrary fixed values that are very improbable to appear (this part can be improved). For this to work, the developer has to use the equality methods provided in RootMatcher, namely checkEqualityAndReturnError, which checks for ignored objects. Using this strategy and the builder pattern which I presented last year (http://www.javaadvent.com/2012/12/using-builder-pattern-in-junit-tests.html), I can easily make my assertions for a complex object:

import static […]RootMatcher.is;

Order expected = OrderBuilder.anOrder().withQuantity(2)
        .withTimestamp(RootMatcher.getIgnoreObject(Long.class))
        .withDescription("specific description").build();
assertThat(order, is(expected));

As you can see, I could easily specify that the timestamp should be ignored, which allowed me to use the same matcher with a completely different set of fields to be verified.

Indeed, this strategy requires quite a lot of preparation, making all the builders and the matchers. But if we want to have code that is tested, and if we want to make testing a job whose primary focus is the test flow that should be covered, we need such a foundation and tools like these that help us easily establish our preconditions and build our expected state.

Of course, the implementation can be improved using annotations, but the core concepts remain.

I hope this article helps you improve your testing style, and if there’s enough interest I will do my best to put the complete code on a public repository.
Thank you.

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Waiting for the right moment – in integration testing

When you have to test multi-threaded programs, there is always the need to wait until the system arrives at a particular state, at which point the test can verify that the proper state has been reached.

The usual way to do it is to insert a "probe" into the system which will signal a synchronization primitive (like a Semaphore), and the test waits until the semaphore gets signaled or a timeout passes. (Two things which you should never do – but which are a frequent mistake – are to insert sleeps into your code, because they slow you down and are fragile, or to use the Object.wait method without looping around it, because you might get spurious wakeups which will result in spurious, hard to diagnose and very frustrating test failures.)
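A minimal sketch of that probe pattern, under the assumption that the system under test gets hold of the probe and releases it at the interesting point (the class and method names here are illustrative):

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class ProbeExampleTest {

    private final Semaphore probe = new Semaphore(0);

    @Test
    public void waitsUntilTheWorkerReachesTheInterestingState() throws InterruptedException {
        startAsynchronousWork();                               // the worker signals the probe at the interesting point

        // wait for the signal, but never longer than the timeout
        boolean reached = probe.tryAcquire(5, TimeUnit.SECONDS);
        assertTrue("the system did not reach the expected state in time", reached);

        // ... now verify the state the system arrived at
    }

    // illustrative stand-in for kicking off the real asynchronous work
    private void startAsynchronousWork() {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // ... do the real work, then signal the probe
                probe.release();
            }
        }).start();
    }
}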

This is all nice and good (although a little verbose – at least until the Java 8 lambdas arrive), but what if the second thread calls a third thread and doesn't wait for it to finish, while in the test we want to wait for it? A concrete example would be an integration test which verifies that a system, composed of a client which communicates through messaging middleware with a datagrid, properly writes the data to the datagrid. Of course we will use a mock middleware and a mock datagrid, thus the startup/shutdown and processing will be very fast, but they would still be asynchronous (suppose that we can't make them synchronous because the production ones aren't and the code is written such that it relies on this fact).

The situation is described visually in the sequence graph below: we have the test running on T0 and we would like it to wait until the task on T3 has finished before it checks the state the system arrived at.

We can achieve this using a small modification to our execution framework (which probably is some kind of Executor). Given the following interface:


public interface ActivityCollector {
    void before();
    void after();
}

We would call before() at the moment a task is enqueued for execution and after() after it has executed (these will usually occur on different threads). If we now consider that before increments a counter and after decrements it, we can just wait for the counter to become zero (with proper synchronization), at which point we know that all the tasks were processed by our system. You can find an Executor which implements this here. In production you can of course use an implementation of the interface which does nothing, thus removing any performance overhead.

Now let's look at the interface which defines how we wait for the "processed" condition:

interface ActivityWatcher {
    void await(long time, TimeUnit timeUnit);
}

Two personal design choices used here were: only provide a way to wait for a specific time and no longer (if the test takes too long, that's probably a performance regression one needs to take a look at), and use unchecked exceptions to make the testing code shorter.
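A hedged sketch of what a counting implementation of these two interfaces could look like before the exception-collecting extension below (the linked implementation may differ; this is only to make the counter idea concrete):

import java.util.concurrent.TimeUnit;

public class CountingActivityWatcher implements ActivityCollector, ActivityWatcher {

    private int pending;                                   // tasks enqueued but not yet finished

    @Override
    public synchronized void before() {
        pending++;
    }

    @Override
    public synchronized void after() {
        pending--;
        if (pending == 0) {
            notifyAll();                                   // wake up the waiting test
        }
    }

    @Override
    public synchronized void await(long time, TimeUnit timeUnit) {
        long deadline = System.nanoTime() + timeUnit.toNanos(time);
        while (pending > 0) {                              // loop to guard against spurious wakeups
            long remainingNanos = deadline - System.nanoTime();
            if (remainingNanos <= 0) {
                throw new IllegalStateException("timed out waiting for pending tasks: " + pending);
            }
            try {
                TimeUnit.NANOSECONDS.timedWait(this, remainingNanos);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
    }
}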

A final feature would be to collect exceptions during the execution of the tasks and abort immediately if there is an exception somewhere, rather than timing out. This means that we modify our interface as follows:

public interface ActivityCollector {
    void before();
    void after();
    void collectException(Throwable t);
}

And the code wrapping the execution would be something like the following:

try {
    command.run();
} catch (Throwable t) {
    activityCollector.collectException(t);
    throw t;
} finally {
    activityCollector.after();
}

You can find an implementation of ActivityWatcher/ActivityCollector here (they are quite linked, thus the one class implementing them both). Happy testing!

A couple of caveats:

  • This requires some modification to your production code, so it might not be the best solution (for example you can try creating synchronous mocks of your subsystems and do testing that way).
  • This solution is not well suited for cases where timers are involved, because there will be times when "no tasks are waiting" but in fact a task is waiting in a timer. You can work around this by using a custom timer which calls "before" when scheduling and "after" at the finish of the task (see the sketch after this list).
  • The same issue can come up if you are using network communication for more authenticity (even if it is inside the same process): there will be a moment when no tasks are scheduled because they are serialized in the OS's network buffer.
  • The ActivityCollector is a single point of synchronization. As such it might decrease performance and it might hide concurrency bugs. There are more complicated ways to implement it which avoids some of the synchronization overhead (like using a ConcurrentLinkedQueue), but you can’t eliminate it completely.
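A hedged sketch of that custom-timer workaround: a thin wrapper around a ScheduledExecutorService that reports to the ActivityCollector when a task is scheduled and when it finishes (the wrapper name and wiring are illustrative assumptions):

import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class WatchedScheduler {

    private final ScheduledExecutorService delegate;
    private final ActivityCollector collector;

    public WatchedScheduler(ScheduledExecutorService delegate, ActivityCollector collector) {
        this.delegate = delegate;
        this.collector = collector;
    }

    public ScheduledFuture<?> schedule(final Runnable command, long delay, TimeUnit unit) {
        collector.before();                     // counted as "in flight" while it sits in the timer
        return delegate.schedule(new Runnable() {
            @Override
            public void run() {
                try {
                    command.run();
                } catch (Throwable t) {
                    collector.collectException(t);
                    throw t;
                } finally {
                    collector.after();
                }
            }
        }, delay, unit);
    }
}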

PS. This example is based on an IBM article I can’t seem to find (dear lazyweb: if somebody finds it, please leave a comment – before/after were called tick/tock in it) as well as work by my colleagues. My only role was to write it up and synthesize it.

Using Builder Pattern in JUnit tests

This is not intended to be a heavily technical post. The goal of this post is to give you some guidelines to make your JUnit testing life easier, and to enable you to write complex test scenarios in minutes, with the bonus of having extremely readable tests.

There are two major parts of a unit test that require writing a lot of bootstrap code:

  • the setup part: constructing your initial state requires building the initial objects that will be fed to your SUT (system under test) 
  • the assertion part: constructing the desired image of your output objects and making assertions only on the needed data.

In order to reduce the complexity of building objects for tests I suggest using the Builder pattern in the following interpretation:

Here is the domain object:


public class Employee {

    private int id;
    private String name;
    private Department department;

    //setters, getters, hashCode, equals, toString methods
}

The builder for this domain object will look like this:


public class EmployeeBuilder {

    private Employee employee;

    public EmployeeBuilder() {
        employee = new Employee();
    }

    public static EmployeeBuilder defaultValues() {
        return new EmployeeBuilder();
    }

    public static EmployeeBuilder clone(Employee toClone) {
        EmployeeBuilder builder = defaultValues();
        builder.setId(toClone.getId());
        builder.setName(toClone.getName());
        builder.setDepartment(toClone.getDepartment());
        return builder;
    }

    public static EmployeeBuilder random() {
        EmployeeBuilder builder = defaultValues();
        builder.setId(getRandomInteger(0, 1000));
        builder.setName(getRandomString(20));
        builder.setDepartment(Department.values()[getRandomInteger(0, Department.values().length - 1)]);
        return builder;
    }

    public EmployeeBuilder setId(int id) {
        employee.setId(id);
        return this;
    }

    public EmployeeBuilder setName(String name) {
        employee.setName(name);
        return this;
    }

    public EmployeeBuilder setDepartment(Department dept) {
        employee.setDepartment(dept);
        return this;
    }

    public Employee build() {
        return employee;
    }
}

As you can see we have some factory methods:


public static EmployeeBuilder defaultValues()
public static EmployeeBuilder clone(Employee toClone)
public static EmployeeBuilder random()

These methods return different builders:

  • defaultValues: some hardcoded values for each field (or the Java defaults, as in the current implementation)
  • clone: will take all the values from the initial object and give you the possibility to change just some of them
  • random: will generate random values for each field. This is very useful when you have a lot of fields that you don't specifically need in your test, but you need them to be initialized. The getRandom* methods are defined statically in another class (a possible shape for them is sketched after this list).
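A possible shape for those getRandom* helpers (a sketch: the class name and exact bounds handling are assumptions, only the method names match the usage in the builder):

public final class RandomValues {

    private static final java.util.Random RANDOM = new java.util.Random();
    private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz";

    private RandomValues() {
    }

    // random integer in the inclusive range [min, max]
    public static int getRandomInteger(int min, int max) {
        return min + RANDOM.nextInt(max - min + 1);
    }

    // random lowercase string of the given length
    public static String getRandomString(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}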

You can add other methods that will initialize your builder according to your needs.

The builder can also handle building objects that are not so easily constructed and changed. For example, let's change the Employee object a little bit and make it immutable:


public class Employee {

    private final int id;
    private final String name;
    private final Department department;
    ...
}

Now we have lost the possibility to change the fields as we wish. But using the builder in the following form we can regain this possibility when constructing the object:


public class ImmutableEmployeeBuilder {

    private int id;
    private String name;
    private Department department;

    public ImmutableEmployeeBuilder() {
    }

    public static ImmutableEmployeeBuilder defaultValues() {
        return new ImmutableEmployeeBuilder();
    }

    public static ImmutableEmployeeBuilder clone(Employee toClone) {
        ImmutableEmployeeBuilder builder = defaultValues();
        builder.setId(toClone.getId());
        builder.setName(toClone.getName());
        builder.setDepartment(toClone.getDepartment());
        return builder;
    }

    public static ImmutableEmployeeBuilder random() {
        ImmutableEmployeeBuilder builder = defaultValues();
        builder.setId(getRandomInteger(0, 1000));
        builder.setName(getRandomString(20));
        builder.setDepartment(Department.values()[getRandomInteger(0, Department.values().length - 1)]);
        return builder;
    }

    public ImmutableEmployeeBuilder setId(int id) {
        this.id = id;
        return this;
    }

    public ImmutableEmployeeBuilder setName(String name) {
        this.name = name;
        return this;
    }

    public ImmutableEmployeeBuilder setDepartment(Department dept) {
        this.department = dept;
        return this;
    }

    public ImmutableEmployee build() {
        return new ImmutableEmployee(id, name, department);
    }
}

This is very useful when we have objects that are hard to construct, or when we need to change fields that are final.

And here is the final result:

Without builders:


@Test
public void changeRoleTestWithoutBuilders() {
    // building the initial state
    Employee employee = new Employee();
    employee.setId(1);
    employee.setDepartment(Department.DEVELOPEMENT);
    employee.setName("John Johnny");

    // testing the SUT
    EmployeeManager employeeManager = new EmployeeManager();
    employeeManager.changeRole(employee, Department.MANAGEMENT);

    // building the expectations
    Employee expectedEmployee = new Employee();
    expectedEmployee.setId(employee.getId());
    expectedEmployee.setDepartment(Department.MANAGEMENT);
    expectedEmployee.setName(employee.getName());

    // assertions
    assertThat(employee, is(expectedEmployee));
}

With builders:


@Test
public void changeRoleTestWithBuilders() {
    // building the initial state
    Employee employee = EmployeeBuilder.defaultValues().setId(1).setName("John Johnny").setDepartment(Department.DEVELOPEMENT).build();

    // building the expectations
    Employee expectedEmployee = EmployeeBuilder.clone(employee).setDepartment(Department.MANAGEMENT).build();

    // testing the SUT
    EmployeeManager employeeManager = new EmployeeManager();
    employeeManager.changeRole(employee, Department.MANAGEMENT);

    // assertions
    assertThat(employee, is(expectedEmployee));
}

As you can see, the size of the test is much smaller, and the construction of objects becomes much simpler (and nicer if you use a better code format). The difference is greater if you have a more complex domain object (which is more likely in real-life applications and especially in legacy code).

Have fun!

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!