Decorator Design Pattern using lambdas

With the advent of lambdas in Java, we now have a new tool for designing better code. Of course, the first step is using streams, method references and the other neat features introduced in Java 8.

Going forward, I think the next step is to revisit the well-established design patterns and look at them through a functional programming lens. For this purpose I’ll take the Decorator Pattern and implement it using lambdas.

We’ll take an easy and delicious example of the Decorator Pattern: adding toppings to pizza. Here is the standard implementation as suggested by GoF:

First we have the interface that defines our component:

public interface Pizza {
    String bakePizza();
}

We have a concrete component:

public class BasicPizza implements Pizza {
    @Override
    public String bakePizza() {
        return "Basic Pizza";
    }
}

We decide that we have to decorate our component in different ways, so we go with the Decorator Pattern. This is the abstract decorator:

public abstract class PizzaDecorator implements Pizza {
    private final Pizza pizza;
    
    protected PizzaDecorator(Pizza pizza) {
        this.pizza = pizza;
    }

    @Override
    public String bakePizza() {
        return pizza.bakePizza();
    }
}

We provide some concrete decorators for the component:

public class ChickenTikkaPizza extends PizzaDecorator {
    protected ChickenTikkaPizza(Pizza pizza) {
        super(pizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with chicken topping";
    }
}

public class ProsciuttoPizza extends PizzaDecorator {

    protected ProsciuttoPizza(Pizza pizza) {
        super(pizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with prosciutto";
    }
}

And this is the way to use the new structure:

Pizza pizza = new ChickenTikkaPizza(new BasicPizza());
String finishedPizza = pizza.bakePizza();   //Basic Pizza with chicken topping

pizza = new ChickenTikkaPizza(new ProsciuttoPizza(new BasicPizza()));
finishedPizza  = pizza.bakePizza();  //Basic Pizza with prosciutto with chicken topping

We can see that this can get very messy, and it does get very messy when we think about how we handle buffered readers in Java:

new DataInputStream(new BufferedInputStream(new FileInputStream(new File("myfile.txt"))))

Of course, you can split that across multiple lines, but that won’t solve the messiness; it will just spread it.
Now let’s see how we can do the same thing using lambdas.
We start with the same basic component objects:

public interface Pizza {
    String bakePizza();
}

public class BasicPizza implements Pizza {
    @Override
    public String bakePizza() {
        return "Basic Pizza";
    }
}

But now, instead of declaring an abstract class that provides the template for decorations, we will create a decorator that asks the user for the functions that will decorate the component.

import java.util.function.Function;
import java.util.stream.Stream;

public class PizzaDecorator {
    private final Function<Pizza, Pizza> toppings;

    @SafeVarargs // safe: the varargs array is only read
    private PizzaDecorator(Function<Pizza, Pizza>... desiredToppings) {
        this.toppings = Stream.of(desiredToppings)
                .reduce(Function.identity(), Function::andThen);
    }

    @SafeVarargs // safe: the varargs array is only passed along
    public static String bakePizza(Pizza pizza, Function<Pizza, Pizza>... desiredToppings) {
        return new PizzaDecorator(desiredToppings).bakePizza(pizza);
    }

    private String bakePizza(Pizza pizza) {
        return this.toppings.apply(pizza).bakePizza();
    }
}

There is this line that constructs the chain of decorations to be applied:

Stream.of(desiredToppings).reduce(Function.identity(), Function::andThen);

This line of code will take your decorations (which are of Function type) and chain them using andThen. This is the same as

(currentToppings, nextTopping) -> currentToppings.andThen(nextTopping)

and it ensures that the functions are called one after another, in the order you provided them.
Also, Function.identity() is simply the elem -> elem lambda expression.
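
To make the ordering concrete, here is a minimal sketch (my own illustration, not part of the article’s example; withHam and withCheese are hypothetical toppings):

Function<Pizza, Pizza> withHam = pizza -> () -> pizza.bakePizza() + " with ham";
Function<Pizza, Pizza> withCheese = pizza -> () -> pizza.bakePizza() + " with cheese";

// andThen applies withHam first, then withCheese
Pizza decorated = withHam.andThen(withCheese).apply(new BasicPizza());
System.out.println(decorated.bakePizza());   // prints: Basic Pizza with ham with cheese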

OK, now where will we define our decorations? You can add them as static methods in PizzaDecorator or even in the interface:

public interface Pizza {
    String bakePizza();

    static Pizza withChickenTikka(Pizza pizza) {
        return () -> pizza.bakePizza() + " with chicken";
    }

    static Pizza withProsciutto(Pizza pizza) {
        return () -> pizza.bakePizza() + " with prosciutto";
    }
}

And this is how the pattern is used:

String finishedPizza = PizzaDecorator.bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

//And if you statically import PizzaDecorator.bakePizza:

String finishedPizza = bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

As you can see, the code became clearer and more concise, and we didn’t use inheritance to build our decorators.

This is just one of the many design patterns that can be improved using lambdas. There are more features that can be used to improve the rest of them, like using partial application (currying) to implement the Adapter Pattern.

I hope I got you thinking about adopting a more functional programming approach to your development style.

UPDATE: my friends from Webucator have created a video walkthrough of this article. If you want to see more of their tutorials, you can visit the Webucator site.

Bibliography:

The decorator example was inspired by the Gang of Four – Decorate with Decorator Design Pattern article

The refactoring method was inspired by the following Devoxx 2015 talks (which I recommend watching as they treat the subject at large):
Design Pattern Reloaded by Remi Forax
Design Patterns in the Light of Lambda Expressions by Venkat Subramaniam

 

Java in 2015 – Major happenings

2015 was a year in which Java – the language, platform, ecosystem and community – continued to dominate the software landscape, with only JavaScript having a similarly sized impact on the industry. In case you missed the highlights of 2015, here are some of the major happenings.

Java 20 years old and still not dead yet!

Java turned 20 this year and swept back to the top of the Tiobe index in December 2015. Although the Tiobe index is hardly a 100% peer-reviewed scientific methodology, it is seen as a pretty strong barometer for the health of a language/platform. So what the heck happened to boost Java so dramatically again?

Firstly, Java 8, released the previous year, was adopted by mainstream enterprise Java shops. The additional functional capabilities of lambdas, combined with the new streams and collections framework, breathed new life into the language. Although Java 8 is not as rich in its feature set as, say, Scala or Python, it is seen as the steady workhorse that now has at least some feature parity with more aggressive languages. Enterprises love a stable platform, and it’s unlikely that Java will be disappearing any time soon.

Secondly, Java has become a strong platform for infrastructure frameworks. Many popular NoSQL and data-grid solutions, such as Apache Cassandra and Hazelcast, are written in Java, again due to its stability and strong threading and networking support. CI tools such as Jenkins are widely adopted, and of course business productivity tools such as Atlassian’s JIRA are again Java based.

Oracle guts its Java evangelism team

Oracle fired much of its Java evangelism team just before JavaOne, which wasn’t the greatest PR move by the stewards of Java. Over the subsequent months it became clearer that this wasn’t a step by Oracle to reduce its engineering investment in Java, but these were nervous times for much of the community as they feared the worst. A salient reminder that big corporations don’t always get their left hand talking to their right!

Java 9 delay announced

In the “We’re not really surprised” bucket came the announcement that Java 9 would be delayed until March 2017, in order to ensure that the new modularisation system will not break the millions of Java applications running out there today.

Although the technical work on Jigsaw is progressing nicely, the entire ecosystem will need to test against the new system. The Quality group in OpenJDK is leading this effort. I highly recommend you contact them to be part of the early access and feedback loop.

OpenJDK supports further mobile platforms

The creation of the OpenJDK mobile project came as a surprise to many, and although it doesn’t represent a change in Oracle’s business direction, it was a welcome release of code to enable Java on ARM, Android and iOS platforms. There’s much technical work to do, but it will be interesting to watch whether the software community at large picks up on this new support and tries Java out as a language for the iOS and Android platforms in 2016 and beyond. There is a possibility that OpenFX (JavaFX) combined with Java mobile on iOS or Android may entice a slew of developers to this ‘new’ platform.

Was I right about 2015?

It’s always fun to look at past predictions, so let’s see how I did!

  1. I expected 2015 to be a little bit quieter. Well, I clearly got that wrong! Despite no major releases for ME, SE or EE, the excitement of celebrating 20 years of Java and a surge of new developers using Java 8 meant 2015 was busier than ever.
  2. Embracing JavaScript for the front end. This trend continues, and stacks such as JHipster show the new love affair that Java developers have with JavaScript.
  3. DevOps toolchains to the fore. Docker continues to steamroll ahead in terms of popularity, and Java developers are especially starting to use Docker in test environments to avoid polluting environments with variations in Java runtimes, web servers, data stores, etc.
  4. IoT and Java to be a thing. Nope, not yet! Perhaps in 2016, with the new mobile Java project in OpenJDK and further refinement of Java ME, we may start to see serious inroads.

I’m not going to make any predictions for 2016 as I clearly need to stick to my day job :-)

One final important note. Project Jigsaw is the modularisation story for Java 9 that will massively impact tool vendors and day-to-day developers alike. The community at large needs your help to test out early builds of Java 9 and to help OpenJDK developers and tool vendors ensure that IDEs, build tools and applications are ready for this important change. You can join us in the Adoption Group at OpenJDK. I hope everyone has a great holiday break – I look forward to seeing the Twitter feeds and the GitHub commits flying around in 2016 :-).

Cheers,
Martijn (CEO – jClarity, Java Champion & Diabolical Developer)

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Adopt OpenJDK & Java community: how can you help Java !

Introduction

I want to take the opportunity to show what we have been doing in the last year and what we have done so far as members of the community. Unlike other years, I have decided to keep this post less technical compared to past years and to the other posts on Java Advent this year.

In the beginning

This year marks the fourth year since the first OpenJDK hackday was held in London (supported by the LJC and its members) and since the Adopt OpenJDK program was started. Four years is a small number in the face of 20 years of Java, and the same goes for the size of the Adopt OpenJDK community, which forms a small part of the Java community (9+ million users). Although the post is non-technical in nature, the message herein is fairly important for the future growth and progress of our community and the next generation of developers.

Creations of the community

Over the past months a number of members of our community have contributed and passed on their good work to us. In no specific order, I have listed these from memory. I know there are more to name, and you can help us by sharing those with us (we will list them here). So here are some of those that we can talk about and be proud of, and thank those who were involved:

  • Getting Started page – created to enable two-way communication with the members of the community; the channels include a mailing list, an IRC channel, a weekly newsletter, a Twitter handle, and other social media channels and collaboration tools.
  • Adopt OpenJDK project: jitwatch – a great tool created by Chris Newland; it’s one of a kind, ever growing in features, helping developers fine-tune the performance of their Java/JVM applications.
  • Adopt OpenJDK: GSK – a community effort gathering knowledge and experience from hackday attendees and OpenJDK developers on how to work with OpenJDK, from building it to creating your own version of the JDK. Many JUG members have been involved in the process, and this is now an e-book available in many languages (5 languages, with 2 to 3 more in progress).
  • Adopt OpenJDK vagrant scripts – a collection of Vagrant scripts initially created by John Patrick from the LJC, later improved by community members who added more scripts and refactored existing ones. These scripts help build OpenJDK projects in a virtualised environment (e.g. VirtualBox), making building and testing OpenJDK, as well as running and testing Java/JVM applications, much easier and more reliable in an isolated environment.
  • Adopt OpenJDK docker scripts – a collection of Docker scripts created with the help of the community; these are now also receiving contributions from a number of members like Richard Kolb (SA JUG). Just like the Vagrant scripts mentioned above, the Docker scripts have similar goals, and need your DevOps fu!
  • Adopt OpenJDK project: mjprof – a monadic jstack analysis tool set. That is a fancy way of saying it analyzes jstack output using a series of simple composable building blocks (monads). Many thanks to Haim Yadid for donating it to the community.
  • Adopt OpenJDK project: jcountdown – built by the community, this mimics the spirit of ie6countdown.net: that is, to encourage users to move to the latest and greatest Java! Many thanks to all those involved, as you can see from the commit history.
  • Adopt OpenJDK CloudBees Build Farm – thanks to the folks at CloudBees for helping us host our build farm on their CI/CD servers. This one was initially started by Martijn Verburg and later, with the help of a number of JUG members, it has come to the point that major Java projects are built against different versions of the JDK. These include building the JDKs themselves (versions 1.7, 1.8, 1.9, Jigsaw and Shenandoah). This project has also helped support the Testing Java Early project and the Quality Outreach program.

These are just a handful of such creations and contributions from the members of the community; some of these projects would certainly need help from you. One more thing we could do well as a community is celebrate our victories and successes, and especially credit those that have been involved, whether as individuals or as a community, so that our next generation of contributors feels inspired and encouraged to do more good work and share it with us.

Contributions from the community

In a recent tweet and in posts to various Java/JVM and developer mailing lists, I asked the community to come forward and share their contribution stories, or those of others, with our community. The purpose was two-fold: one, to share them with the community, and the other, to write this post (which in turn is shared with the community). I was happy to see a handful of messages sent to me and the mailing lists by a number of community members. I’ll share some of these with you (in the order I received them).

Sebastian Daschner:

I don’t know if that counts as contribution but I’ve hacked on the
OpenJDK compiler for fun several times. For example I added a new
thought up ‘maybe’ keyword which produces randomly executed code:
https://blog.sebastian-daschner.com/entries/maybe_keyword_in_java

Thomas Modeneis:

Thanks for writing, I like your initiative, its really good to show how people are doing and what they have been focusing on. Great idea.
From my part, I can tell about the DevoxxMA last month, I did a talk on the Hacker Space about the Adopt the OpenJDK and it was really great. We had about 30 or more attendees, it was in a open space so everyone that was going to any talk was passing and being grabbed to have a look about the topic, it was really challenging because I had no mic. but I managed to speak out loud and be listen, and I got great feedback after the session. I’m going to work over the weekend to upload the presentation and the recorded video and I will be posting here as soon as I have it done! :)

Martijn Verburg:

Good initiative.  So the major items I participated in were Date and Time and Lambdas Hackdays (reporting several bugs), submitted some warnings cleanups for OpenJDK.  Gave ~10 pages of feedback for jshell and generally tried to encourage people more capable than me to contribute :-).

Andrii Rodionov:

Olena Syrota and Oleg Tsal-Tsalko from Ukraine JUG: Contributing to JSR 367 test code-base (https://github.com/olegts/jsonb-spec), promoting ‘Adopt a JSR’ and JSON-B spec at JUG UA meetings (http://jug.ua/2015/04/json-binding/) and also at JavaDay Lviv conference (http://www.slideshare.net/olegtsaltsalko9/jsonb-spec).

Contributors

As you can see, out of a community of 9+ million users, only a handful came forward to share their stories. I can, however, point you to another list of contributors who have been paramount with their contributions to the Adopt OpenJDK GitBook: take a look at the list of contributors and also the committers on the git repo. They have not just contributed to the book but to Java and the OpenJDK community, especially those who have helped translate the book into multiple languages. And then there are a number of them who haven’t come forward to add their names to the list, even though they have made valuable contributions.

From this I can say that contributors can be like unsung heroes, either due to their shy or low-profile nature or because they just don’t get noticed by us. So it would only be fair to encourage them to come forward or share their contributions with the community, however simple or small those may be. In addition to the above list, I would like to add a number of them (again, apologies if I have missed your name or not mentioned all of your contributions). These names are in no particular order, just as they come to my mind, as their contributions have been invaluable:

  • Dalibor Topic (OpenJDK Project Lead) & the OpenJDK team
  • Mario Torre & the RedHat OpenJDK team
  • Tori Wieldt (Java Community manager) and her team
  • Heather Vancura & the JCP team
  • NightHacking, vJUG and RebelLabs (and the great people behind them)
  • Nicolaas & the team at CloudBees
  • Chris Newland (JitWatch developer)
  • Lucy Carey, Ellie & Mark Hazell (Devoxx UK & Voxxed)
  • Richard Kolb (JUG South Africa)
  • Daniel Bryant, Richard Warburton, Ben Evans, and a number of others from LJC
  • Members of SouJava (Otavio, Thomas, Bruno, and others)
  • Members of Bulgarian JUG (Ivan, Martin, Mitri) and neighbours
  • Oti, Ludovic & Patrick Reinhart
  • and a number of other contributors who for some reason I can’t remember…

I have named them for their contributions to the community: helping organise hackdays during the week and at weekends, running workshops and hands-on sessions at conferences, giving lightning talks, speaking at conferences, allowing us to host our CI and build farm servers, travelling to different parts of the world holding the Java community flag, writing books, giving Java and advanced-level training, giving feedback on new technologies and features, and innumerable other activities that support and push forward the Java/JVM platform.

How can you make a difference? And why?

You can make a difference by doing something as simple as clicking the like button (on Twitter, LinkedIn, Facebook, etc.) or responding to a message on a mailing list by expressing your opinion about something you see or read about – why you think about it that way, or how it could be different.

The answer to “And why?” is simple: because you are part of a community, you care, and you want to share your knowledge and experience with others – just like the people above who have spared free moments of their valuable time for us.

Is it hard to do? Where to start? What needs the most attention?

The answer: it’s not hard to do. If so many have done it, you can do it as well. Where to start, and what can you do? I have written a page on this topic, and it’s worth reading before going any further.

There is a dynamic list of topics that is worth considering when thinking of contributing to OpenJDK and Java. Recently I have filtered this list down to a few topics (in order of precedence).

We need you!

With that I would like to close by saying:

i_need_you_duke3

Not just “I”, but we as a community need you.

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Effective UI tests with Selenide

Waiting for miracles

Christmas is a time for miracles. On the eve of the new year we all make plans for the next one. And we hope that all our problems will stay behind in the ending year, and that a miracle will happen in the coming one.

Every Java developer dreams about a miracle that lets him become The Most Effective Java Developer in the world.

I want to show you such a miracle.

It’s called automated tests!

Ugh, tests?

Yes. You will not become a real master thanks to micro/pico/nano services. You will become a real master thanks to discipline. The discipline that says a developer only reports a job as done when the code and its tests are written and run.

But, isn’t testing boring?

Oh no, believe me! Writing fast and stable automated tests is a great challenge for the smartest heads. And it can be very fun and interesting. You only need to use the right tools.

The right tool for writing UI tests is:

Selenide

Selenide is an open-source library for writing concise and stable UI tests.

Selenide is an ideal choice for software developers because it has a very low learning curve. You don’t need to bother with browser details or all those typical Ajax and timing issues that eat most of QA automation engineers’ time.

Let’s look at the simplest Selenide test:

import org.junit.Test;
import org.openqa.selenium.By;

import static com.codeborne.selenide.CollectionCondition.size;
import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.*;

public class GoogleTest {
  @Test
  public void user_can_search_everything_in_google() {
    open("http://google.com/ncr");
    $(By.name("q")).val("selenide").pressEnter();

    $$("#ires .g").shouldHave(size(10));

    $("#ires .g").shouldBe(visible).shouldHave(
        text("Selenide: concise UI tests in Java"),
        text("selenide.org"));
  }
}

Let’s look closer at what happens here.

  • You open a browser with just one command: open(url).
  • You find an element on the page with the command $.
    You can find an element by name, ID, CSS selector, attribute, XPath and even by text.
  • You manipulate the element: enter some text with val() and press enter with (surprise-surprise!) pressEnter().
  • You check the results: you find all the search results with $$ (it returns a collection of all matching elements), then check the size and content of the collection.

Isn’t this test easy to read? Isn’t this test easy to write?

I believe it is.

Deeper into details

Ajax/timing problems

Nowadays web applications are dynamic. Every single piece of an application can be rendered or changed dynamically at any moment. This creates a lot of problems for automated tests. A test that is green today can suddenly become red at any moment, just because the browser executed some JavaScript a little longer than usual.

It’s a real pain in the ajjaxx.

Quite unbelievably, Selenide resolves most of these problems in a very simple way.

Simply put, every Selenide method waits a little bit if needed. People call it “smart waiting”.

When you write

$("#menu").shouldHave(text("Hello"));

Selenide checks if the element exists and contains text “Hello”.

If not yet, Selenide assumes that the element will probably be updated dynamically soon, and waits a little until it happens. The default timeout is 4 seconds, which is typically enough for most web applications. And of course, it’s configurable.
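
If you do need a different timeout, it can be changed globally through Selenide’s Configuration class – a minimal sketch (the value here is just an example):

import com.codeborne.selenide.Configuration;

// raise the "smart waiting" timeout from the default 4000 ms to 8 seconds
Configuration.timeout = 8000;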

Rich set of matchers

You can check pretty much everything with Selenide, using the “smart waiting” mechanism mentioned above.

For example, you can check if an element exists. If it doesn’t yet, Selenide will wait up to 4 seconds.

$(".loading_progress").shouldBe(visible);

You can even check that an element does not exist. If it still exists, Selenide will wait up to 4 seconds for it to disappear.

$(By.name("gender")).should(disappear);

And you can use fluent API and chain methods to make your tests really concise:

$("#menu")
  .shouldHave(text("Hello"), text("John!"))
  .shouldBe(enabled, selected);

Collections

Selenide allows you to work with collections, checking a lot of elements with one line of code.

For example, you can check that there are exactly N elements on a page:

$$(".error").shouldHave(size(3));

You can find a subset of a collection:

$$("#employees tbody tr")
  .filter(visible)
  .shouldHave(size(4));

You can check the texts of elements. In most cases, it’s sufficient to check a whole table or table row:

$$("#employees tbody tr").shouldHave(
  texts(
      "John Belushi",
      "Bruce Willis",
      "John Malkovich"
  )
);

Upload/download files

It’s pretty easy to upload a file with Selenide:

$("#cv").uploadFile(new File("cv.doc"));

You can even upload multiple files at once:

$("#cv").uploadFile(
  new File("cv1.doc"),
  new File("cv2.doc"),
  new File("cv3.doc")
);

And it’s unbelievably simple to download a file:

File pdf = $(".btn#cv").download();

Testing “highly dynamic” web applications

Some web frameworks (e.g. GWT) generate HTML that is absolutely unreadable. Elements do not have constant IDs or names.

It’s a real pain in the xpathh.

Selenide suggests resolving this problem by searching for elements by text.

import static com.codeborne.selenide.Selectors.*;

$(byText("Hello, Devoxx!"))     // find by the whole text
   .shouldBe(visible);

$(withText("oxx"))              // find by substring
   .shouldHave(text("Hello, Devoxx!"));

Searching by text is not a bad idea at all. In fact, I like it because it emulates the behaviour of a real user. A real user doesn’t find buttons by ID or XPath – they find them by text (or, well, color).

Another useful set of Selenide methods allows you to navigate between parents and children.

$("td").parent()
$("td").closest("tr")
$(".btn").closest(".modal")
$("div").find(By.name("q"))

For example, you can find a table cell by text, then find its closest tr ancestor and find a “Save” button inside this table row:

$("table#employees")
  .find(byText("Joshua"))
  .closest("tr.employee")
  .find(byValue("Save"))
  .click();

… And many other functions

Selenide has many more functions, like:

$("div").scrollTo();
$("div").innerText();
$("div").innerHtml();
$("div").exists();
$("select").isImage();
$("select").getSelectedText();
$("select").getSelectedValue();
$("div").doubleClick();
$("div").contextClick();
$("div").hover();
$("div").dragAndDrop()
zoom(2.5);
...

but the good news is that you don’t need to remember all this stuff. Just type $, then a dot, and choose from the options suggested by your IDE.

Use the power of IDE! Concentrate on business logic.

Make the world better

I believe the world will get better when all developers start writing automated tests for their code. When developers can get up at 17:00 and go home to their children without fearing that they broke something with their last changes.

Let’s make the world better by writing automated tests!

Deliver working software.

Andrei Solntsev

selenide.org

Java regular expression library benchmarks – 2015

While trying to get Java to #1 in the regexdna challenge for The Computer Language Benchmarks Game, I was researching the performance of regular expression libraries for Java. The most recent comparison I could find was tusker.org from 2010. Hence I decided to redo the tests using the Java Microbenchmark Harness (JMH) and publish the results.
TL;DR: regular expressions are good for ad-hoc querying, but if you have something performance-sensitive you should hand-code your solution (this doesn’t mean that you have to start from absolute zero – the Google Guava library, for example, has some nice utilities which can help in writing readable but also performant code).

And now, for some charts summarizing the performance – the tests were run on a 64-bit Ubuntu 15.10 machine with OpenJDK 1.8.0_66:

[Charts: Java regular expression library results on small texts and on a large text, shown on both linear and logarithmic scales.]

Observations:

  • there is no “standard” for regular expressions, so different libraries can behave differently when given a particular regex and a particular string to match against – i.e. one might say that it matches while the other might say that it doesn’t. For example, even though I used a very reduced set of test cases (5 regexes checked against 6 strings), only two of the libraries managed to match / not match them all correctly (one of them being java.util.Pattern).

  • it probably takes more than one try to get your regex right (tools like regexpal or The Regex Coach are very useful for experimenting)

  • the performance of a regex is hard to predict, and sometimes it can have exponential complexity based on the input length – the classic pathological pattern (a+)+b matched against a long string of a’s with no final b, for example, can take exponential time due to backtracking. Because of this you need to think twice before accepting a regular expression from arbitrary users on the Internet (like a search engine which would allow searching by regular expression, for example)

  • none of the libraries seems to be in active development any more (in fact quite a few from the original list on tusker.org are now unavailable), and many of them are slower than the built-in j.u.Pattern, so if you use regexes, that should probably be your first choice.

  • that said, the performance of both the hardware and the JVM has improved considerably, so if you are using one of these libraries, it generally runs an order of magnitude faster than it did five years ago. So there is no need to rush to replace working code (unless your profiler says that it is a problem :-))

  • watch out for calls to String.split in loops. While it has some optimizations for particular cases (such as one-character regexes), you should almost always (see the sketch after this list):

    • see if you can use something like Splitter from Google Guava
    • if you need a regular expression, at least pre-compile it outside of the loop
  • the first surprise was dk.brics.automaton, which outperformed everything else by several orders of magnitude; however:
    • the last release was in 2011, and it seems to be more of an academic project
    • it doesn’t support the same syntax as java.util.Pattern (and it doesn’t warn you if you use j.u.Pattern syntax – it just won’t match the strings you think it should)
    • it doesn’t have an API as comfortable as j.u.Pattern’s (for example, it’s missing replacements)
  • the other surprise was kmy.regex.util.Regex, which – although not updated since 2000 – outperformed java.util.Pattern and passed all the tests (of which there admittedly weren’t many).
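
Here is a minimal sketch of the pre-compilation advice from the list above (my own illustration, not part of the benchmark code; the class and method names are hypothetical):

import java.util.List;
import java.util.regex.Pattern;

class SplitInLoop {
    // compiled once; line.split("\\s+") would re-process the regex on every call
    private static final Pattern WHITESPACE = Pattern.compile("\\s+");

    static void process(List<String> lines) {
        for (String line : lines) {
            String[] fields = WHITESPACE.split(line); // reuses the compiled pattern
            // ... work with fields ...
        }
    }
}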

The complete list of libraries used:

Library name and version (release year) | In Maven Central | License | Avg ops/second | Avg ops/second (large text) | Passing tests
j.util.Pattern 1.8 (2015) | no (comes with JRE) | JRE license | 19 689 | 22 144 | 5 out of 5
dk.brics.automaton.Automaton 1.11-8 (2011) | yes | BSD | 2 600 225 | 115 374 276 | 2 out of 5
org.apache.regexp 1.4 (2005) | yes | Apache (?) | 6 738 | 16 895 | 4 out of 5
com.stevesoft.pat.Regex 1.5.3 (2009) | yes | LGPL v3 | 4 191 | 859 | 4 out of 5
net.sourceforge.jregex 1.2_01 (2002) | yes | BSD | 57 811 | 3 573 | 4 out of 5
kmy.regex.util.Regex 0.1.2 (2000) | no | Artistic License | 217 803 | 38 184 | 5 out of 5
org.apache.oro.text.regex.Perl5Matcher 2.0.8 (2003) | yes | Apache 2.0 | 31 906 | 2 383 | 4 out of 5
gnu.regexp.RE 1.1.4 (2005?) | yes | GPL (?) | 11 848 | 1 509 | 4 out of 5
com.basistech.tclre.RePattern 0.13.6 (2015) | yes | Apache 2.0 | 11 598 | 43 | 3 out of 5
com.karneim.util.collection.regex.Pattern 1.1.1 (2005?) | yes | ? | – | – | 2 out of 5
org.apache.xerces.impl.xpath.regex.RegularExpression 2.11.0 (2014) | yes | Apache 2.0 | – | – | 4 out of 5
com.ibm.regex.RegularExpression 1.0.2 (no longer available) | no | ? | – | – | –
RegularExpression.RE 1.1 (no longer available) | no | ? | – | – | –
gnu.rex.Rex ? (no longer available) | no | ? | – | – | –
monq.jfa.Regexp 1.1.1 (no longer available) | no | ? | – | – | –
com.ibm.icu.text.UnicodeSet (ICU4J) 56.1 (2015) | yes | ICU License | – | – | –

If you want to re-run the tests, check out the source code and run it as follows:

# we need to skip the tests since almost all libraries fail one test or another
mvn -Dmaven.test.skip=true clean package
# run the benchmarks
java -cp lib/jint.jar:target/benchmarks.jar net.greypanther.javaadvent.regex.RegexBenchmarks

Find the complete source for the benchmarks on GitHub: https://github.com/gpanther/regex-libraries-benchmarks

Kotlin for Android Developers

We Android developers face a difficult situation regarding our language limitations. As you may know, current Android development only supports Java 6 (with some small improvements from Java 7), so every day we need to deal with a really old language that cuts our productivity and forces us to write tons of boilerplate and fragile code that is difficult to read and maintain.

Luckily, at the end of the day we’re running on a Java Virtual Machine, so technically anything that can run on a JVM can be used to develop Android apps. There are many languages that generate bytecode a JVM can execute, so some alternatives are starting to become popular these days, and Kotlin is one of them.

What is Kotlin?

Kotlin is a language that runs on the JVM. It’s created by JetBrains, the company behind powerful tools such as IntelliJ, one of the most famous IDEs for Java developers.

Kotlin is a really simple language. One of its main goals is to provide a powerful language with a simple and reduced syntax. Some of its features are:

  • It’s lightweight: this point is very important for Android. The library we need to add to our projects is as small as possible. In Android we have hard restrictions regarding method count, and Kotlin only adds around 6000 extra methods.
  • It’s interoperable: Kotlin is able to communicate with the Java language seamlessly. This means we can use any existing Java library in our Kotlin code, so even though the language is young, we already have thousands of libraries we can work with. Besides, Kotlin code can also be used from Java code, which means we can create software that uses both languages. You can start writing new features in Kotlin and keep the rest of the codebase in Java.
  • It’s a strongly-typed language: though you barely need to specify any types throughout the code, because the compiler is able to infer the types of variables or the return types of functions in almost every situation. So you get the best of both worlds: a concise and safe language.
  • It’s null safe: one of the biggest problems of Java is null. You can’t specify when a variable or parameter can be null, so lots of NullPointerExceptions will happen, and they are really hard to detect while coding. Kotlin uses explicit nullity, which forces us to check for nulls when necessary.

Kotlin is currently at version 1.0.0 Beta 3, and we can expect the final version very soon. It’s quite ready for production anyway; there are already many companies successfully using it.

Why is Kotlin great for Android?

Basically because all its features fit perfectly well into the Android ecosystem. The library is small enough to let us work without ProGuard during development. Its size is equivalent to the support-v4 library, and there are other libraries we use in almost every project that are even bigger.

Besides, Android Studio (the official Android IDE) is built on IntelliJ. This means our IDE has excellent support for working with this language. We can configure our project in seconds and keep using the IDE as we are used to. We can keep using Gradle and all the run and debug features the IDE provides. It’s literally the same as writing the app in Java.

And obviously, thanks to its interoperability, we can use the Android SDK without any problems from Kotlin code. In fact, some parts of the SDK are even easier to use, because the interoperability is intelligent: it maps, for instance, getters and setters to Kotlin properties, and lets us write listeners as closures.

How to start using Kotlin in Android

It’s really easy. Just follow these steps:

  • Download the Kotlin plugin from the IDE plugins section
  • Create a Kotlin class in your module
  • Use the action “Configure Kotlin in Project…”
  • Enjoy

Some features

Kotlin has a lot of awesome features I won’t be able to explain here today. If you want to continue learning about it, you can check my blog and read my book. But today I’ll explain some interesting stuff that I hope makes you want more.

Null safety

As I mentioned before, Kotlin is null safe. If a type can be null, we need to specify it by adding a ? after the type. From that point on, every time we want to use a variable of that type, we need to check nullity.

For instance, this code won’t compile:

var artist: Artist? = null
artist.print()

The second line will show an error, because the nullity wasn’t checked. We could do something like this:

if (artist != null) {
    artist.print()
}

This shows another great Kotlin feature: smart casting. If we’ve checked the nullity (or the type) of a variable, we don’t need to cast it inside the scope of that check. So we can now use artist as a variable of type Artist inside the if. This works with any other check we may do (like after checking the instance type).

We have a simpler way to check nullity: using ? before calling a function on the object. And we can even provide an alternative for the null case by using the Elvis operator ?:

val name = artist?.name ?: ""

Data classes

In Java, if we want to create a data class, or POJO class (a class that only stores some state), we’d need to create a class with lots of fields, getters and setters, and probably toString and equals methods:

public class Artist {
    private long id;
    private String name;
    private String url;
    private String mbid;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getMbid() {
        return mbid;
    }

    public void setMbid(String mbid) {
        this.mbid = mbid;
    }

    @Override public String toString() {
        return "Artist{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", url='" + url + '\'' +
                ", mbid='" + mbid + '\'' +
                '}';
    }
}

In Kotlin, all the previous code can be substituted by this:

data class Artist(
    var id: Long,
    var name: String,
    var url: String,
    var mbid: String)

Kotlin uses properties instead of fields. A property is basically a field plus its getter and setter. We can declare those properties directly in the constructor, which as you can see is defined right after the name of the class, saving us some lines when we are not modifying the entry values.

The data modifier provides some extra features: a readable toString(), an equals() based on the properties defined in the constructor, a copy() function, and even a set of component functions that let us split an object into variables. Something like this:

val (id, name, url, mbid) = artist

Interoperability

We have some great interoperability features that help a lot in Android. One of them is the mapping of interfaces with a single method to a lambda. So a click listener like this one:

view.setOnClickListener(object : View.OnClickListener {
    override fun onClick(v: View) {
        toast("Click")
    }
})

can be converted into this:

view.setOnClickListener { toast("Click") }

Besides, getters and setters are mapped automatically to properties. This doesn’t add any kind of overhead, because the bytecode will in fact just call the original getters and setters. These are some examples:

supportActionBar.title = title
textView.text = title
contactsList.adapter = ContactsAdapter()

Lambdas

Lambdas will save tons of code, but the important thing is that they will let us do things that are impossible (or too verbose) without them. With them we can start thinking in a more functional way. A lambda is simply a way to specify a type that defines a function. We can, for instance, define a variable like this:

val listener: (View) -> Boolean

This is a variable that holds a function which receives a View and returns a Boolean. A closure is the way we define what the function will do:

val listener = { view: View -> view is TextView }

The previous function will get a View and return true if the view is an instance of TextView. As the compiler is able to infer the type, we don’t need to specify it. Though we can be more explicit if we want:

val listener: (View) -> Boolean = { view -> view is TextView }

With lambdas, we can avoid the use of callback interfaces. We can just set the function we want to be called when an operation finishes:

fun asyncOperation(value: Int, callback: (Boolean) -> Unit) {
    ...
    callback(true)
}

asyncOperation(5) { result -> println("result: $result") }

But there is a simpler alternative, because if a function only has one parameter, we can use the reserved word it:

asyncOperation(5) { println("result: $it") }

Collections

Collections in Kotlin are really powerful. They are written over Java collections, which means that when we get a result from any Java library (or the Android SDK, for instance), we are still able to use all the functions Kotlin provides.

The available collections we have are:

  • Iterable
  • Collection
  • List
  • Set
  • Map

And we can apply a lot of operations to them. These are a few of them:

  • filter
  • sort
  • map
  • zip
  • dropWhile
  • first
  • firstOrNull
  • last
  • lastOrNull
  • fold

You may see the complete set of operations in this article. So a complex operation such as a filter, a sort and a transformation can be defined quite explicitly:

parsedContacts
    .filter { it.name != null && it.image != null }
    .sortedBy { it.name }
    .map { Contact(it.id, it.name!!, it.image!!) }

We can define new immutable lists in a simple way:

val list = listOf(1, 2, 3, 4, 5)

Or if we want it to be mutable (we can add and remove items), we have a very nice way to access and modify the items, the same way we’d do with an array:

mutableList[0] = 1
val first = mutableList[0]

And the same thing with maps:

map["key"] = 1
val value = map["key"]

This is possible because we can overload some basic operators when implementing our own classes.

Extension functions

Extension functions let us add extra behaviour to classes we can’t modify, because they belong to a library or an SDK, for instance.

We could create an inflate() function for ViewGroup class:

fun ViewGroup.inflate(layoutRes: Int): View {
    return LayoutInflater.from(context).inflate(layoutRes, this, false)
}

And from now on, we can just use it as any other method:

val v = parent.inflate(R.layout.view_item)

Or even a loadUrl function for ImageView. We can make use of the Picasso library inside the function:

fun ImageView.loadUrl(url: String) {
    Picasso.with(context).load(url).into(this)
}

All ImageViews can use this function now:

contactImage.loadUrl(contact.imageUrl)

Interfaces

Interfaces in Kotlin can contain code, which simulates a simple form of multiple inheritance. A class can be composed of the code of many classes, not just a parent. Interfaces can’t, however, keep state. So if we define a property in an interface, the class that implements it must override that property and provide a value.

An example could be a ToolbarManager class that will deal with the Toolbar:

interface ToolbarManager {

    val toolbar: Toolbar

    fun initToolbar() {
        toolbar.inflateMenu(R.menu.menu_main)
        toolbar.setOnMenuItemClickListener {
            when (it.itemId) {
                R.id.action_settings -> App.instance.toast("Settings")
                else -> App.instance.toast("Unknown option")
            }
            true
        }
    }
}

This interface can be used by all the activities or fragments that use a Toolbar:

class MainActivity : AppCompatActivity(), ToolbarManager {

    override val toolbar by lazy { find<Toolbar>(R.id.toolbar) }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        initToolbar()
        ...
    }
}


When expression

When is the alternative to switch in Java, but it is much more powerful. It can literally check anything. A simple example:

val cost = when(x) {
    in 1..10 -> "cheap"
    in 10..100 -> "regular"
    in 100..1000 -> "expensive"
    in specialValues -> "special value!"
    else -> "not rated"
}

We can check that a number is inside a range, or even inside a collection (specialValues is a list). But if we don’t pass a parameter to when, we can check whatever we need. Something as crazy as this would be possible:

val res = when {
    x in 1..10 -> "cheap"
    s.contains("hello") -> "it's a welcome!"
    v is ViewGroup -> "child count: ${v.getChildCount()}"
    else -> ""
}

Kotlin Android Extensions

Another tool the Kotlin team provides for Android developers. It is able to read an XML layout and inject a set of properties into an activity, fragment or view, with the views inside the layout cast to their proper types.

If we have this layout:

<FrameLayout
    xmlns:android="..."
    android:id="@+id/frameLayout"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/welcomeText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>

</FrameLayout>

We just need to add this synthetic import:

import kotlinx.android.synthetic.main.*

And from that moment, we can use the views in our Activity:

override fun onCreate(savedInstanceState: Bundle?) {
    super<BaseActivity>.onCreate(savedInstanceState)
    setContentView(R.layout.main)
    frameLayout.setVisibility(View.VISIBLE)
    welcomeText.setText("I'm a welcome text!!")
}

It’s that simple.

Anko

Anko is a library the Kotlin team is developing to simplify Android development. Its main goal is to provide a DSL for declaring views using Kotlin code:

verticalLayout {
    val name = editText()
    button("Say Hello") {
        onClick { toast("Hello, ${name.text}!") }
    }
}

But it includes many other useful things. For instance, a great way to navigate to other activities:

startActivity<DetailActivity>("id" to res.id, "name" to res.name)

It just receives a set of Pairs and adds them to a bundle when creating the intent used to navigate to the activity (which is specified as the type parameter of the function).

We also have direct access to system services:

context.layoutInflater
context.notificationManager
context.sensorManager
context.vibrator

Or easy ways to create toasts and alerts:

toast(R.string.message)
longToast("Wow, such a duration")

alert("Yes/no Alert") {
    positiveButton("Yes") { submit() }
    negativeButton("No") {}
}.show()

And one I love: a simple DSL to deal with asynchrony:

async {
    val result = longRequest()
    uiThread { bindForecast(result) }
}

It also provides a good set of tools for working with SQLite and cursors. The ManagedSQLiteOpenHelper provides a use method which receives the database and can call its functions directly:

dbHelper.use {
    select("TABLE_NAME").where("_id = {id}", "id" to 20)
}

As you can see, it has a nice select DSL, but also a simple create function:

db.createTable("TABLE_NAME", true,
        "_id" to INTEGER + PRIMARY_KEY,
        "name" to TEXT)

When you are dealing with a cursor, you can make use of extension functions such as parseList, parseOpt or parseClass, which will help with parsing the result.

Conclusion

As you can see, Kotlin simplifies Android development in many ways. It will boost your productivity and let you solve the usual problems in a very different and simpler way.

My recommendation is that you at least try it and play a little with it. It’s a really fun language and very easy to learn. If you think this language is for you, you can continue learning it by reading the Kotlin for Android Developers book.

Microservices and Java EE

Microservices-based architectures are everywhere these days. We hear so much about how today’s innovators, like Netflix and Amazon, use them to be even more successful in generating more business. But what about the rest of us, who are using Java EE application servers and writing classical systems? Are we all doing it wrong? And how can we make our technical designs fit for the future?

Monoliths
First of all, let’s look into those classical systems, also called monolithic applications. Even if the word has a bad smell these days, this is the way we built software for a very long time. It basically describes the fact that we build individual applications to fulfill a certain functionality.
And monolithic refers to exactly what Java EE, or better, the initial Java 2 Enterprise Edition, was designed for: centralized applications which could be scaled and clustered, but which were not necessarily built to be resilient by design. They relied on infrastructure and operations in failure scenarios most of the time.

Traditionally, Java EE applications followed some core patterns and were separated into three main layers: presentation, business, and integration. The presentation layer was packaged in Web Application Archives (WARs) while business and integration logic went into separate Java Archives (JARs). Bundled together as one deployment unit, a so-called Enterprise Archive (EAR) was created. The technology and best practices around Java EE have always been sufficient to build a well-designed monolithic application. But most enterprise-grade projects tend to lose a close focus on architecture. This is why sometimes a well-designed spaghetti ball was the best way to visualize the project dependencies and internal structures. And when this happened, we quickly experienced some significant drawbacks. Because everything was too coupled and integrated, even making small changes required a lot of work (or sometimes major refactorings), and before putting the reworked parts into production, the applications also had to be tested with great care, from beginning to end.
The whole application was a lot more than just programmed artifacts: it also consisted of countless deployment descriptors and server configuration files, in addition to properties for relevant third-party environments.

The high risk of changes and the complexity of bringing new configurations into production led to fewer and fewer releases. A new release saw the light of day once or twice a year. Even the team structures were heavily influenced by these monolithic software architectures. The multi-month test cycle might have been the most visible proof. But besides that, projects with lifespans longer than five years tended to have huge bug and feature databases. And as if this wasn’t hard enough, the testing was barely qualified – no acceptance tests, and hardly any written business requirements or identifiable domains in design and usability.

Handling these kinds of enterprise projects was a multi-team effort and required a lot of people to oversee the entire project. From a software design perspective, the resulting applications had a very technical layering. Business components or domains were mostly driven by existing database designs or dated business object definitions. Our industry had to learn those lessons, and we managed not only to keep these enterprise monoliths under control, but also to invent new paradigms and methodologies to manage them even better.

So, even if the word „monolith“ is considered a synonym for a badly designed piece of software today, those architectures had a number of benefits. Monolithic applications are simple to develop, since IDEs and other development tools are oriented around developing a single application. It’s a single archive that can be shared between different teams and that encapsulates all the functionality. Plus, the industry standard around Java EE gave enterprises solid access to the resources needed to not only build but also operate those applications. Software vendors have built a solid knowledge base around Java EE, and sourcing isn’t a big issue in general. And having worked with these technologies for more than 15 years now, the industry is finally able to manufacture these applications in a more or less productized and standardized way. We know which build tools to use, which processes scale in large teams and how to scale those applications. And even integration testing got a lot easier since tools like Arquillian emerged. Still, we are paying a price for the convenience of a mature solution like Java EE. Code bases can grow very large. When applications stay in business for longer, they get more and more complex and harder for the development teams to understand. And even if we know how to configure application servers, the one or two special settings in each project still cause major headaches in operations.

Microservices
But our industry doesn’t stand still. And the next evolution of system architecture and design saw the light of day just a couple of years ago. With the growing complexity of centralized integration components and the additional overhead in the connected applications, the search for something more lightweight and more resilient began. And the whole theory finally shifted away from big and heavyweight infrastructures and designs. Alongside this, IT departments started to revisit application servers together with wordy protocols and interface technologies.

With the proven impracticality of most of the service implementations in SOA- and ESB-based projects, the technical design went back to more handy artifacts and services. Instead of intelligent routing and transformations, microservices use simple routes and encapsulate logic in the endpoint itself. And even if the name implies a defined size, there isn’t one. Microservices are about having a single business purpose. And, even more vexing for enterprise settings, the most effective runtime for microservices isn’t necessarily a full-blown application server. It might just be a servlet engine, or the JVM may already be sufficient as an execution environment. With the growing runtime variations and the broader variety of programming language choices, this development turned into yet another operations nightmare. And even developers today are a little lost when it comes to defining microservices and how to apply this design to existing applications.
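
To make the last point concrete, here is a minimal sketch (my own illustration, not from the article) of a single-purpose HTTP endpoint built only on the JDK’s com.sun.net.httpserver package – the JVM itself is the runtime, no application server involved:

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class GreetingService {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // one endpoint, one business purpose
        server.createContext("/greeting", exchange -> {
            byte[] body = "Hello from a tiny service".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}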

Microservices are designed to be small, stateless, in(ter)dependent and fully contained applications, ideally deployable everywhere, because the deployment contains all the needed parts.

Microservices are designed to be small, but defining “small” is subjective. Some estimation techniques, like lines of code, function points or use cases, may be used, but generally “small” isn’t about size.
In the book Building Microservices, the author Sam Newman suggests a few techniques to define the size of a microservice; they are:

  • small enough to be owned by a small agile development team,
  • re-writable within one or two agile sprints (typically two to four weeks), or
  • not so complex that the service needs to be divided up further

A stateless application handles every request using only the information contained within it. A microservice must be stateless, and it must service each request without remembering previous communications from the external system.

A microservice must service requests independently, though it may collaborate with other microservices within the eco-system. For example, a microservice that generates a unique report after interacting with other microservices is an interdependent system. In this scenario, the microservices that only provide the necessary data to the reporting microservice may be independent services. A full-stack application is individually deployable: it has its own server, network and hosting environment, and the business logic, data model and service interface (API / UI) are all part of the system. A microservice must be a full-stack application.

Why now? And why me?
"I've been going through enough already and the next Java EE version is already under development. We're not even using the latest Java EE 7. There are so many productivity features coming: I don't care if I build a monolith if it just does the job and we can handle it." And I do understand these thoughts. I like Java EE as much as you probably do, and I was really intrigued to find out why microservices evolved lately. The answer to those two questions might not be a simple one, but let's try:

Looking at all the problems in our industry and the still very high number of failing projects, it is easy to understand the need to grow and eliminate problems. A big part of new hypes and revamped methodologies is the human will to grow.

And instead of "never touch a running system", our industry usually wants to do something better than the last time.
So, to answer the second part of the question first: “You probably want to look into this, because not doing anything isn’t a solution.”

As developers, architects or QA engineers, we basically all signed up for lifelong learning. I can only speak for myself at this point, but this is a very big part of why I like this job so much. The first part of the question isn't so easy to answer.

Innovation and constant improvement are the drivers behind enterprises and enterprise-grade projects. Without innovation, there will be outdated and expensive infrastructure components (e.g., host systems) that are kept alive way longer than the software they are running was designed for. Without constant validation of the status quo, there will be implicit or explicit vendor lock-in. Aging middleware runs into extended support, and only a few suppliers will still be able to provide the know-how to develop for it. Platform stacks that stay behind the latest standards tempt teams into quick-and-dirty solutions that ultimately produce technical debt. The most prominent and quickest-moving projects in the microservices space are Open Source projects: Netflix OSS, Spring, Camel, Fabric8 and others are prominent examples. And it became a lot easier to operate polyglot full-stack applications with today's PaaS offerings, which are also backed by Open Source projects like Docker and Kubernetes. In our fast-paced world, the lead times for legally induced software changes or simple bug fixes are shrinking. Very few enterprises still have the luxury of working with month-long production cycles, and the need for software to generate real value for the business grows ever stronger. And this is not only true for completely software-driven companies like Uber, Netflix, Amazon and others.

We need to build systems for flexibility and resiliency, not just efficiency and robustness.  And we need to start building them today with what we have.

And I really want to make sure you're reading this statement the right way: I am not saying that everything from today on is a microservice. But we should:

  • be aware of the areas where microservices can help,
  • be able to change existing applications towards the new approach when it makes sense,
  • and be able to act as good consultants for those asking about the topic.

And Java EE isn't going anywhere soon. It will be complemented, and the polyglot world will move in at various places, but we're not going to get rid of it soon. And this is the good news.

Learn more about how to evolve your Java EE applications into microservices by downloading my free eBook from developers.redhat.com. Make sure to re-watch my O'Reilly Webcast about "Java EE Microservices Architecture", and also follow blog.eisele.net for more technical information about WildFly Swarm, Docker and Kubernetes with OpenShift.

The importance of tuning your thread pools

Whether you know it or not, your Java web application is most likely using a thread pool to handle incoming requests. This is an implementation detail that many overlook, but sooner or later you will need to understand how the pool is used, and how to correctly tune it for your application. This article aims to explain the threaded model, what a thread pool is, and what you need to do to correctly configure them.

Single Threaded

Let us start with some basics and progress through the evolution of the threaded model. No matter which application server or framework you use (Tomcat, Dropwizard, Jetty), they all follow the same fundamental approach. Buried deep inside the web server is a socket. This socket listens for incoming TCP connections and accepts them. Once accepted, data can be read from the newly established TCP connection, parsed, and turned into an HTTP request. This request is then handed off to the web application, to do with what it wants.

To provide an understanding of the role of threads, we won't use an application server; instead we will build a simple server from scratch. This server mirrors what most application servers do under the hood. To start with, a single-threaded web server may look like this:

ServerSocket listener = new ServerSocket(8080);
try {
 while (true) {
   Socket socket = listener.accept();
   try {
     handleRequest(socket);
   } catch (IOException e) {
     e.printStackTrace();
   }
 }
} finally {
 listener.close();
}

This code creates a ServerSocket on port 8080, then in a tight loop the ServerSocket checks for new connections to accept. Once accepted, the socket is passed to a handleRequest method. That method would typically read the HTTP request, do whatever processing is needed, and write a response. In this simple example, handleRequest reads a single line and returns a short HTTP response. It would be normal for handleRequest to do something more complex, such as reading from a database or conducting some other kind of IO.

final static String response =
   "HTTP/1.0 200 OK\r\n" +
   "Content-type: text/plain\r\n" +
   "\r\n" +
   "Hello World\r\n";

public static void handleRequest(Socket socket) throws IOException {
 // Read the input stream, and return "200 OK"
 try {
   BufferedReader in = new BufferedReader(
     new InputStreamReader(socket.getInputStream()));
   log.info(in.readLine());

   OutputStream out = socket.getOutputStream();
   out.write(response.getBytes(StandardCharsets.UTF_8));
 } finally {
   socket.close();
 }
}

As there is only a single thread handling all accepted sockets, each request must be fully handled before the next is accepted. In a real application, it could be normal for the equivalent handleRequest method to take on the order of 100 milliseconds to return. If this were the case, the server would be limited to handling only 10 requests per second, one after the other.

Multi-threaded

Even though handleRequest may be blocked on IO, the CPU is free to handle more requests. With a single-threaded approach this is not possible. Thus this server can be improved to allow concurrent operations, by creating multiple threads:

public static class HandleRequestRunnable implements Runnable {

 final Socket socket;

 public HandleRequestRunnable(Socket socket) {
   this.socket = socket;
 }

 public void run() {
   try {
     handleRequest(socket);
   } catch (IOException e) {
     e.printStackTrace();
   }
 }
}

ServerSocket listener = new ServerSocket(8080);
try {
 while (true) {
   Socket socket = listener.accept();
   new Thread(new HandleRequestRunnable(socket)).start();
 }
} finally {
 listener.close();
}

Here, accept() is still called in a tight loop within a single thread, but once a TCP connection is accepted and a socket is available, a new thread is spawned. This spawned thread executes a HandleRequestRunnable, which simply calls the same handleRequest method from above.

Creating the new thread frees up the original accept() thread to handle more TCP connections, and allows the application to handle requests concurrently. This technique is referred to as "thread per request", and is the most popular approach. It is worth noting there are other approaches, such as the event-driven asynchronous model that NGINX and Node.js employ, but they don't use thread pools and are thus out of scope for this article.

In the thread-per-request approach, creating a new thread (and later destroying it) can be expensive, as both the JVM and the OS need to allocate resources. Additionally, in the above implementation the number of threads being created is unbounded. Being unbounded is very problematic, as it can quickly lead to resource exhaustion.

Resource exhaustion

Each thread requires a certain amount of memory for its stack. On recent 64-bit JVMs, the default stack size is 1024KB. If the server receives a flood of requests, or the handleRequest method becomes slow, the server may end up with a huge number of concurrent threads. Thus, to manage 1000 concurrent requests, the 1000 threads would consume 1GB of the JVM's RAM just for the threads' stacks. In addition, the code executing in each thread will be creating objects on the heap to process the request. This very quickly adds up, and can exceed the heap space assigned to the JVM, putting pressure on the garbage collector, causing thrashing and eventually leading to OutOfMemoryErrors.

The threads consume not only RAM; they may use other finite resources, such as file handles or database connections. Exceeding these may lead to other types of errors or crashes. Thus, to avoid exhausting resources, it is important to avoid unbounded data structures.

Not a panacea, but the stack size issue can be somewhat mitigated by tuning the stack size with the -Xss flag. A smaller stack will reduce the per-thread overhead, but potentially leads to StackOverflowErrors. Your mileage will vary, but for many applications the default 1024KB is excessive, and smaller 256KB or 512KB values might be more appropriate. The smallest value Java will allow is 160KB.
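For example, a smaller stack can be requested at JVM startup like this (the jar name is just a placeholder):

java -Xss256k -jar application.jar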

Thread pool

To avoid continuously creating new threads, and to bound the maximum number, a simple thread pool can be used. Simply put, the pool keeps track of all threads, creating new ones when needed up to an upper bound, and where possible reusing idle threads.

ServerSocket listener = new ServerSocket(8080);
ExecutorService executor = Executors.newFixedThreadPool(4);
try {
 while (true) {
   Socket socket = listener.accept();
   executor.submit( new HandleRequestRunnable(socket) );
 }
} finally {
 listener.close();
}

Now, instead of directly creating threads, this code uses an ExecutorService, which submits work (in the form of Runnables) to be executed across a pool of threads. In this example, a fixed thread pool of four threads is used to handle all incoming requests. This bounds the number of "in-flight" requests, and thus places bounds on resource usage.

In addition to newFixedThreadPool, the Executors utility class also provides a newCachedThreadPool method. This suffers from the same unbounded number of threads as before, but whenever possible it reuses previously created, now idle, threads. Typically this type of pool is useful for short-lived requests that do not block on external resources.

A ThreadPoolExecutor can also be constructed directly, allowing its behaviour to be customised. For example, the minimum and maximum number of threads within the pool can be defined, as well as policies for when threads are created and destroyed. An example of this is shown shortly.

Work queue

In the fixed thread pool case, the observant reader may wonder what happens if all threads are busy and a new request comes in. The ThreadPoolExecutor uses a queue to hold pending requests until a thread becomes available. Executors.newFixedThreadPool by default uses an unbounded LinkedBlockingQueue. This again leads to the resource exhaustion problem, albeit much more slowly, since each queued request is smaller than a full thread and will typically not use as many resources. However, in our examples each queued request holds a socket which (depending on the OS) would be consuming a file handle. This is the kind of resource the operating system will limit, so it may not be best to hold on to it unless needed. Therefore it also makes sense to bound the size of the work queue.

public static ExecutorService newBoundedFixedThreadPool(int nThreads, int capacity) {
 return new ThreadPoolExecutor(nThreads, nThreads,
     0L, TimeUnit.MILLISECONDS,
      new LinkedBlockingQueue<Runnable>(capacity),
     new ThreadPoolExecutor.DiscardPolicy());
}

public static void boundedThreadPoolServerSocket() throws IOException {
 ServerSocket listener = new ServerSocket(8080);
 ExecutorService executor = newBoundedFixedThreadPool(4, 16);
 try {
   while (true) {
     Socket socket = listener.accept();
     executor.submit( new HandleRequestRunnable(socket) );
   }
 } finally {
   listener.close();
 }
}

Again, we create a thread pool, but instead of using the Executors.newFixedThreadPool helper method, we construct the ThreadPoolExecutor ourselves, passing a bounded LinkedBlockingQueue capped at 16 elements. Alternatively, an ArrayBlockingQueue could have been used, which is an implementation of a bounded buffer.

If all threads are busy and the queue fills up, what happens next is defined by the last argument to the ThreadPoolExecutor. In this example, a DiscardPolicy is used, which simply discards any work that would overflow the queue. There are other policies, such as the AbortPolicy, which throws an exception, or the CallerRunsPolicy, which executes the job on the caller's thread. The CallerRunsPolicy provides a simple way to self-limit the rate at which jobs can be added; however, it could be harmful, blocking a thread that should stay unblocked.

A good default policy is Discard or Abort, both of which drop the work. In these cases it would be easy to return a simple error to the client, such as an HTTP 503 "Service Unavailable". Some would argue that the queue size could just be increased, and all work would then eventually be run. However, users are unwilling to wait forever, and if the rate at which work comes in fundamentally exceeds the rate at which it can be executed, the queue will grow indefinitely. Instead, the queue should only be used to smooth out bursts of requests or to handle short stalls in processing. In normal operation the queue should be empty.
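As a rough sketch of how a rejection can be surfaced to the client: if the AbortPolicy is chosen instead, submit throws a RejectedExecutionException, which the accept loop can translate into a minimal 503 response. This is illustrative, not part of the server code above:

try {
  executor.submit(new HandleRequestRunnable(socket));
} catch (RejectedExecutionException e) {
  // Pool and queue are both full; fail fast instead of queueing indefinitely
  OutputStream out = socket.getOutputStream();
  out.write("HTTP/1.0 503 Service Unavailable\r\n\r\n".getBytes(StandardCharsets.UTF_8));
  socket.close();
}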

How many threads?

Now that we understand how to create a thread pool, the hard question is: how many threads should be available? We have determined that the maximum number should be bounded so as not to cause resource exhaustion. This includes all types of resources: memory (stack and heap), open file handles, open TCP connections, the number of connections a remote database can handle, and any other finite resource. Conversely, if the threads are CPU-bound instead of IO-bound, the number of physical cores should be considered finite, and perhaps no more than one thread per core should be created.

This all depends on the work the application is doing. A user should run load tests using various pool sizes and a realistic mix of requests, each time increasing the thread pool size until breaking point. This makes it possible to find the upper bound at which resources are exhausted. In some cases it may be prudent to increase the number of available resources, for example making more RAM available to the JVM, or tuning the OS to allow more file handles. However, at some point the theoretical upper bound will be reached, and it should be noted, but this is not the end of the story.

Little’s Law


Queuing theory, in particular Little's Law, can be used to help understand the properties of the thread pool. In simple terms, Little's Law describes the relationship between three variables: L, the number of requests in-flight; λ, the rate at which new requests arrive; and W, the average time to handle a request. The law states that L = λW. For example, if 10 requests arrive per second and each request takes one second to process, there is an average of 10 requests in-flight at any time. In our example, this maps to using 10 threads. If the time to process a single request doubles, the average number of in-flight requests also doubles to 20, and thus requires 20 threads.

Understanding the impact that execution time has on in-flight requests is very important. It is common for some backend resource (such as a database) to stall, causing requests to take longer to process and quickly exhausting a thread pool. Therefore the theoretical upper bound may not be an appropriate limit for the pool size. Instead, a limit should be placed on execution time and used in combination with the theoretical upper bound.

For example, let's say the maximum number of in-flight requests that can be handled is 1000 before the JVM exceeds its memory allocation. If we budget for each request to take no longer than 30 seconds, we should expect in the worst case to handle no more than 33 ⅓ requests per second. However, if everything is working correctly, and requests take only 500ms to handle, the application can handle 2000 requests per second on only 1000 threads. It may also be reasonable to specify that a queue can be used to smooth out short bursts of delay.
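As a tiny worked sketch of that arithmetic, using the numbers assumed above:

// Little's Law: L = λ * W
double lambda = 2000.0;          // requests arriving per second
double w = 0.5;                  // average seconds to handle one request
double inFlight = lambda * w;    // average number of requests in-flight
System.out.println(inFlight);    // 1000.0, i.e. 1000 threads needed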

Why the hassle?

If the thread pool has too few threads, you run the risk of under-utilising resources and turning users away unnecessarily. However, if too many threads are allowed, resource exhaustion occurs, which can be more damaging.

Not only can local resources be exhausted, but it is possible to adversely impact others. Take, for example, multiple applications querying the same backend database. Databases typically have a hard limit on the number of concurrent connections. If one misbehaving, unbounded application consumes all these connections, it blocks the others from accessing the database, causing a widespread outage.

Even worse, a cascading failure could occur. Imagine an environment with multiple instances of a single application behind a common load balancer. If one of the instances begins to run out of memory due to excessive in-flight requests, the JVM will spend more time garbage collecting and less time handling requests. That slowdown will reduce the capacity of that one instance and force the other instances to handle a higher fraction of incoming requests. As they now handle more requests with their unbounded thread pools, the same problem occurs. They run out of memory and again begin aggressively garbage collecting. This vicious cycle cascades across all instances until there is a systemic failure.

Far too often I've observed that load testing is not conducted and an arbitrarily high number of threads is allowed. In the common case the application can happily process requests at the incoming rate using a small number of threads. If, however, processing the requests depends on a remote service, and that service temporarily slows down, the impact of increasing W (the average processing time) can very quickly exhaust the pool. Because the application was never load tested at the maximum thread count, all the resource exhaustion issues outlined before are exhibited.

How many thread pools?

In microservice or service-oriented architectures (SOA), it is normal to access multiple remote backend services. This setup is particularly susceptible to failures, and thought should be given to dealing with them gracefully. If a remote service's performance degrades, the thread pool can quickly hit its limit and subsequent requests are dropped. However, not all requests may require the unhealthy backend, yet since the thread pool is full those requests are needlessly dropped too.

The failure of each backend can be isolated by providing backend-specific thread pools. In this pattern, there is still a single request worker pool, but if a request needs to call a remote service, the work is transferred to that backend's thread pool. This leaves the main request pool unburdened by a single slow backend. Then only requests needing that particular backend are impacted when it malfunctions.
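A minimal sketch of the pattern might look like this; the two pools reuse the newBoundedFixedThreadPool helper from above, and the call...() methods are hypothetical stand-ins for real backend clients:

// One bounded pool per backend, so a slow reporting service
// cannot starve requests that only need the billing service
ExecutorService billingPool = newBoundedFixedThreadPool(8, 32);
ExecutorService reportingPool = newBoundedFixedThreadPool(4, 16);

// Inside the request handler, remote work is handed to the backend's own pool
Future<String> invoice = billingPool.submit(() -> callBillingService());
Future<String> report = reportingPool.submit(() -> callReportingService());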

A final benefit of multiple thread pools is that they help avoid a form of deadlock. If every available thread becomes blocked on the result of a yet-to-be-processed request, a deadlock occurs and no thread is able to move forward. When using multiple pools, and with a good understanding of the work they execute, this issue can be somewhat mitigated.

Deadlines and other best practices

A common best practice is to ensure there is a deadline on all remote calls. That is, if the remote service does not respond within a reasonable time, the request is abandoned. The same technique can be used for work within the thread pool. Specifically, if a thread has been processing one request for longer than a defined deadline, it should be terminated, making room for a new request and placing an upper bound on W. This may seem like a waste, but if the user (typically a web browser) is waiting for a response, then after 30 seconds the browser might just give up anyway, or more likely the user becomes impatient and navigates away.
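In plain Java, such a deadline can be sketched with a Future; callRemoteService is again a hypothetical helper:

Future<String> result = backendPool.submit(() -> callRemoteService());
try {
  String response = result.get(30, TimeUnit.SECONDS); // bounds W at 30 seconds
} catch (TimeoutException e) {
  result.cancel(true); // interrupt the worker, freeing its thread for new requests
  // ...return an error to the user instead of waiting indefinitely
} catch (InterruptedException | ExecutionException e) {
  // ...the call itself failed or was interrupted
}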

Failing fast is another approach that can be taken when creating pools for backends. If a backend has failed, the thread pool will quickly fill up with requests waiting to connect to the unresponsive backend. Instead, the backend can be flagged as unhealthy, and all subsequent requests can fail instantly rather than waiting needlessly. Note, however, that a mechanism is needed to determine when the backend has become healthy again.

Finally, if a request needs to call multiple backends independently, it should be possible to call them in parallel instead of sequentially. This reduces the wait time, at the cost of more threads.
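With Java 8, parallel calls to two independent backends can be sketched using CompletableFuture; the helper methods and per-backend pools are assumed, as above:

CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> callServiceA(), backendPoolA);
CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> callServiceB(), backendPoolB);

// The total wait is roughly max(A, B) instead of A + B
String combined = a.thenCombine(b, (resA, resB) -> resA + resB).join();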

Luckily, there is a great library, Hystrix, which packages many of these best practices and exposes them in a simple and safe way.

Conclusion

Hopefully this article has improved your understanding of thread pools. By understanding the application's needs, and using a combination of the maximum thread count and the average response time, an appropriate thread pool can be determined. Not only will this avoid cascading failures, it will also help you plan and provision your service.

Even though your application may not explicitly use a thread pool, one is implicitly used by your application server or higher-level abstraction. Tomcat, JBoss, Undertow and Dropwizard all provide multiple tunables for their thread pools (the pool in which your servlet is executed).

Like what you read, find more articles like this on bramp.net, or follow @TheBramp.

JIT Compiler, Inlining and Escape Analysis

Just-in-time (JIT)

The just-in-time (JIT) compiler is the brain of the Java Virtual Machine. Nothing in the JVM affects performance more than the JIT compiler.

For a moment let's step back and look at examples of compiled and non-compiled languages.

Languages like Go, C and C++ are called compiled languages because their programs are distributed as binary (compiled) code, which is targeted to a particular CPU.

On the other hand, languages like PHP and Perl are interpreted. The same program source code can be run on any CPU as long as the machine has the interpreter. The interpreter translates each line of the program into binary code as that line is executed.

Java attempts to find a middle ground here. Java applications are compiled, but instead of being compiled into a specific binary for a specific CPU, they are compiled into bytecode. This gives Java the platform independence of an interpreted language. But Java doesn't stop there.

In a typical program, only small sections of the code are executed frequently, and the performance of an application depends primarily on how fast those sections are executed. These critical sections are known as the hot spots of the application.
The more often the JVM executes a particular code section, the more information it has about it. This allows the JVM to make smart, optimized decisions and compile small pieces of hot code into CPU-specific binary code. This process is called just-in-time (JIT) compilation.

Now let’s run a small program and observe JIT compilation.

public class App {
  public static void main(String[] args) {
    long sumOfEvens = 0;
    for(int i = 0; i < 100000; i++) {
      if(isEven(i)) {
        sumOfEvens += i;
      }
    }
    System.out.println(sumOfEvens);
  }

  public static boolean isEven(int number) {
    return number % 2 == 0;
  }
}


#### Run
javac App.java && \
java -server \
     -XX:-TieredCompilation \
     -XX:+PrintCompilation \
     -XX:CompileThreshold=100000 App


#### Output
87    1             App::isEven (16 bytes)
2499950000

The output tells us that the isEven method was compiled. I intentionally disabled TieredCompilation to get only the most frequently compiled code.

JIT-compiled code gives a great performance boost to your application. Want to check? Write a simple benchmark.
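For example, a naive timing harness like the sketch below, dropped into the main method of the App class above, gives a rough idea (for anything serious, a proper harness such as JMH avoids the usual micro-benchmarking pitfalls like warm-up and dead-code elimination):

long start = System.nanoTime();
long sumOfEvens = 0;
for (int i = 0; i < 100_000_000; i++) {
  if (isEven(i)) {
    sumOfEvens += i;
  }
}
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println(sumOfEvens + " computed in " + elapsedMs + " ms");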

Inlining

Inlining is one of the most important optimizations that the JIT compiler makes. Inlining replaces a method call with the body of the method to avoid the overhead of method invocation.

Let’s run the same program again and this time observe inlining.

#### Run
javac App.java && \
java -server \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+PrintInlining \
     -XX:-TieredCompilation App

#### Output
@ 12   App::isEven (16 bytes)   inline (hot)
2499950000

Inlining again will give a great performance boost to your application.

Escape Analysis

Escape analysis is a technique by which the JIT compiler can analyze the scope of a new object's uses and decide whether to allocate it on the Java heap or, for objects that do not escape, to break the object apart and handle its members directly (scalar replacement). It also eliminates locks for all non-globally-escaping objects.

Let’s run a small program and observe garbage collection.

public class App {
  public static void main(String[] args) {
    long sumOfArea = 0;
    for(int i = 0; i < 10000000; i++) {
      Rectangle rect = new Rectangle(i+5, i+10);
      sumOfArea += rect.getArea();
    }
    System.out.println(sumOfArea);
  }

  static class Rectangle {
    private int height;
    private int width;

    public Rectangle(int height, int width) {
      this.height = height;
      this.width = width;
    }

    public int getArea() {
      return height * width;
    }
  }
}

In this example, Rectangle objects are created and used only within the loop, so they are characterised as NoEscape; their members can be handled directly (scalar replacement) instead of allocating the objects on the heap. Specifically, this means that no garbage collection needs to happen.

Let’s run the program without EscapeAnalysis.

#### Run
javac App.java && \
java -server \
     -verbose:gc \
     -XX:-DoEscapeAnalysis App

#### Output
[GC (Allocation Failure)  65536K->472K(251392K), 0.0007449 secs]
[GC (Allocation Failure)  66008K->440K(251392K), 0.0008727 secs]
[GC (Allocation Failure)  65976K->424K(251392K), 0.0005484 secs]
16818403770368

As you can see, GC kicked in. "Allocation Failure" means no more space is left in the young generation to allocate objects; it is the normal cause of a young GC.

This time let’s run it with EscapeAnalysis.

#### Run
javac App.java && \
java -server \
    -verbose:gc \
    -XX:+DoEscapeAnalysis App

#### Output
16818403770368

No GC happened this time, which basically means that creating short-lived, narrowly scoped objects does not necessarily introduce garbage.

The DoEscapeAnalysis option is enabled by default. Note that only the Java HotSpot Server VM supports it.

As a consequence, we should all avoid premature optimization, focus on writing more readable and maintainable code, and let the JVM do its job.

Quick Web App Prototyping with Spring Boot & MongoDB

Back in one of my previous projects I was asked to produce a little contingency application. The schedule was tight and the scope simple. The in-house coding standard is PHP, so trying to get a classic Java EE stack in place would have been a real challenge. And, to be really honest, completely oversized. So, what then? I took the chance and gave Spring a try. I had used it before, but only in old versions, hidden away in the tech stack of the portal software I was plagued with at the time.

My goal was to have something the WebOps team could simply put on a server with Java installed and run. No fiddling with dozens of XML configurations and memory fine-tuning. Just as easy as java -jar application.jar.
It was the perfect call for Spring Boot. This Spring project is all about making it easy to bring you, the developer, up to speed, and taking away the need for loads of configuration and boilerplate coding.

Another thing my project was crying out for was document-oriented data storage. I mean, the main purpose of the application was to offer a digital version of a real-world paper form. So why create a relational mess if we can represent the document as a document?! I had used MongoDB in a couple of small projects before, so I decided to go with it.

What has this got to do with this article? Well, I will show you how quickly you can bring together all the bits and pieces needed for a web application. Spring Boot will make a lot of things fairly easy and will keep the code minimal. And at the end you will have a JAR file, which is executable and can be deployed by just dropping it onto a server. Your WebOps will love you for it.

Let’s imagine we are about to create the next big product administration web application. As it is the next big thing, it needs a big name: Productr (this is the reason I am a software engineer and not in sales or marketing…).
Productr will do amazing things and this article will show you its early stages, which are:

  • providing a simple REST interface to query all available products
  • loading these products from a MongoDB
  • providing a production-ready monitoring facility
  • displaying all products by using a JavaScript UI

All you need to start is:

  • Java 8
  • Maven
  • Your favourite IDE (IntelliJ, Eclipse, vi, edlin, a butterfly…)
  • A browser (ok, or Internet Explorer / MS Edge, but who would really want this?!)

And for the impatient, the code is also available on GitHub.

Let’s get started

Create a pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.0.RELEASE</version>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <groupId>net.h0lg.tutorials.rapid</groupId>
    <artifactId>rapid-resting</artifactId>
    <version>1.0</version>


    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>


    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

In these few lines a lot of stuff is already happening. Most important is the defined parent project. This will bring us a lot of useful and needed dependencies like logging, the Tomcat runtime and lots more. Thanks to Spring’s modularity, everything is re-configurable via pom.xml or dependency injection. For getting everything up quickly the defaults are absolutely fine. (Convention over configuration, anybody?)

Now, create the obligatory Maven folder structure:

mkdir -p src/main/java src/main/resources src/test/java src/test/resources

And we are settled.

Start the engines

Let’s get to work. We want to offer a REST interface to get access to our huge amount of products. So let’s start with creating a REST collection available under /api/products. To do so we have to do a few things:

  1. Our “data model” holding all information about our incredible products needs to be created
  2. We need a controller offering a method which does everything necessary to answer a GET request
  3. Create the main entry point for our application

The data model is pretty simple and done quickly. Just create a package called demo.model and a class called Product in it. The Product class is very straightforward:

package demo.model;

import java.io.Serializable;

/**
 * Our very important and sophisticated data model
 */
public class Product implements Serializable {

    String productId;
    String name;
    String vendor;

    public String getProductId() {
        return productId;
    }

    public void setProductId(String productId) {
        this.productId = productId;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getVendor() {
        return vendor;
    }

    public void setVendor(String vendor) {
        this.vendor = vendor;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Product product = (Product) o;

        if (getProductId() != null ? !getProductId().equals(product.getProductId()) : product.getProductId() != null)
            return false;
        if (getName() != null ? !getName().equals(product.getName()) : product.getName() != null) return false;
        return !(getVendor() != null ? !getVendor().equals(product.getVendor()) : product.getVendor() != null);

    }

    @Override
    public int hashCode() {
        int result = getProductId() != null ? getProductId().hashCode() : 0;
        result = 31 * result + (getName() != null ? getName().hashCode() : 0);
        result = 31 * result + (getVendor() != null ? getVendor().hashCode() : 0);
        return result;
    }
}

Our product has the incredible number of three properties: an alphanumeric product ID, a name and a vendor (just the vendor's name, to keep things simple). It is serialisable, and the getters, setters and the equals() & hashCode() methods are implemented using my IDE's code generation.

Alright, now on to creating a controller with a method to serve the GET requests. Go back to your favourite IDE and create the package demo.controller and a class called ProductsController with the following content:

package demo.controller;

import demo.model.Product;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping(value = "/api/products/", method = RequestMethod.GET)
public class ProductsController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        List<Product> products = new ArrayList<>();

        return products;
    }

}

This is really everything you need to provide a REST interface. Ok, at the moment, an empty list is returned, but it is that easy to define.

The last thing missing is an entry point for our application. Just create a class called ProductrApplication in the package demo and give it the following content:

package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * This is the entry point of our application
 */
@SpringBootApplication
public class ProductrApplication {

    public static void main (String... opts) {
        SpringApplication.run(ProductrApplication.class, opts);
    }

}

Spring Boot saves us a lot of keystrokes. @SpringBootApplication does a few things we would need for every web application anyway. This annotation is shorthand for the following ones:

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan

Now it is time to start our application for the first time. Thanks to Spring Boot’s maven plugin, which we configured in our pom.xml, starting the application is as easy as: mvn spring-boot:run. Just run this command in your project root directory. You prefer the lazy point-n-click way provided by your IDE? Alright, just instruct your favourite IDE to run ProductrApplication.

Once it is started, use a browser, a REST client (you should check out Postman, I love this tool) or a command line tool like curl. The address you are looking for is: http://localhost:8080/api/products/. So, with curl, the command looks like this:


curl http://localhost:8080/api/products/

Data please

Ok, returning an empty list isn’t that shiny, is it? So let’s bring in data.
In many projects, a classic relational database is overkill (and painful if you have to use it AND scale out). This may be one reason for the hype around NoSQL databases. One (in my opinion good) example is MongoDB.

Getting MongoDB up and running is pretty easy. On Linux you can use your package manager to install it. For Debian / Ubuntu, for example, simply do: sudo apt-get install mongodb.

For Mac, the easiest way is homebrew: brew install mongodb and follow the instructions in the “Caveats” section.

Windows users should go with the MongoDB installer (and fingers crossed).

Alright, we just got our data store sorted. It is about time to use it.
There is one particular Spring project dealing with data – called Spring Data. And by sheer coincidence a sub-project called Spring Data MongoDB is just waiting for us. Even more, Spring Boot provides a dependency package to get up to speed instantly. No wonder that the following few lines in the pom.xml‘s <dependencies> section are enough to bring in everything we need:


  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
  </dependency>

Now, create a new package called demo.domain and put a new interface called ProductRepository in it. Spring provides a pretty neat way to get rid of the code that is usually needed to interact with a data source. Most of the basic queries are generated by Spring Data – all you need to do is define an interface. A couple of query methods are available without even declaring method signatures. One example is the findAll() method, which will return all entries in the collection.
But hey, let's see it in action instead of talking about it. The aforementioned ProductRepository interface should look like this:

package demo.domain;

import demo.model.Product;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 * This interface lets Spring generate a whole Repository implementation for
 * Products.
 */
public interface ProductRepository extends MongoRepository<Product, String> {

}
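Should we later need more than the built-in queries, Spring Data can also derive queries from method names. A hypothetical extension (not needed for this tutorial; it additionally requires the java.util.List import) would look like this:

public interface ProductRepository extends MongoRepository<Product, String> {

    // Spring Data derives the MongoDB query from the method name alone
    List<Product> findByVendor(String vendor);

}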

Next, create a class called ProductService in the same package. The purpose of this class is to provide some useful methods to query products. For now, the code is as easy as this:

package demo.domain;

import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * This is a little service class we will let Spring inject later.
 */
@Service
public class ProductService {

    @Autowired
    private ProductRepository repository;

    public List<Product> getProducts() {
        return repository.findAll();
    }

}

See how we can use repository.findAll() without even defining it in the interface? Pretty slick, isn’t it? Especially if you are in a hurry and need to get things up quickly.

Alright, so far we prepared the foundation for the data access. I think it is time to wire it together. To do so, simply head back to our class demo.controller.ProductsController and modify it slightly. All we have to do is to inject our shiny new ProductService service and call its getProducts() method. The class will look like this afterwards:

package demo.controller;

import demo.domain.ProductService;
import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping("/api/products/")
public class ProductsController {

    // Let Spring DI inject the service for us
    @Autowired
    private ProductService productService;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        // Ask the data store for a list of products
        return productService.getProducts();
    }

}

That’s it. Start MongoDB (if not already running), start our application again (remember the mvn spring-boot:run thingy?!) and start another GET request to http://localhost:8080/api/products/:


$ curl http://localhost:8080/api/products/
[]

Wait, still an empty list? Yes, or do you remember us putting anything into the database? Let’s change this by using the following command:


mongo localhost/test --eval "db.product.insert({productId: 'a1234', name: 'Our First Product', vendor: 'ACME'})"

This adds one product called “Our First Product” to our database. Ok, so what is our service returning now? This:

$ curl http://localhost:8080/api/products/
[{"productId":"5657654426ed9d921affc3c0","name":"Our First Product","vendor":"ACME"}]

Easy, wasn’t it?!

Looking for a little more data but no time to create it yourself? Alright, it’s nearly Christmas, so take my little test selection:

curl https://gist.githubusercontent.com/daincredibleholg/f8667a26ce2f17776903/raw/ed9b4c8ec6c9c455dc063e833af2418648928ba6/quick-web-app-product-example.json | mongoimport -d test -c product --jsonArray

Basic requirements at your fingertips

In today's hectic times, and with the "microservice" culture spreading, it is getting harder and harder to keep an eye on what is really running on your servers or cloud environments. So in nearly all the environments I have worked in over the last few years, monitoring was a big thing. One common pattern is to provide health check endpoints. You can find everything from simple ping endpoints to health metrics returning a detailed overview of business-relevant numbers.
Most of the time, all of this is a copy-and-paste adventure and involves tackling a lot of boilerplate code. Here is all we have to do – simply add the following dependency to your pom.xml:


  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>

and restart the service. Let's have a look at what happens when we query http://localhost:8080/health:


$ curl http://localhost:8080/health
{"status":"UP","diskSpace":{"status":"UP","total":499088621568,"free":83261571072,"threshold":10485760},"mongo":{"status":"UP","version":"3.0.7"}}

This should provide sufficient data for a basic health check. If you followed the startup log messages, you probably spotted a number of other endpoints. Experiment a bit and check the Actuator documentation for more information.
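For instance, the /metrics endpoint, which the Actuator also provides in Spring Boot 1.x, returns a set of counters and gauges (memory usage, request counts and the like):

$ curl http://localhost:8080/metrics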

Show it to me

Ok, we got ourselves a REST service and some data. But we want to show this data to our users. So let’s go on and provide a page with an overview of our awesome products.

Thank Santa that there is a really active web UI community working on loads of nice and easy-to-use frontend frameworks and libraries. One pretty popular example is Bootstrap. It is easy to use, and all the needed bits and pieces are provided via open CDNs.

We want to have a short overview of our products, so a table view would be nice. Bootstrap Table will help us with that. It is built on top of Bootstrap and also available via CDNs. What a world we live in…

But wait, where to put our HTML file? Spring Boot makes it easy, again. Just create a folder called src/main/resources/static and create a new HTML file called index.html with the following content:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <title>Productr</title>

    <!-- Import Bootstrap CSS from CDNs -->
    <link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
    <link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/bootstrap-table/1.9.1/bootstrap-table.min.css">
</head>
<body>
<nav class="navbar navbar-inverse">
    <div class="container">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Productr</a>
        </div>
        <div id="navbar" class="collapse navbar-collapse">
            <ul class="nav navbar-nav">
                <li class="active"><a href="#">Home</a></li>
                <li><a href="#about">About</a></li>
                <li><a href="#contact">Contact</a></li>
            </ul>
        </div><!--/.nav-collapse -->
    </div>
</nav>
    <div class="container">
        <table data-toggle="table" data-url="/api/products/">
            <thead>
            <tr>
                <th data-field="productId">Product Reference</th>
                <th data-field="name">Name</th>
                <th data-field="vendor">Vendor</th>
            </tr>
            </thead>
        </table>
    </div>


<!-- Import Bootstrap, Bootstrap Table and JQuery JS from CDNs -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/bootstrap-table/1.9.1/bootstrap-table.min.js"></script>
</body>
</html>

This file isn't very complex. It is just an HTML file which includes the minimised CSS files from the CDNs. If you see a reference like //maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css for the first time: it is not a mistake that the protocol (http or https) is missing. A resource referenced that way will be loaded via the same protocol the main page was loaded with. Say, if you use http://localhost:8080/, it will use http: to load the CSS files.

The <body> block contains a navigation bar (using the HTML5 <nav> tag) and a table. The interesting part of this table definition is the provided data-url attribute. It is interpreted by Bootstrap Table to load the data. Our definition points to our previously created REST endpoint.
Which part of our JSON objects is used in which column is defined via the data-field attributes on the <th> definitions. Can you spot the matching attribute names?

Last but not least, we load the needed JavaScript libraries. All Bootstrap-related JavaScript functionality needs jQuery, so this is the first library to load, followed straight away by the main Bootstrap and Bootstrap Table JavaScript files. Each of these library files is loaded in its minimised version to keep download times to a minimum.

Where to go now

It is fair to say that we now have a really simple web application. Well, the main purpose of this article was to show you how to get up to speed with as little code as possible. You've seen that sometimes a single dependency in your POM file brings you a complete new feature, without the need for any additional line of code.
Take a step back, look at what we've built so far and think about the next steps needed. And just start to take a look around the Spring universe.

I think one of the most crucial next steps, besides adding the missing tests, is to bring in security. Check out Spring Security and its subproject Spring Security OAuth.
More interested in "classic" web pages? Check out Spring MVC and how easy it is to integrate quite sophisticated template engines (e.g. by following this guide).

Hopefully you enjoyed this article as much as I enjoyed its creation. I wish you all a merry Christmas, and if anyone wants to get in touch, you can find me e.g. on Twitter, G+ and LinkedIn.