Decorator Design Pattern using lambdas

With the advent of lambdas in Java we now have a new tool to better design our code. Of course the first step is using streams, method references and other neat features introduced in Java 8.

Going forward, I think the next step is to revisit the well-established Design Patterns and see them through a functional programming lens. For this purpose I’ll take the Decorator Pattern and implement it using lambdas.

We’ll take an easy and delicious example of the Decorator Pattern: adding toppings to pizza. Here is the standard implementation as suggested by GoF:

First we have the interface that defines our component:

public interface Pizza {
    String bakePizza();
}

We have a concrete component:

public class BasicPizza implements Pizza {
    public String bakePizza() {
        return "Basic Pizza";
    }
}

We decide that we have to decorate our component in different ways. We go with Decorator Pattern. This is the abstract decorator:

public abstract class PizzaDecorator implements Pizza {
    private final Pizza pizza;

    protected PizzaDecorator(Pizza pizza) {
        this.pizza = pizza;
    }

    public String bakePizza() {
        return pizza.bakePizza();
    }
}

We provide some concrete decorators for the component:

public class ChickenTikkaPizza extends PizzaDecorator {
    protected ChickenTikkaPizza(Pizza pizza) {
        super(pizza);
    }

    public String bakePizza() {
        return super.bakePizza() + " with chicken topping";
    }
}

public class ProsciuttoPizza extends PizzaDecorator {
    protected ProsciuttoPizza(Pizza pizza) {
        super(pizza);
    }

    public String bakePizza() {
        return super.bakePizza() + " with prosciutto";
    }
}

And this is the way to use the new structure:

Pizza pizza = new ChickenTikkaPizza(new BasicPizza());
String finishedPizza = pizza.bakePizza();   //Basic Pizza with chicken topping

pizza = new ChickenTikkaPizza(new ProsciuttoPizza(new BasicPizza()));
finishedPizza  = pizza.bakePizza();  //Basic Pizza with prosciutto with chicken topping

We can see that this can get very messy, and it did get very messy if we think about how we handle buffered readers in Java:

new DataInputStream(new BufferedInputStream(new FileInputStream(new File("myfile.txt"))))

Of course, you can split that across multiple lines, but that won’t solve the messiness; it will just spread it.
Now let’s see how we can do the same thing using lambdas.
We start with the same basic component objects:

public interface Pizza {
    String bakePizza();
}

public class BasicPizza implements Pizza {
    public String bakePizza() {
        return "Basic Pizza";
    }
}

But now instead of declaring an abstract class that will provide the template for decorations, we will create the decorator that asks the user for functions that will decorate the component.

public class PizzaDecorator {
    private final Function<Pizza, Pizza> toppings;

    private PizzaDecorator(Function<Pizza, Pizza>... desiredToppings) {
        this.toppings = Stream.of(desiredToppings)
                .reduce(Function.identity(), Function::andThen);
    }

    public static String bakePizza(Pizza pizza, Function<Pizza, Pizza>... desiredToppings) {
        return new PizzaDecorator(desiredToppings).bakePizza(pizza);
    }

    private String bakePizza(Pizza pizza) {
        return this.toppings.apply(pizza).bakePizza();
    }
}


There is this line that constructs the chain of decorations to be applied:

Stream.of(desiredToppings).reduce(identity(), Function::andThen);

This line of code will take your decorations (which are of Function type) and chain them using andThen. This is the same as

(currentToppings, nextTopping) -> currentToppings.andThen(nextTopping)

and it ensures that the functions are called one after another, in the order you provided.
Also, Function.identity() is simply the elem -> elem lambda expression.
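To see that ordering guarantee in isolation, here is a small, self-contained sketch (the class and topping strings are mine, for illustration only):

```java
import java.util.function.Function;
import java.util.stream.Stream;

public class AndThenOrderDemo {

    // Reduce an array of decorations into a single function, chained
    // left to right with andThen, exactly like the toppings above.
    @SafeVarargs
    public static Function<String, String> chain(Function<String, String>... steps) {
        return Stream.of(steps).reduce(Function.identity(), Function::andThen);
    }

    public static String demo() {
        Function<String, String> decorated =
                chain(s -> s + " with chicken", s -> s + " with prosciutto");
        // andThen applies the functions in the order they were provided:
        return decorated.apply("Basic Pizza"); // Basic Pizza with chicken with prosciutto
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Swapping the two lambdas in the chain call would swap the order of the toppings in the result, which is exactly why reduce with andThen preserves the caller’s ordering.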

OK, now where will we define our decorations? You can add them as static methods in PizzaDecorator or even in the interface:

public interface Pizza {
    String bakePizza();

    static Pizza withChickenTikka(Pizza pizza) {
        return new Pizza() {
            public String bakePizza() {
                return pizza.bakePizza() + " with chicken";
            }
        };
    }

    static Pizza withProsciutto(Pizza pizza) {
        return new Pizza() {
            public String bakePizza() {
                return pizza.bakePizza() + " with prosciutto";
            }
        };
    }
}

And now, this is how this pattern gets to be used:

String finishedPizza = PizzaDecorator.bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

// And if you statically import PizzaDecorator.bakePizza:

String finishedPizza = bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

As you can see, the code became clearer and more concise, and we didn’t use inheritance to build our decorators.

This is just one of the many design patterns that can be improved using lambdas. There are more features that can be used to improve the rest of them, such as using partial application (currying) to implement the Adapter Pattern.
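As a taste of what partial application can look like in Java (a rough sketch, not a full Adapter implementation; every name here is made up for illustration): fixing one argument of a two-argument function yields the single-argument shape a client expects.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class CurryingSketch {

    // "Curry" a two-argument function into a chain of one-argument functions.
    public static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static String demo() {
        BiFunction<String, Integer, String> format =
                (text, number) -> text + " #" + number;
        // Fixing the first argument adapts the two-argument function
        // to the one-argument Function interface a caller may require.
        Function<Integer, String> adapted = curry(format).apply("Order");
        return adapted.apply(42); // Order #42
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```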

I hope I got you thinking about adopting a more functional programming approach to your development style.

UPDATE: Here you can find a video walkthrough of this article, created by my friends from Webucator:

If you want to see more of their tutorials, you can visit the Webucator site.


The decorator example was inspired by Gang of Four – Decorate with Decorator Design Pattern article

The refactoring method was inspired by the following Devoxx 2015 talks (which I recommend watching as they treat the subject at large):
Design Pattern Reloaded by Remi Forax
Design Patterns in the Light of Lambda Expressions by Venkat Subramaniam


Kotlin for Android Developers

We Android developers are in a difficult situation regarding our language limitations. As you may know, current Android development only supports Java 6 (with some small improvements from Java 7), so every day we need to deal with a really old language that cuts our productivity and forces us to write tons of boilerplate and fragile code that is difficult to read and maintain.

Luckily, at the end of the day we’re running on a Java Virtual Machine, so technically anything that can run on a JVM can be used to develop Android apps. There are many languages that generate bytecode a JVM can execute, so some alternatives are becoming popular these days, and Kotlin is one of them.

What is Kotlin?

Kotlin is a language that runs on the JVM. It’s being created by JetBrains, the company behind powerful tools such as IntelliJ, one of the most famous IDEs for Java developers.

Kotlin is a really simple language. One of its main goals is to provide a powerful language with a simple and reduced syntax. Some of its features are:

  • It’s lightweight: this point is very important for Android. The library we need to add to our projects is as small as possible. In Android we have hard restrictions regarding method count, and Kotlin only adds around 6000 extra methods.
  • It’s interoperable: Kotlin is able to communicate with Java language seamlessly. This means we can use any existing Java library in our Kotlin code, so even though the language is young, we already have thousands of libraries we can work with. Besides, Kotlin code can also be used from Java code, which means we can create software that uses both languages. You can start writing new features in Kotlin and keep the rest of codebase in Java.
  • It’s a strongly-typed language: though you barely need to specify any types throughout the code, because the compiler is able to infer the types of variables or the return types of functions in almost every situation. So you get the best of both worlds: a concise and safe language.
  • It’s null safe: one of the biggest problems of Java is null. You can’t specify when a variable or parameter can be null, so lots of NullPointerExceptions happen, and they are really hard to detect while coding. Kotlin uses explicit nullity, which forces us to check for null when necessary.

Kotlin is currently in version 1.0.0 Beta 3, but we can expect the final version very soon. It’s quite ready for production anyway; there are already many companies successfully using it.

Why is Kotlin great for Android?

Basically because all its features fit perfectly well in the Android ecosystem. The library is small enough to let us work without ProGuard during development. Its size is equivalent to the support-v4 library, and there are some other libraries we use in almost every project that are even bigger.

Besides, Android Studio (the official Android IDE) is built on top of IntelliJ. This means our IDE has excellent support for working with this language. We can configure our project in seconds and keep using the IDE as we are used to. We can keep using Gradle and all the run and debug features the IDE provides. It’s literally the same as writing the app in Java.

And obviously, thanks to its interoperability, we can use the Android SDK without any problems from Kotlin code. In fact, some parts of the SDK are even easier to use, because the interoperability is intelligent: for instance, it maps getters and setters to Kotlin properties, and lets us write listeners as closures.

How to start using Kotlin in Android

It’s really easy. Just follow these steps:

  • Download Kotlin plugin from the IDE plugins sections
  • Create a Kotlin class in your module
  • Use the action “Configure Kotlin in Project…”
  • Enjoy

Some features

Kotlin has a lot of awesome features I won’t be able to explain here today. If you want to continue learning about it, you can check my blog and read my book. But today I’ll explain some interesting stuff that I hope makes you want more.

Null safety

As I mentioned before, Kotlin is null safe. If a type can be null, we need to specify it by adding a ? after the type. From that point on, every time we want to use a variable of that type, we need to check for null.

For instance, this code won’t compile:

var artist: Artist? = null
artist.print()


The second line will show an error, because the nullity wasn’t checked. We could do something like this:

if (artist != null) {
    artist.print()
}



This shows another great Kotlin feature: smart casting. If we’ve checked the type of a variable, we don’t need to cast it inside the scope of that check. So we can now use artist as a variable of type Artist inside the if. This works with any other check we may do (like after checking an instance’s type).

We have a simpler way to check for null, by using the safe call operator ?. before calling a function on the object. And we can even provide an alternative by using the Elvis operator ?:

val name = artist?.name ?: ""

Data classes

In Java, if we want to create a data class, or POJO class (a class that only stores some state), we’d need to create a class with lots of fields, getters and setters, and probably toString and equals methods:

public class Artist {
    private long id;
    private String name;
    private String url;
    private String mbid;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getMbid() {
        return mbid;
    }

    public void setMbid(String mbid) {
        this.mbid = mbid;
    }

    @Override public String toString() {
        return "Artist{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", url='" + url + '\'' +
                ", mbid='" + mbid + '\'' +
                '}';
    }
}
In Kotlin, all the previous code can be substituted by this:

data class Artist (

    var id: Long,
    var name: String,
    var url: String,
    var mbid: String)

Kotlin uses properties instead of fields. A property is basically a field plus its getter and setter. We can declare those properties directly in the constructor, which, as you can see, is defined right after the name of the class, saving us some lines if we are not modifying the incoming values.

The data modifier provides some extra features: a readable toString(), an equals() based on the properties defined in the constructor, a copy function, and even a set of component functions that let us split an object into variables. Something like this:

val (id, name, url, mbid) = artist


Interoperability

We have some great interoperability features that help a lot in Android. One of them is the mapping of interfaces with a single method to a lambda. So a click listener like this one:

view.setOnClickListener(object : View.OnClickListener {
    override fun onClick(v: View) {
        toast("Click")
    }
})



can be converted into this:

view.setOnClickListener { toast("Click") }

Besides, getters and setters are mapped automatically to properties. This doesn’t add any kind of overhead, because the bytecode will in fact just call the original getters and setters. These are some examples:

supportActionBar.title = title
textView.text = title
contactsList.adapter = ContactsAdapter()


Lambdas

Lambdas will save tons of code, but the important thing is that they will let us do things that are impossible (or too verbose) without them. With them we can start thinking in a more functional way. A lambda is simply a way to specify a type that defines a function. For instance, we can define a variable like this:

val listener: (View) -> Boolean

This is a variable that can hold a function which receives a View and returns a Boolean. A closure is the way we define what the function will do:

val listener = { view: View -> view is TextView }

The previous function will get a View and return true if the view is an instance of TextView. As the compiler is able to infer the type, we don’t need to specify it. Though we can be more explicit if we want:

val listener: (View) -> Boolean = { view -> view is TextView }

With lambdas, we can avoid the use of callback interfaces. We can just set the function we want to be called after an operation finishes:

fun asyncOperation(value: Int, callback: (Boolean) -> Unit) {
    ...
}


asyncOperation(5) { result -> println("result: $result") }

But there is a simpler alternative: if a function only has one parameter, we can use the implicit name it:

asyncOperation(5) { println("result: $it") }


Collections

Collections in Kotlin are really powerful. They are written on top of Java collections, which means that when we get a result from any Java library (or the Android SDK, for instance), we are still able to use all the functions Kotlin provides.

The available collections we have are:

  • Iterable
  • Collection
  • List
  • Set
  • Map

And we can apply a lot of operations to them. These are a few of them:

  • filter
  • sort
  • map
  • zip
  • dropWhile
  • first
  • firstOrNull
  • last
  • lastOrNull
  • fold

You may see the complete set of operations in this article. So a complex operation, such as a filter, a sort and a transformation, can be quite explicitly defined:

contacts
    .filter { it.name != null && it.image != null }
    .sortedBy { it.name }
    .map { Contact(it.id, it.name!!, it.image!!) }

We can define new immutable lists in a simple way:

val list = listOf(1, 2, 3, 4, 5)

Or if we want it to be mutable (we can add and remove items), we have a very nice way to access and modify the items, the same way we’d do with an array:

mutableList[0] = 1
val first = mutableList[0]

And the same thing with maps:

map["key"] = 1
val value = map["key"]

This is possible because we can overload some basic operators when implementing our own classes.

Extension functions

Extension functions let us add extra behaviour to classes we can’t modify, because they belong to a library or an SDK, for instance.

We could create an inflate() function for ViewGroup class:

fun ViewGroup.inflate(layoutRes: Int): View {
    return LayoutInflater.from(context).inflate(layoutRes, this, false)
}

And from now on, we can just use it as any other method:

val v = parent.inflate(R.layout.view_item)

Or even a loadUrl function for ImageView. We can make use of the Picasso library inside the function:

fun ImageView.loadUrl(url: String) {
    Picasso.with(context).load(url).into(this)
}

All ImageViews can use this function now:



Interfaces

Interfaces in Kotlin can contain code, which gives us a simple form of multiple inheritance. A class can be composed of the code of several interfaces, not just a parent class. Interfaces can’t, however, keep state. So if we define a property in an interface, the class that implements it must override that property and provide a value.

An example could be a ToolbarManager class that will deal with the Toolbar:

interface ToolbarManager {

    val toolbar: Toolbar

    fun initToolbar() {
        toolbar.setOnMenuItemClickListener {
            when (it.itemId) {
                R.id.action_settings -> App.instance.toast("Settings")
                else -> App.instance.toast("Unknown option")
            }
            true
        }
    }
}

This interface can be used by all the activities or fragments that use a Toolbar:

class MainActivity : AppCompatActivity(), ToolbarManager {

    override val toolbar by lazy { find<Toolbar>(R.id.toolbar) }

    override fun onCreate(savedInstanceState: Bundle?) {
        ...
        initToolbar()
    }
}

When expression

when is the alternative to switch in Java, but much more powerful. It can literally check anything. A simple example:

val cost = when (x) {
    in 1..10 -> "cheap"
    in 10..100 -> "regular"
    in 100..1000 -> "expensive"
    in specialValues -> "special value!"
    else -> "not rated"
}

We can check that a number is inside a range, or even inside a collection (specialValues is a list). But if we don’t pass a parameter to when, we can just check whatever we need. Something as crazy as this would be possible:

val res = when {
    x in 1..10 -> "cheap"
    s.contains("hello") -> "it's a welcome!"
    v is ViewGroup -> "child count: ${v.getChildCount()}"
    else -> ""
}

Kotlin Android Extensions

This is another tool the Kotlin team provides for Android developers. It can read an XML layout and inject a set of properties into an activity, fragment or view, with the views inside the layout already cast to their proper types.

If we have this layout:




We just need to add this synthetic import:


And from that moment, we can use the views in our Activity:

override fun onCreate(savedInstanceState: Bundle?) {
    ...
    welcomeText.setText("I'm a welcome text!!")
}

It’s that simple.


Anko

Anko is a library the Kotlin team is developing to simplify Android development. Its main goal is to provide a DSL to declare views using Kotlin code:

verticalLayout {
    val name = editText()
    button("Say Hello") {
        onClick { toast("Hello, ${name.text}!") }
    }
}

But it includes many other useful things. For instance, a great way to navigate to other activities:

startActivity<DetailActivity>("id" to item.id, "name" to item.name)

It just receives a set of Pairs and adds them to a bundle when creating the intent used to navigate to the activity (specified as the type parameter of the function).

We also have direct access to system services:


Or easy ways to create toasts and alerts:

longToast("Wow, such a duration")

alert("Yes /no Alert") {
    positiveButton("Yes") { submit() }
    negativeButton("No") {}
}.show()

And one I love: a simple DSL to deal with asynchrony:

async {
    val result = longRequest()
    uiThread { bindForecast(result) }
}

It also provides a good set of tools to work with SQLite and cursors. The ManagedSQLiteOpenHelper provides a use method which will receive the database and can call directly to its functions:

dbHelper.use {
    select("TABLE_NAME").where("_id = {id}", "id" to 20)
}

As you can see, it has a nice select DSL, but also a simple create function:

db.createTable("TABLE_NAME", true,
        "_id" to INTEGER + PRIMARY_KEY,
        "name" to TEXT)

When you are dealing with a cursor, you can make use of some extension functions such as parseList, parseOpt or parseClass, that will help with parsing the result.


As you can see, Kotlin simplifies Android development in many different ways. It will boost your productivity and let you solve the usual problems in a very different and simpler way.

My recommendation is that you at least try it and play a little with it. It’s a really fun language and very easy to learn. If you think this language is for you, you can continue learning it with the Kotlin for Android Developers book.

Composing Multiple Async Results via an Applicative Builder in Java 8

A few months ago, I put out a publication where I explained in detail an abstraction I came up with named Outcome, which helped me A LOT to code without side-effects by enforcing the use of semantics. By following this simple (and yet powerful) convention, I ended up turning any kind of failure (a.k.a. Exception) into an explicit result from a function, making everything much easier to reason about. I don’t know about you, but I was tired of dealing with exceptions that tore everything down, so I did something about it, and to be honest, it worked really well. So before I keep going with my tales from the trenches, I really recommend going over that post. Now let’s solve some asynchronous issues by using eccentric applicative ideas, shall we?

Something wicked this way comes

Life was real good; our coding was fast-paced, cleaner and more composable than ever. But, out of the blue, we stumbled upon a “missing” feature (evil laughs please): we needed to combine several asynchronous Outcome instances in a non-blocking fashion…


Excited by the idea, I got down to work. I experimented for a fair amount of time, seeking a robust and yet simple way of expressing these kinds of situations; while the new CompletableFuture API turned out to be much nicer than I expected (though I still don’t understand why they decided to use names like applyAsync or thenComposeAsync instead of map or flatMap), I always ended up with implementations too verbose and repetitive compared to some stuff I did with Scala. But after some long “mate” sessions, I had my “Hey!” moment: why not use something similar to an applicative?

The problem

Suppose that we have these two asynchronous results

and a silly entity called Message

I need something that, given textf and numberf, will give me back something like this:

//After combining textf and numberf
CompletableFuture<Outcome<Message>> message = ....

So I wrote a letter to Santa Claus:

  1. I want to asynchronously format the string returned by textf using the number returned by numberf, only when both values are available, meaning that both futures completed successfully and neither of the outcomes failed. Of course, we need to be non-blocking.
  2. In case of failures, I want to collect all failures that took place during the execution of textf and/or numberf and return them to the caller, again, without blocking at all.
  3. I don’t want to be constrained by the number of values to be combined; it must be capable of handling a fair amount of asynchronous results. Did I say without blocking? There you go…
  4. Not die during the attempt.
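Before getting to the applicative idea, the essence of wishes 1 and 2 can be sketched with nothing but CompletableFuture.thenCombine and a simplified, hypothetical Outcome stand-in (the real Outcome type from the earlier post is richer than this, and every name below is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.BiFunction;

public class CombineSketch {

    // Bare-bones, hypothetical stand-in for Outcome: a value or failures.
    public static final class Outcome<T> {
        final T value;
        final List<String> failures;

        private Outcome(T value, List<String> failures) {
            this.value = value;
            this.failures = failures;
        }

        static <T> Outcome<T> ok(T value) {
            return new Outcome<>(value, Collections.<String>emptyList());
        }

        static <T> Outcome<T> failure(String reason) {
            return new Outcome<>(null, Collections.singletonList(reason));
        }

        boolean isOk() {
            return failures.isEmpty();
        }
    }

    // Combine two async outcomes without blocking: apply f only when both
    // succeeded; otherwise collect the failures from both sides.
    public static <A, B, R> CompletableFuture<Outcome<R>> combine(
            CompletableFuture<Outcome<A>> fa,
            CompletableFuture<Outcome<B>> fb,
            BiFunction<A, B, R> f) {
        return fa.thenCombine(fb, (oa, ob) -> {
            if (oa.isOk() && ob.isOk()) {
                return Outcome.ok(f.apply(oa.value, ob.value));
            }
            List<String> all = new ArrayList<>(oa.failures);
            all.addAll(ob.failures);
            return new Outcome<R>(null, all);
        });
    }

    public static String demo() {
        CompletableFuture<Outcome<String>> textf =
                CompletableFuture.completedFuture(Outcome.ok("Hello %d"));
        CompletableFuture<Outcome<Integer>> numberf =
                CompletableFuture.completedFuture(Outcome.ok(42));
        return combine(textf, numberf, (text, number) -> String.format(text, number))
                .join().value;
    }
}
```

This handles exactly two values, which is where wish 3 bites: chaining pairwise combinations gets verbose fast, and that is the problem the applicative builder below sets out to solve.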


Applicative  builder to the rescue

If you think about it, one simple way to put what we’re trying to achieve is as follows:

// Given a String -> Given a number -> Format the message
f: String -> Integer -> Message

Checking the definition of f, it is saying something like: “Given a String, I will return a function that takes an Integer as a parameter, which, when applied, will return an instance of type Message”. This way, instead of waiting for all values to be available at once, we can partially apply one value at a time, getting an actual description of the construction process of a Message instance. That sounded great.
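Encoded directly in Java’s types, that f is just a Function returning a Function (Message here is a trivial placeholder for the entity mentioned above):

```java
import java.util.function.Function;

public class CurriedMessage {

    // Trivial placeholder for the Message entity.
    public static final class Message {
        final String text;
        Message(String text) { this.text = text; }
    }

    // f: String -> Integer -> Message, applied one value at a time.
    static final Function<String, Function<Integer, Message>> f =
            text -> number -> new Message(String.format(text, number));

    public static String demo() {
        // Partially apply the String now; supply the Integer later.
        Function<Integer, Message> awaitingNumber = f.apply("Hi %d");
        return awaitingNumber.apply(7).text; // Hi 7
    }
}
```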

To achieve that, it would be really awesome if we could take the construction lambda Message::new and curry it, boom!, done! But in Java that’s impossible (to do in a generic, beautiful and concise way), so for the sake of our example I decided to go with our beloved Builder pattern, which kind of does the job:

And here’s the WannabeApplicative<T> definition

public interface WannabeApplicative<V> {
    V apply();
}

Disclaimer: for those functional freaks out there, this is not an applicative per se. I’m aware of that, but I took some ideas from it and adapted them according to the tools the language offered me out of the box. So, if you’re feeling curious, go check this post for a more formal example.

If you’re still with me, we could agree that we’ve done nothing too complicated so far. But now we need to express a building step, which, remember, needs to be non-blocking and capable of combining any previous failures that might have taken place in other executions with potentially new ones. In order to do that, I came up with something as follows:

First of all, we’ve got two functional interfaces: one is Partial<B>, which represents a lazy application of a value to a builder, and the second one, MergingStage<B,V>, represents the “how” to combine both the builder and the value. Then we’ve got a method called value that, given an instance of type CompletableFuture<Outcome<V>>, will return an instance of type MergingStage<B,V>, and believe it or not, here’s where the magic takes place. If you remember the MergingStage definition, you’ll see it’s a BiFunction, where the first parameter is of type Outcome<B> and the second one is of type Outcome<V>. Now, if you follow the types, you can tell that we’ve got two things: the partial state of the building process on one side (type parameter B) and a new value that needs to be applied to the current state of the builder (type parameter V), so that, when applied, it will generate a new builder instance with the “next state in the building sequence”, which is represented by Partial<B>. Last but not least, we’ve got the stickedTo method, which basically is an (awful Java) hack to stick to a specific applicative type (builder) while defining a building step. For instance, having:
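The original listings did not survive in this copy of the article, but based on the description above, the two interfaces can be sketched roughly like this (the names follow the text; the exact signatures and the real Outcome type in the author’s code certainly differ from this guess):

```java
import java.util.function.BiFunction;

public class ApplicativeSketch {

    // Minimal stand-in for the Outcome type from the earlier post.
    public static final class Outcome<T> {
        final T value;
        Outcome(T value) { this.value = value; }
    }

    // A lazy application of a value to a builder: given the current
    // builder state, produce the next one.
    @FunctionalInterface
    public interface Partial<B> {
        B apply(B builder);
    }

    // How to merge the partial building state (Outcome<B>) with a newly
    // available value (Outcome<V>) into the next lazy building step.
    @FunctionalInterface
    public interface MergingStage<B, V>
            extends BiFunction<Outcome<B>, Outcome<V>, Partial<B>> {
    }

    // Tiny demonstration with B = StringBuilder and V = String: nothing
    // happens until the resulting Partial is finally applied to a builder.
    public static String demo() {
        MergingStage<StringBuilder, String> merge =
                (builderOutcome, valueOutcome) -> sb -> sb.append(valueOutcome.value);
        Partial<StringBuilder> step = merge.apply(
                new Outcome<>(new StringBuilder()), new Outcome<>("step"));
        return step.apply(new StringBuilder("lazy-")).toString(); // lazy-step
    }
}
```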

I can define partial value applications to any Builder instance as follows:

See that we haven’t built anything yet; we just described what we want to do with each value when the time comes. We might want to perform some validations before using the new value (here’s where Outcome plays an important role) or just use it as it is; it’s really up to us, but the main point is that we haven’t applied anything yet. In order to do so, and to finally tie up all loose ends, I came up with another definition, which looks as follows:

Hope it’s not too overwhelming, but I’ll try to break it down as clearly as possible. In order to start specifying how you’re going to combine the whole thing together, you start by calling begin with an instance of type WannabeApplicative<V>, which, in our case, has type parameter V equal to Builder.

FutureCompositions<Message, Builder> ab = begin(Message.applicative())

See that, after you invoke begin, you get a new instance of FutureCompositions with a lazily evaluated partial state inside of it, making it the one and only owner of the whole building process state. That was the ultimate goal of everything we’ve done so far: to fully gain control over when and how things get combined. Next, we must specify the values that we want to combine, and that’s what the binding method is for:


This is how we supply our builder instance with all the values that need to be merged together, along with the specification of what’s supposed to happen with each one of them, by using our previously defined Partial instances. Also see that everything is still lazily evaluated: nothing has happened yet, but we’ve stacked all the “steps” until we finally decide to materialize the result, which happens when you call perform.

CompletableFuture<Outcome<Message>> message = ab.perform();

From that very moment everything will unfold: each building stage will get evaluated, where failures can be returned and collected within an Outcome instance, or the newly available values will be supplied to the target builder instance; one way or the other, all steps will be executed until there’s nothing left to be done. I will try to depict what just happened as follows:


If you pay attention to the left side of the picture, you can easily see how each step gets “defined” as I showed before, following the “declaration” arrow direction, that is, how you actually described the building process. From the moment you call perform, each applicative instance (remember, Builder in our case) will be lazily evaluated in the opposite direction: it will start by evaluating the last specified stage in the stack, which will then proceed to evaluate the next one and so forth, up to the point where we reach the “beginning” of the building definition, where it will start to unfold or roll out the evaluation of each step back up to the top, collecting everything it can by using the MergingStage specification.

And this is just the beginning….

I’m sure a lot could be done to improve this idea, for example:

  • The two consecutive calls to dependingOn at CompositionSources.values() suck; too verbose for my taste. I must do something about it.
  • I’m not sure about continuing to pass Outcome instances to a MergingStage; it would look cleaner and easier if we unwrapped the values to be merged before invoking it and just returned Either<Failure,V> instead. This would reduce complexity and increase flexibility regarding what’s supposed to happen behind the scenes.
  • Though using the Builder pattern did the job, it feels old-school. I would love to be able to easily curry constructors, so on my to-do list is checking whether jOOλ or Javaslang has something to offer on that matter.
  • Better type inference, so that any unnecessary noise gets removed from the code. For example, the stickedTo method really is a code smell, something I hated from the start. I definitely need more time to figure out an alternative way to infer the applicative type from the definition itself.

You’re more than welcome to send me any suggestions and comments you might have. Cheers, and remember…





Reactive Development Using Vert.x

Lately, it seems like we’re always hearing about the latest and greatest frameworks for Java: tools like Ninja, SparkJava, and Play. But each one is opinionated and makes you feel like you need to redesign your entire application to make use of their wonderful features. That’s why I was so relieved when I discovered Vert.x. Vert.x isn’t a framework, it’s a toolkit: it’s unopinionated and it’s liberating. Vert.x doesn’t want you to redesign your entire application to make use of it; it just wants to make your life easier. Can you write your entire application in Vert.x? Sure! Can you add Vert.x capabilities to your existing Spring/Guice/CDI applications? Yep! Can you use Vert.x inside of your existing Java EE applications? Absolutely! And that’s what makes it amazing.


Vert.x was born when Tim Fox decided that he liked a lot of what was being developed in the NodeJS ecosystem, but he didn’t like some of the trade-offs of working in V8: Single-threadedness, limited library support, and JavaScript itself. Tim set out to write a toolkit which was unopinionated about how and where it is used, and he decided that the best place to implement it was on the JVM. So, Tim and the community set out to create an event-driven, non-blocking, reactive toolkit which in many ways mirrored what could be done in NodeJS, but also took advantage of the power available inside of the JVM. Node.x was born and it later progressed to become Vert.x.


Vert.x is designed to implement an event bus, which is how different parts of the application can communicate in a non-blocking/thread-safe manner. Parts of it were modeled after the Actor methodology exhibited by Erlang and Akka. It is also designed to take full advantage of today’s multi-core processors and highly concurrent programming demands. By default, all Vert.x VERTICLES are implemented as single-threaded. Unlike NodeJS though, Vert.x can run MANY verticles in MANY threads. Additionally, you can specify that some verticles are “worker” verticles and CAN be multi-threaded. And to really add some icing on the cake, Vert.x has low-level support for multi-node clustering of the event bus via the use of Hazelcast. It has gone on to include many other amazing features which are too numerous to list here, but you can read more in the official Vert.x docs.

The first thing you need to know about Vert.x is: similar to NodeJS, never block the current thread. Everything in Vert.x is set up, by default, to use callbacks/futures/promises. Instead of doing synchronous operations, Vert.x provides async methods for doing most I/O and processor-intensive operations which might block the current thread. Now, callbacks can be ugly and painful to work with, so Vert.x optionally provides an API based on RxJava which implements the same functionality using the Observer pattern. Finally, Vert.x makes it easy to use your existing classes and methods by providing the executeBlocking(Function f) method on many of its asynchronous APIs. This means you can choose how you prefer to work with Vert.x instead of the toolkit dictating how it must be used.

The second thing to know about Vert.x is that it is composed of verticles, modules, and nodes. Verticles are the smallest unit of logic in Vert.x, and are usually represented by a single class. Verticles should be simple and single-purpose, following the UNIX philosophy. A group of verticles can be put together into a module, which is usually packaged as a single JAR file. A module represents a group of related functionality which, taken together, could represent an entire application or just a portion of a larger distributed application. Lastly, nodes are single instances of the JVM which are running one or more modules/verticles. Because Vert.x has clustering built in from the ground up, Vert.x applications can span nodes either on a single machine or across multiple machines in multiple geographic locations (though latency can hinder performance).

Example Project

Now, I’ve been to a number of Meetups and conferences lately where the first thing they show you when talking about reactive programming is to build a chat room application. That’s all well and good, but it doesn’t really help you to completely understand the power of reactive development. Chat room apps are simple and simplistic. We can do better. In this tutorial, we’re going to take a legacy Spring application and convert it to take advantage of Vert.x. This has multiple purposes: It shows that the toolkit is easy to integrate with existing Java projects, it allows us to take advantage of existing tools which may be entrenched parts of our ecosystem, and it also lets us follow the DRY principle in that we don’t have to rewrite large swathes of code to get the benefits of Vert.x.

Our legacy Spring application is a contrived simple example of a REST API using Spring Boot, Spring Data JPA, and Spring REST. The source code can be found in the “master” branch HERE. There are other branches which we will use to demonstrate the progression as we go, so it should be simple for anyone with a little experience with git and Java 8 to follow along. Let’s start by examining the Spring Configuration class for the stock Spring application.

@Slf4j
@Configuration
@EnableAutoConfiguration
@ComponentScan
@EnableJpaRepositories
@EnableTransactionManagement
public class Application {

    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);

        System.out.println("Let's inspect the beans provided by Spring Boot:");

        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setGenerateDdl(true);

        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setDataSource(dataSource());
        factory.afterPropertiesSet();

        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(final EntityManagerFactory emf) {
        final JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(emf);
        return txManager;
    }
}
As you can see at the top of the class, we have some pretty standard Spring Boot annotations. You’ll also see an @Slf4j annotation, which is part of the lombok library and is designed to help reduce boilerplate code. We also have @Bean annotated methods for providing access to the JPA EntityManager, the TransactionManager, and the DataSource. Each of these methods provides an injectable object for the other classes to use. The remaining classes in the project are similarly simplistic. There is a Customer POJO which is the Entity type used in the service. There is a CustomerDAO which is created via Spring Data. Finally, there is a CustomerEndpoints class which is the JAX-RS annotated REST controller.

As explained earlier, this is all standard fare in a Spring Boot application. The problem with this application is that, for the most part, it has limited scalability. You would either run this application inside of a Servlet container, or with an embedded server like Jetty or Undertow. Either way, each request ties up a thread and thus wastes resources while it waits for I/O operations.

Switching over to the Convert-To-Vert.x-Web branch, we can see that the Application class has changed a little. We now have some new @Bean annotated methods for injecting the Vertx instance itself, as well as an instance of ObjectMapper (part of the Jackson JSON library). We have also replaced the CustomerEndpoints class with a new CustomerVerticle. Pretty much everything else is the same.

The CustomerVerticle class is annotated with @Component, which means that Spring will instantiate that class on startup. It also has its start method annotated with @PostConstruct so that the verticle is launched on startup. Looking at the actual content of the code, we see our first bit of Vert.x code: Router.

The Router class is part of the vertx-web library and allows us to use a fluent API to define HTTP URLs, methods, and header filters for our request handling. Adding the BodyHandler instance to the default route allows a POST/PUT body to be processed and converted to a JSON object which Vert.x can then process as part of the RoutingContext. The order of routes in Vert.x CAN be significant. If you define a route which has some sort of glob matching (* or regex), it can swallow requests for routes defined after it unless you implement chaining. Our example shows 3 routes initially.

    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        // The 3 initial routes (the paths shown here are illustrative)
        router.get("/v1/customer/:id").produces("application/json").blockingHandler(this::getCustomerById);
        router.put("/v1/customer").consumes("application/json").produces("application/json").blockingHandler(this::addCustomer);
        router.get("/v1/customer").produces("application/json").blockingHandler(this::getAllCustomers);
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }

Notice that the HTTP method is defined, the “Accept” header is defined (via consumes), and the “Content-Type” header is defined (via produces). We also see that we are passing the handling of the request off via a call to the blockingHandler method. A blocking handler for a Vert.x route accepts a RoutingContext object as its only parameter. The RoutingContext holds the Vert.x Request object, Response object, and any parameters/POST body data (like “:id”). You’ll also see that I used method references rather than lambdas to insert the logic into the blockingHandler (I find it more readable). Each handler for the 3 request routes is defined in a separate method further down in the class. These methods basically just call the methods on the DAO, serialize or deserialize as needed, set some response headers, and end() the request by sending a response. Overall, pretty simple and straightforward.

    private void addCustomer(RoutingContext rc) {
        try {
            String body = rc.getBodyAsString();
            Customer customer = mapper.readValue(body, Customer.class);
            Customer saved = dao.save(customer);
            if (saved != null) {
                rc.response()
                        .putHeader("Content-Type", "application/json")
                        .setStatusCode(201)
                        .end(mapper.writeValueAsString(saved));
            } else {
                rc.response().setStatusMessage("Bad Request").setStatusCode(400).end("Bad Request");
            }
        } catch (IOException e) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", e);
        }
    }
    private void getCustomerById(RoutingContext rc) {
        log.info("Request for single customer");
        Long id = Long.parseLong(rc.request().getParam("id"));
        try {
            Customer customer = dao.findOne(id);
            if (customer == null) {
                rc.response().setStatusMessage("Not Found").setStatusCode(404).end("Not Found");
            } else {
                rc.response()
                        .putHeader("Content-Type", "application/json")
                        .end(mapper.writeValueAsString(customer));
            }
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

    private void getAllCustomers(RoutingContext rc) {
        log.info("Request for all customers");
        List<Customer> customers = StreamSupport.stream(dao.findAll().spliterator(), false)
                .collect(Collectors.toList());
        try {
            rc.response().putHeader("Content-Type", "application/json").end(mapper.writeValueAsString(customers));
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

“But this is more code and messier than my Spring annotations and classes”, you might say. That CAN be true, but it really depends on how you implement the code. This is meant to be an introductory example, so I left the code very simple and easy to follow. I COULD use an annotation library for Vert.x to implement the endpoints in a manner similar to JAX-RS. In addition, we have gained a massive scalability improvement. Under the hood, Vert.x Web uses Netty for low-level asynchronous I/O operations, thus providing us the ability to handle MANY more concurrent requests (limited by the size of the database connection pool).

We’ve already made some improvement to the scalability and concurrency of this application by using the Vert.x Web library, but we can improve things a little more by implementing the Vert.x EventBus. By separating the database operations into Worker Verticles instead of using blockingHandler, we can handle request processing more efficiently. This is show in the Convert-To-Worker-Verticles branch. The application class has remained the same, but we have changed the CustomerEndpoints class and added a new class called CustomerWorker. In addition, we added a new library called Spring Vert.x Extension which provides Spring Dependency Injections support to Vert.x Verticles. Start off by looking at the new CustomerEndpoints class.

    @PostConstruct
    public void start() throws Exception {
        log.info("Successfully create CustomerVerticle");
        DeploymentOptions deployOpts = new DeploymentOptions().setWorker(true).setMultiThreaded(true).setInstances(4);
        vertx.deployVerticle("java-spring:com.zanclus.verticles.CustomerWorker", deployOpts, res -> {
            if (res.succeeded()) {
                Router router = Router.router(vertx);
                router.route().handler(BodyHandler.create());
                // Paths are illustrative; a fresh DeliveryOptions per request avoids header build-up
                router.get("/v1/customer/:id")
                        .handler(rc -> {
                            final DeliveryOptions opts = new DeliveryOptions();
                            opts.addHeader("method", "getCustomer")
                                    .addHeader("id", rc.request().getParam("id"));
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                router.post("/v1/customer")
                        .handler(rc -> {
                            final DeliveryOptions opts = new DeliveryOptions();
                            opts.addHeader("method", "addCustomer");
                            vertx.eventBus().send("com.zanclus.customer", rc.getBodyAsJson(), opts, reply -> handleReply(reply, rc));
                        });
                router.get("/v1/customer")
                        .handler(rc -> {
                            final DeliveryOptions opts = new DeliveryOptions();
                            opts.addHeader("method", "getAllCustomers");
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                vertx.createHttpServer().requestHandler(router::accept).listen(8080);
            } else {
                log.error("Failed to deploy worker verticles.", res.cause());
            }
        });
    }

The routes are the same, but the implementation code is not. Instead of using calls to blockingHandler, we have now implemented proper async handlers which send out events on the event bus. None of the database processing is happening in this Verticle anymore. We have moved the database processing to a Worker Verticle which has multiple instances to handle multiple requests in parallel in a thread-safe manner. We are also registering a callback for when those events are replied to so that we can send the appropriate response to the client making the request. Now, in the CustomerWorker Verticle we have implemented the database logic and error handling.

@Override
public void start() throws Exception {
    vertx.eventBus().consumer("com.zanclus.customer", this::handleDatabaseRequest);
}

public void handleDatabaseRequest(Message<Object> msg) {
    String method = msg.headers().get("method");

    DeliveryOptions opts = new DeliveryOptions();
    try {
        String retVal;
        switch (method) {
            case "getAllCustomers":
                retVal = mapper.writeValueAsString(dao.findAll());
                msg.reply(retVal, opts);
                break;
            case "getCustomer":
                Long id = Long.parseLong(msg.headers().get("id"));
                retVal = mapper.writeValueAsString(dao.findOne(id));
                msg.reply(retVal, opts);
                break;
            case "addCustomer":
                retVal = mapper.writeValueAsString(
                        dao.save(mapper.readValue(
                                ((JsonObject) msg.body()).encode(), Customer.class)));
                msg.reply(retVal, opts);
                break;
            default:
                log.error("Invalid method '" + method + "'");
                opts.addHeader("error", "Invalid method '" + method + "'");
                msg.fail(1, "Invalid method"); // failure code is arbitrary
        }
    } catch (IOException | NullPointerException e) {
        log.error("Problem parsing JSON data.", e);
        msg.fail(2, e.getLocalizedMessage());
    }
}
The CustomerWorker worker verticles register a consumer for messages on the event bus. The string which represents the address on the event bus is arbitrary, but it is recommended to use a reverse-tld style naming structure so that it is simple to ensure that the addresses are unique (“com.zanclus.customer”). Whenever a new message is sent to that address, it will be delivered to one, and only one, of the worker verticles. The worker verticle then calls handleDatabaseRequest to do the database work, JSON serialization, and error handling.

There you have it. You’ve seen that Vert.x can be integrated into your legacy applications to improve concurrency and efficiency without having to rewrite the entire application. We could have done something similar with an existing Google Guice or JavaEE CDI application. All of the business logic could remain relatively untouched while we used Vert.x to add reactive capabilities. The next steps are up to you. Some ideas for where to go next include clustering, WebSockets, and VertxRx for ReactiveX sugar.

How jOOQ Allows for Fluent Functional-Relational Interactions in Java 8

In this year’s Java Advent Calendar, we’re thrilled to have been asked to feature a mini-series showing you a couple of advanced and very interesting topics that we’ve been working on when developing jOOQ.

The series consists of:

Don’t miss any of these!

How jOOQ allows for fluent functional-relational interactions in Java 8

In yesterday’s article, we’ve seen How jOOQ Leverages Generic Type Safety in its DSL when constructing SQL statements. Much more interesting than constructing SQL statements, however, is executing them.

Yesterday, we’ve seen a sample PL/SQL block that reads like this:

FOR rec IN (
  SELECT first_name, last_name FROM customers
  UNION
  SELECT first_name, last_name FROM staff
)
LOOP
  INSERT INTO people (first_name, last_name)
  VALUES (rec.first_name, rec.last_name);
END LOOP;

And you won’t be surprised to see that the exact same thing can be written in Java with jOOQ:

for (Record2<String, String> rec : dsl.select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
        .union(select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF))
) {
    dsl.insertInto(PEOPLE, PEOPLE.FIRST_NAME, PEOPLE.LAST_NAME)
       .values(rec.getValue(CUSTOMERS.FIRST_NAME), rec.getValue(CUSTOMERS.LAST_NAME))
       .execute();
}

This is a classic, imperative-style, PL/SQL-inspired approach to iterating over result sets and performing an action on each record, one at a time.

Java 8 changes everything!

With Java 8, lambdas appeared, and much more importantly, Streams did, along with tons of other useful features. The simplest way to migrate the above foreach loop to Java 8’s “callback hell” would be the following:

dsl.select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
   .union(select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF))
   .forEach(rec -> {
       dsl.insertInto(PEOPLE, PEOPLE.FIRST_NAME, PEOPLE.LAST_NAME)
          .values(rec.getValue(CUSTOMERS.FIRST_NAME), rec.getValue(CUSTOMERS.LAST_NAME))
          .execute();
   });

This is still very simple. How about this: let’s fetch a couple of records from the database, stream them, map them using some sophisticated Java function, and reduce them into a batch update statement! Whew… here’s the code:

dsl.selectFrom(BOOK)
   .where(BOOK.ID.in(2, 3))
   .fetch()
   .stream()
   .map(book -> book.setTitle(book.getTitle().toUpperCase()))
   .reduce(
       dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null))),
       (batch, book) -> batch.bind(book.getTitle(), book.getId()),
       (b1, b2) -> b1
   )
   .execute();

Awesome, right? Again, with comments:

// Here, we simply select a couple of books from the database
dsl.selectFrom(BOOK)
   .where(BOOK.ID.in(2, 3))
   .fetch()

// Now, we stream the result as a Java 8 Stream
   .stream()

// Now we map all book titles using the "sophisticated" Java function
   .map(book -> book.setTitle(book.getTitle().toUpperCase()))

// Now, we reduce the books into a batch update statement...
   .reduce(

// ... which is initialised with empty bind variables
       dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null))),

// ... and then we bind each book's values to the batch statement
       (batch, book) -> batch.bind(book.getTitle(), book.getId()),

// ... this is just a dummy combiner function, because we only operate on one batch instance
       (b1, b2) -> b1
   )

// Finally, we execute the produced batch statement
   .execute();
Awesome, right? Well, if you’re not too functional-ish, you can still resort to the “old ways” using imperative-style loops. Perhaps, your coworkers might prefer that:

BatchBindStep batch = dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null)));

for (BookRecord book : dsl.selectFrom(BOOK)
        .where(BOOK.ID.in(2, 3))
        .fetch()
) {
    batch.bind(book.getTitle(), book.getId());
}

batch.execute();

So, what’s the point of using Java 8 with jOOQ?

Java 8 might change a lot of things. Mainly, it changes the way we reason about functional data transformation algorithms. Some of the above ideas might’ve been a bit over the top. But the principal idea is that whatever your source of data is, if you think about that data in terms of Java 8 Streams, you can very easily transform (map) those streams into other types of streams, as we did with the books. And nothing keeps you from collecting books that contain changes into batch update statements for batch execution.
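The three-argument reduce() used in the jOOQ example above (identity, accumulator, combiner) works the same way on any JDK stream. Here is a minimal, plain-JDK sketch of the same “fold changed records into one batch-like container” idea, using a StringJoiner as a hypothetical stand-in for the BatchBindStep:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class ReduceToBatch {
    public static void main(String[] args) {
        // Hypothetical stand-in for the jOOQ example: instead of a
        // BatchBindStep we fold "changed" records into a StringJoiner,
        // using the three-argument reduce(identity, accumulator, combiner).
        List<String> changedTitles = Arrays.asList("animal farm", "1984");

        StringJoiner batch = changedTitles.stream()
                .map(String::toUpperCase)          // transform each record
                .reduce(new StringJoiner("; "),    // the "empty batch"
                        StringJoiner::add,         // bind one record to the batch
                        StringJoiner::merge);      // dummy combiner for sequential use

        System.out.println(batch); // ANIMAL FARM; 1984
    }
}
```

As in the jOOQ version, the combiner is only exercised by parallel streams, so a pass-through implementation is fine for sequential use.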

Another example is one where we claimed that Java 8 also changes the way we perceive ORMs. ORMs are very stateful, object-oriented things that help manage database state in an object-graph representation with lots of nice features like optimistic locking, dirty checking, and implementations that support long conversations. But they’re quite terrible at data transformation. First off, they’re much, much inferior to SQL in terms of data transformation capabilities. This is topped by the fact that object graphs and functional programming don’t really work well together, either.

With SQL (and thus with jOOQ), you’ll often stay on a flat tuple level. Tuples are extremely easy to transform. The following example shows how you can use an H2 database to query for INFORMATION_SCHEMA meta information such as table names, column names, and data types, collect that information into a data structure, before mapping that data structure into new CREATE TABLE statements:

DSL.using(connection)
   .select(
       COLUMNS.TABLE_NAME,
       COLUMNS.COLUMN_NAME,
       COLUMNS.TYPE_NAME)
   .from(COLUMNS)
   .orderBy(COLUMNS.TABLE_NAME, COLUMNS.ORDINAL_POSITION)
   .fetch()  // jOOQ ends here
   .stream() // Streams start here
   .collect(Collectors.groupingBy(
       r -> r.getValue(COLUMNS.TABLE_NAME),
       LinkedHashMap::new,
       Collectors.mapping(r -> r, Collectors.toList())))
   .forEach((table, columns) -> {
       // Just emit a CREATE TABLE statement
       System.out.println("CREATE TABLE " + table + " (");

       // Map each "Column" type into a String
       // containing the column specification,
       // and join them using comma and
       // newline. Done!
       System.out.println(columns.stream()
               .map(col -> "  " + col.getValue(COLUMNS.COLUMN_NAME) +
                           " " + col.getValue(COLUMNS.TYPE_NAME))
               .collect(Collectors.joining(",\n")));

       System.out.println(");");
   });

The above statement will produce something like the following SQL script:


That’s data transformation! If you’re as excited as we are, read on in this article how this example works exactly.


Java 8 has changed everything in the Java ecosystem. Finally, we can easily implement functional, transformative algorithms using Streams and lambda expressions. SQL is also a very functional and transformative language. With jOOQ and Java 8, you can extend data transformation directly from your type-safe SQL result into Java data structures, and back into SQL. These things aren’t possible with JDBC. These things weren’t possible prior to Java 8.

jOOQ is free and Open Source for use with Open Source databases, and it offers commercial licensing for use with commercial databases.

For more information about jOOQ or jOOQ’s DSL API, consider these resources:

Stay tuned for tomorrow’s article “How jOOQ helps pretend that your stored procedures are a part of Java”
This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

A conversational guide for JDK8’s lambdas – a glossary of terms

Last Advent I wrote a post about the new treats that JDK8 has in store for us. The feature I am most excited about is probably lambdas. I have to admit that in my first year of being the prodigal son (during which I developed in C#), I loved LINQ and the nice, elegant things you can do with it. Now, even if erasure is still in the same place where we left it last time, we have a better way to filter, alter, and walk collections, and besides the syntactic sugar, it might also make you properly use that quad-core processor you bragged about to your friends. And speaking of friends, this post is a cheat sheet of terms related to lambdas and stream processing that you can throw at your friends when they ask: “What’s that <term from JDK 8 related to lambdas>?”. It is not my intent to give a full list or a how-to-lambda guide; if I said something wrong or missed something, please let me know…

Functional interface:

According to [jls 8] a functional interface is an interface that has just one abstract method, and thus represents a single function contract (In some cases, this “single” method may take the form of multiple abstract methods with override-equivalent signatures ([jls7_8.4.2]) inherited from superinterfaces; in this case, the inherited methods logically represent a single method).
@FunctionalInterface – used to indicate that an interface is meant to be a functional interface. If the annotation is placed on an interface that is actually not a functional interface, a compile-time error occurs.
interface Runnable { void run(); }
The Runnable interface is a very appropriate example, as the only method present is the run method. Another Java “classic” example of a functional interface is the Comparator<T> interface. In the following example the interface declares both the compare method and an equals method; it is still functional because compare is the only abstract method, while equals merely overrides a public method of Object.

interface Comparator<T> {
  boolean equals(Object obj);

  int compare(T o1, T o2);
}



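To make the rule concrete, here is a small sketch with a hypothetical @FunctionalInterface of our own; the names (StringTransformer, transform) are made up for illustration. Note that default methods do not count against the single-abstract-method rule:

```java
public class FunctionalInterfaceDemo {

    // Hypothetical functional interface: exactly one abstract method,
    // so lambdas and method references can target it.
    @FunctionalInterface
    interface StringTransformer {
        String transform(String input);

        // default methods do not count against the single-abstract-method rule
        default StringTransformer andThen(StringTransformer next) {
            return s -> next.transform(this.transform(s));
        }
    }

    public static void main(String[] args) {
        StringTransformer trim = String::trim;
        StringTransformer upper = String::toUpperCase;

        System.out.println(trim.andThen(upper).transform("  hello  ")); // HELLO
    }
}
```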
stream - according to [oxford dictionary], in computing it is a continuous flow of data or instructions, typically one having a constant or predictable rate.

Starting with JDK 8, a stream represents a mechanism used for conveying elements from a data source through a computational pipeline. A stream can use arrays, collections, generator functions, or I/O channels as data sources.

Obtaining streams:

  • From a Collection via the stream() and/or parallelStream() methods;
  • From an array via Arrays.stream(Object[]);
  • From static factory methods on the stream classes, such as Stream.of(Object[]), IntStream.range(int, int) or Stream.iterate(Object, UnaryOperator);
  • The lines of a file can be obtained from BufferedReader.lines();
  • Streams of file paths can be obtained from methods in Files;
  • Streams of random numbers can be obtained from Random.ints();
  • Numerous other stream-bearing methods in the JDK, including BitSet.stream(), Pattern.splitAsStream(java.lang.CharSequence), and JarFile.stream().
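A short sketch exercising several of the stream sources listed above (all plain JDK 8 APIs):

```java
import java.util.Arrays;
import java.util.regex.Pattern;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamSources {
    public static void main(String[] args) {
        // From a varargs factory method
        Stream<String> names = Stream.of("Ada", "Alan", "Grace");

        // From an array
        IntStream fromArray = Arrays.stream(new int[]{1, 2, 3});

        // From a numeric range
        IntStream range = IntStream.range(0, 5); // 0,1,2,3,4

        // From a generator function (infinite, so limit() is required)
        Stream<Integer> powersOfTwo = Stream.iterate(1, n -> n * 2).limit(5);

        // From a regular expression split
        Stream<String> words = Pattern.compile(",").splitAsStream("a,b,c");

        System.out.println(names.count());                       // 3
        System.out.println(fromArray.sum());                     // 6
        System.out.println(range.count());                       // 5
        System.out.println(powersOfTwo.reduce(0, Integer::sum)); // 31
        System.out.println(words.count());                       // 3
    }
}
```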

stream operations – actions taken on the stream. From the point of view of stream manipulation there are two types of actions: intermediate and terminal operations.
stream intermediate operation – an operation that transforms a stream into another stream (often narrowing its content). Intermediate operations are lazy by nature: they do not actually traverse or alter the source, but describe a new stream. The traversal of the source begins only when a terminal operation is called.
  •  filter – filters the stream based on the provided predicate
  •  map – creates a new stream by applying the mapping function to each element from the initial stream (corresponding methods for each numeric type: int, long, double)
  •  flatMap – operation that has the effect of applying a one-to-many transformation to the elements of the stream, and then flattening the results elements into a new stream. For example, if orders is a stream of purchase orders, and each purchase order contains a collection of line items, then the following produces a stream of line items:
                     orderStream.flatMap(order -> order.getLineItems().stream())
  • distinct – returns a stream of the distinct elements
  • sorted – returns a stream of the elements in sorted order
  • peek – debug-focused method that returns a stream consisting of the elements of this stream, additionally performing the provided action on each element as it is consumed


     Stream.of("one", "two", "three", "four")
          .filter(e -> e.length() > 3)
          .peek(e -> System.out.println("Filtered value: " + e))
          .map(String::toUpperCase)
          .peek(e -> System.out.println("Mapped value: " + e))
          .collect(Collectors.toList());


  • limit – returns a truncated version of the current stream (no more than the limit number of elements)
  • substream – returns a truncated stream consisting of the remaining elements starting from startPosition, or between startPosition and endPosition (note: this method was dropped from the final JDK 8 API in favour of skip and limit)
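A sketch chaining several of the intermediate operations above; note that nothing is traversed until the terminal collect() call:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class IntermediateOps {
    public static void main(String[] args) {
        List<List<Integer>> nested = Arrays.asList(
                Arrays.asList(3, 1, 2),
                Arrays.asList(2, 4));

        // Intermediate operations only describe the pipeline; the data
        // flows only when the terminal collect() below is invoked.
        List<Integer> result = nested.stream()
                .flatMap(List::stream)   // flatten: 3,1,2,2,4
                .filter(n -> n > 1)      // 3,2,2,4
                .distinct()              // 3,2,4
                .sorted()                // 2,3,4
                .limit(2)                // 2,3
                .collect(Collectors.toList());

        System.out.println(result); // [2, 3]
    }
}
```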
stream terminal operation – traverses the stream to produce a result or a side-effect. After the execution of the terminal operation, the stream is considered consumed (calling another operation on a consumed stream will throw an IllegalStateException). Terminal operations are eager in nature, with the exception of iterator() and spliterator(), which provide an extension mechanism for those who don’t find the needed functionality in the API.
  • forEach – applies the provided operation to every element of the stream. Also the forEachOrdered version exists
  • toArray – extracts the elements of the stream into an array
  • reduce  – reduction method
  • collect – mutable reduction method
  • min     – computes the minimum of the stream
  • max     – computes the maximum of the stream
  • count   – counts the elements of the stream
  • anyMatch – returns true if there is an element matching the provided criteria
  • allMatch – returns true if all the elements match
  • noneMatch – returns true if none of the elements match 
  • findFirst – finds the first element that matches the provided condition
  • findAny – returns an element from the stream
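A few of the terminal operations above in action, including the IllegalStateException thrown when a consumed stream is reused:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class TerminalOps {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(5, 3, 8, 1);

        System.out.println(numbers.stream().count());                     // 4
        System.out.println(numbers.stream().anyMatch(n -> n > 7));        // true
        System.out.println(numbers.stream().allMatch(n -> n > 0));        // true
        System.out.println(numbers.stream().min(Integer::compare).get()); // 1

        // A stream is consumed by its terminal operation; reusing it fails.
        Stream<Integer> s = numbers.stream();
        s.count();
        try {
            s.count();
        } catch (IllegalStateException e) {
            System.out.println("stream has already been operated upon or closed");
        }
    }
}
```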
stream pipeline: consists of a source, followed by zero or more intermediate operations and a terminal operation.
spliterator – an object for traversing and partitioning the elements of a source. One can use it to traverse elements, estimate the element count, or split it into multiple spliterators.
Reduction – a reduction operation (or fold) takes a sequence of input elements and combines them into a single summary result by repeated application
of a combining operation. A reduction operation can be computing the sum, max, min, count or collecting the elements in a list. The reduction operation
is also parallelizable as long as the function(s) used are associative and stateless. The method used for reduction is reduce()
Ex: reduction using sum (numbers being a collection of integers):
int sum = numbers.stream().reduce(0, (x, y) -> x + y);
int sum = numbers.stream().reduce(0, Integer::sum);
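The same reduction also runs safely in parallel, because Integer::sum is associative and stateless (numbers here is an illustrative list):

```java
import java.util.Arrays;
import java.util.List;

public class ReductionDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        int sum = numbers.stream().reduce(0, Integer::sum);
        int max = numbers.stream().reduce(Integer.MIN_VALUE, Integer::max);

        // Because Integer::sum is associative and stateless, the same
        // reduction produces the same result on a parallel stream.
        int parallelSum = numbers.parallelStream().reduce(0, Integer::sum);

        System.out.println(sum);         // 15
        System.out.println(max);         // 5
        System.out.println(parallelSum); // 15
    }
}
```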
Mutable reduction – an operation that accumulates input elements into a mutable result container (such as a StringBuilder or a Collection) as it processes the elements in the stream. The method used for mutable reduction is collect().
Ex: String concatenated = strings.stream().collect(StringBuilder::new, StringBuilder::append, StringBuilder::append).toString();
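The three-argument collect() makes the mutable-reduction idea explicit: a supplier creates the container, an accumulator folds elements into it, and a combiner merges containers (needed for parallel streams). Collectors.joining is the pre-packaged equivalent:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MutableReduction {
    public static void main(String[] args) {
        List<String> strings = Arrays.asList("a", "b", "c");

        // reduce() with String::concat would copy the string on every step;
        // collect() accumulates into a single mutable StringBuilder instead.
        String concatenated = strings.stream()
                .collect(StringBuilder::new,      // supplier: new container
                         StringBuilder::append,   // accumulator: add one element
                         StringBuilder::append)   // combiner: merge two containers
                .toString();

        String joined = strings.stream().collect(Collectors.joining(", "));

        System.out.println(concatenated); // abc
        System.out.println(joined);       // a, b, c
    }
}
```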
Predicate – functional interface that determines whether the input object matches some criteria
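A quick sketch of Predicate, including its default composition methods and(), or() and negate():

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateDemo {
    public static void main(String[] args) {
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        Predicate<String> shortWord = s -> s.length() <= 3;

        // Predicates compose via default methods and(), or() and negate()
        Predicate<String> shortAndNonEmpty = nonEmpty.and(shortWord);

        List<String> words = Arrays.asList("", "cat", "giraffe", "owl");
        List<String> result = words.stream()
                .filter(shortAndNonEmpty)
                .collect(Collectors.toList());

        System.out.println(result); // [cat, owl]
    }
}
```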
I hope you find this condensed list beneficial and you keep it in your bookmarks for those moments when you need all these terms on one page.
If you find something that is missing, please let me know so I can correct it.
So…I wish you a nice Advent Time and a Happy/Jolly/Merry but most important of all I wish you a peaceful Christmas!
 Useful resources:
Most of the information was extracted from [stream]. I found the very comprehensive javadoc of the given classes particularly useful.
 [jls7_8.4.2] –

Lambda Related resources on +ManiSarkar‘s recommendation:


Java – The 2012 Review and Future Predictions

Hi all,
It’s a real privilege to close off the Java Advent calendar for 2012. It’s a wonderful initiative and I (like many of you) eagerly checked my feed every morning for the next addition. A big Thank You! has to go to +Attila-Mihaly Balazs and all of the other authors for rounding off 2012 in some style.
This post will focus on the events big and small that occurred in 2012 and also take a look at some future predictions for 2013. Some of the predictions will be honest guesses; others… well, let’s just say that my Diabolical side will have taken over :-).
So without further ado, let’s look at the year that was 2012 for Java…

2012 – A Year in Review

2012 was a rocking year for Java, the JVM and the community. James Governor (RedMonk analyst) stated that “2012 was the dawning of a 2nd age for Java”.

Java enters the cloud (for real this time)

Java/JVM based cloud offerings became a serious reality in 2012 with a host of new PAAS and IAAS offerings. Cloudbees, JElastic, Heroku, Joyent, Oracle are just five of the large number of offerings out there now.

What does that mean for you as a developer? Well, it means lots of choice and the ability to try out this space very cheaply. I highly recommend that you try some of these providers out over the holidays (it takes minutes to set-up a free account) and see what all of the fuss is about.

Counter to this however is a lack of standardisation in this space and although JEE8 promises to change this (assuming the vendors get on board) – for the next few years you’ll need to be careful about being locked into a particular platform. If you’re a bit more serious about having agnostic services/code running on the various offerings then I can recommend looking at the jClouds API to assist you.

It’s fair to say that many of the offerings are still feeling their way in terms of getting the most out of the JVM. In particular multi-tenancy is an issue, as is Garbage Collection and performance on a virtualised environment.  Companies such as Waratek and jClarity (Disclaimer: I’m their CTO) now offer solutions  to alleviate those gaps.

The Java community thrives

The community continues to thrive despite many mainstream tech media reports of “developers leaving the Java platform” or “Java is dead”. There are more Java User Groups (JUGs) than ever before, consisting of ~400,000 developers worldwide. Notably, one of them, the London Java Community, won several awards including the Duke’s Choice Award and JCP Member of the Year (along with SouJava – the major Brazilian JUG).

The conference circuit is bursting at the seams with large, sold out in advance, world-class Java conferences such as JFokus, Devoxx and of course JavaOne. In addition to this the host of regional conferences that often pack in an audience of over 1000 people all continued to do well.

Oracle’s Java Magazine was launched and has grown to over 100,000 subscribers. Stalwarts like JaxEnter, Coderanch and the Javaposse continue to grow in audience sizes.


OpenJDK

Further OpenJDK reforms happened over 2012, and a new scorecard is now in place for the wider community to give feedback on governance, openness and transparency. 2012 also saw a record number of individuals and organisations joining OpenJDK. In particular, the port to the ARM processor and support for running Java on graphics cards (Project Sumatra) were highlights this year.

Java Community Process (JCP)

The Java Community Process (JCP), Java’s standards body also continued its revival with record numbers of new sign-ups and a hotly contested election. As well as dealing with the important business of trademarks, IP and licensing for Java, a re-focus on the technical aspects for Java Specification Requests (JSRs) occurred. In particular the new Adopt a JSR programme is being strongly supported by the JCP.

Java and the JVM

The JVM continues to improve rapidly through OpenJDK – the number of Java Enhancement Proposals (JEPs) going into Java 8 is enormous. Jigsaw dropping out was disappointing, but given the lack of broader vendor support and the vast amount of technical work required, it was the correct decision.

JEE / Spring

JEE7 is moving along nicely (and will be out soon), bringing Java developers a standard way to deal with the modern web (JSON, Web Sockets, etc). Of course many developers are already using the SpringSource suite of APIs but it’s good to see advancement in the underlying specs.

Rapid Web Development

Java/JVM based rapid web development frameworks are finally gaining the recognition they deserve. Frameworks like JBoss’s SEAM, Spring Roo, Grails, Play etc all give Java developers parity with the Rails and Django crowd.

Mechanical Sympathy

A major focus of 2012 was on Mechanical Sympathy (as coined by Martin Thompson in his blog). The tide has turned, and we now have to contend with having multi-core machines and virtualised O/S’s. Java developers have had to start thinking about how Java and the JVM interacts with the underlying platform and hardware.

Performance companies like jClarity are building tooling to help developers understand this complex space, but it certainly doesn’t hurt to get those hardware manuals off the shelf again!

2013 – Future predictions

It’s always fun to gaze into the crystal ball and here are my predictions for 2013!

Java 8 will get delivered on time

Java 8 with Nashorn, Lambda, plus a port to the ARM processor will open up loads of new opportunities for developers working on the leading edge of web and mobile tech. I anticipate rapid adoption of Java 8 (much faster than 7).

However, the lack of JVMs present on iOS and Android devices will continue to curtail adoption there.

Commercial Java in the cloud

2013 will be the year of commercial Java/JVM in the cloud – many of the kinks will get ironed out with regards to multi-tenancy and memory management, and a rich SaaS ecosystem will start to form.

The organisations that enable enterprises to get their in house Java apps out onto the cloud will be the big commercial winners.

We’ll also see some consolidation in this space as the larger vendors snap up smaller ones that have proven technology.


OpenJDK will continue to truly open up with a public issue tracker based on JIRA, a distributed build farm available to developers and a far superior code review and patch system put in place.

Oracle, IBM and other major vendors have also backed initiatives to bring their in house test suites out into the open, donating them to the project for the good of all. 

JVM languages and polyglot

There will be a resurgence in Groovy thanks to its new static compilation capability and improved IDE tooling. Grails in particular will look like an even more attractive rapid development framework as it will offer decent performance for midrange web apps. 

Scala will continue to be hyped but will only be used successfully by small focused teams.  Clojure will continue to be popular for small niche areas.  Java will still outgrow them all in terms of real numbers and percentage growth.

A random prediction is that JRuby may well entice over Rails developers that are looking to take advantage of the JVM’s performance and scalability.

So that’s it from me, it was an amazing 2012 and I look forward to spending another year working with many of you and watching the rest making dreams into reality!

Martijn (@karianna – CTO of jClarity – aka “The Diabolical Developer”)

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!

Java – far sight look at JDK 8

The world is changing slowly but surely. After the changes that gave Java a fresher look with JDK 7, the Java community is looking forward to the rest of the improvements that will come with JDK 8 and probably JDK 9. The targeted purpose of JDK 8 is to fill in the gaps in the implementation of JDK 7 – the remaining puzzle pieces, which should be available to the broad audience in late 2013, are meant to improve and boost the language in three particular directions:

  • productivity
  • performance
  • modularity

So from next year, Java will run everywhere (mobile, cloud, desktop, server etc.), but in an improved manner. In what follows I will provide a short overview of what to expect from 2013 – just in time for New Year’s resolutions – and afterwards I will focus mainly on the productivity side, with emphasis on Project Lambda and how its introduction will affect the way we code.


In regards to productivity, JDK 8 targets two main areas:

  • collections – a more facile way to interact with collections through literal extensions brought to the language
  • annotations – enhanced support for annotations, allowing them to be written in contexts where they are currently illegal (e.g. primitives)
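To make the annotation enhancement tangible, here is a minimal sketch of an annotation declared with the new TYPE_USE target from JSR 308 – the @NonNull annotation itself is hypothetical and only illustrative, not part of the JDK:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

public class TypeAnnotationDemo {
    // hypothetical annotation applicable to any use of a type (JSR 308 style)
    @Target(ElementType.TYPE_USE)
    public @interface NonNull {}

    public static void main(String[] args) {
        // with a TYPE_USE target, the annotation may appear wherever a type is used
        @NonNull String s = "annotated type use";
        System.out.println(s);
    }
}
```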


The addition of the Fork/Join framework to JDK 7 was the first step that Java took in the direction of multicore CPUs. JDK 8 takes this road even further by bringing closure support to Java (lambda expressions, that is). Probably the most affected part of Java will be the collections: the closures combined with the newly added interfaces and functionality push the Java containers to the next level. Besides allowing more readable and shorter code, by handing the collections a lambda expression that is executed internally, the platform can take advantage of multicore processors.


One of the most interesting pieces for the community was Project Jigsaw: “The goal of this Project is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK.” I am using the past tense because, for those of us that were hoping to get rid of classpaths and classloaders, we have to postpone our excitement until Java 9, to which Project Jigsaw has been postponed.
To have a clearer picture, here is the remaining Java roadmap for 2013:

2013/01/31 M6 Feature Complete
2013/02/21 M7 Developer Preview
2013/07/05 M8 Final Release Candidate
2013/09/09 GA General Availability

Besides Project Jigsaw, another big and exciting change that will come in this version is the support for closures. Provided through lambda expressions, they will improve key points of the JDK.


Getting started

First of all one should get a lambda-enabled SDK. There are two ways to obtain one:

 * the one intended for the brave ones: build it from the sources 
 * the convenient one: downloading an already compiled version of the SDK 

 Initially I started with building it from the sources, but due to the lack of time and too many warnings related to environment variables, I opted for the lazy approach and took the already-built JDK. The other important tool is a text editor to write the code. Until now, typically the JDK release came first and an enabled IDE came out after a period of time. This time it is different, maybe also due to the transparency and the broad availability of the SDK through OpenJDK: some days ago the first Java 8 enabled IDE was released by JetBrains. So IntelliJ IDEA version 12 is the first IDE to provide support for JDK 8, among other improvements. For testing purposes I used IntelliJ 12 Community Edition together with JDK 8 b68, on a Windows 7 x64 machine. For those of you that prefer NetBeans, a nightly build with lambda support is available for download.

Adjusting to the appropriate mindset

Before starting to write improved and cleaner code using the newly provided features, one must get a grasp of a couple of new concepts – I needed to, anyway.

  • What is a lambda expression? The easiest way to see a lambda expression is just like a method: “it provides a list of formal parameters and a body – an expression or block – expressed in terms of those parameters”. The parameters of a lambda expression can be either declared or inferred; when the formal parameters have inferred types, these types are derived from the functional interface type targeted by the lambda expression. From the point of view of the returned value, a lambda expression can be void-compatible – it doesn’t return anything – or value-compatible – every execution path returns a value.
    Examples of lambda expressions:
    (a) (int a, int b) -> a + b

    (b) (int a, int b) -> {
            if (a > b) {
                return a;
            } else if (a == b) {
                return a * b;
            } else {
                return b;
            }
        }
  • What is a functional interface? A functional interface is an interface that contains just one abstract method, and hence represents a single method contract. In some situations the single method may have the form of multiple methods with override-equivalent signatures; in this case all the methods represent a single method. Besides the typical way of creating an interface instance by creating and instantiating a class, functional interface instances can also be created by usage of lambda expressions, method references or constructor references.
    Example of functional interfaces:
    // custom built functional interface
    public interface FuncInterface {
        public void invoke(String s1, String s2);
    }
    Examples of functional interfaces from the Java API include java.lang.Runnable, java.util.Comparator and java.util.concurrent.Callable.


    So let’s see how the starting of a thread might change in the future:
    OLD WAY:

       new Thread(new Runnable() {
           public void run() {
               for (int i = 0; i < 9; i++) {
                   System.out.println(String.format("Message #%d from inside the thread!", i));
               }
           }
       }).start();

    NEW WAY:

       new Thread(() -> {
           for (int i = 0; i < 9; i++) {
               System.out.println(String.format("Message #%d from inside the thread!", i));
           }
       }).start();

    Even if I haven’t written any Java Swing or AWT related functionality for some time, I have to admit that lambdas will give a breath of fresh air to Swing developers. Action listener addition:

      JButton button = new JButton("Click");

    // NEW WAY:
    button.addActionListener((e) -> {
        System.out.println("The button was clicked!");
    });

    // OLD WAY:
    button.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            System.out.println("The button was clicked using old fashion code!");
        }
    });
  • Who/What is SAM? SAM stands for Single Abstract Method, so to cut some corners we can say that SAM == functional interface. Even though in the initial specification abstract classes with only one abstract method were also considered SAM types, they were later excluded – and some people found, or guessed, the reason why.
  • Method/Constructor referencing. Lambdas sound all nice, but isn’t the need for a functional interface somewhat restrictive? Does this mean that I can use only interfaces that contain a single abstract method? Not really – JDK 8 provides an aliasing mechanism that allows “extraction” of methods from classes or objects. This can be done by using the newly added :: operator. It can be applied to classes – for extraction of static methods – or to objects – for extraction of instance methods. The same operator can be used for constructors as well.

    interface ConstructorReference<T> {
        T constructor();
    }

    interface MethodReference {
        void anotherMethod(String input);
    }

    public class ConstructorClass {
        String value;

        public ConstructorClass() {
            value = "default";
        }

        public static void method(String input) {
            // operations
        }

        public void nextMethod(String input) {
            // operations
        }

        public static void main(String... args) {
            // constructor reference
            ConstructorReference<ConstructorClass> reference = ConstructorClass::new;
            ConstructorClass cc = reference.constructor();

            // static method reference
            MethodReference mr = ConstructorClass::method;

            // object method reference
            MethodReference mr2 = cc::nextMethod;
        }
    }


  • Default methods in interfaces. This means that from version 8 Java interfaces can contain method bodies, so to put it simply, Java will support multiple inheritance of behaviour without the headaches that usually come with it. Also, by providing default implementations for interface methods one can ensure that adding a new method will not create chaos in the implementing classes. JDK 8 adds default methods to interfaces like java.util.Collection and java.util.Iterator and through this provides a mechanism to better use lambdas where they are really needed.
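As a small illustrative sketch of the mechanism (the Greeter interface below is made up for the example, not taken from the JDK): a default method supplies a body that implementing classes inherit without writing any code.

```java
public class DefaultMethodDemo {
    public interface Greeter {
        String name();

        // default method: implementing classes inherit this body for free
        default String greet() {
            return "Hello, " + name() + "!";
        }
    }

    public static void main(String[] args) {
        // only the abstract method needs an implementation (here, via a lambda)
        Greeter g = () -> "Duke";
        System.out.println(g.greet()); // prints "Hello, Duke!"
    }
}
```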
    Notable interfaces added:

    Improved collections’ interaction. In my opinion all the changes that come with Project Lambda are great additions to the language that will align it with current-day standards and make it simpler and leaner, but probably the change with the biggest productivity impact and the biggest cool + wow effect is the revamping of the collections framework. No, there is no Collections 2 framework – we still have to cope with type erasure for now – but Java will make another important shift: from external to internal iteration. By doing so, it gives the developer a mechanism to filter and aggregate collections in an elegant manner and, besides this, to push for more efficiency: by providing a lambda expression that is executed internally, multicore processors can be used to their full power. Let’s consider the following scenarios:
    a. Considering a list of strings, select all of them that are written in uppercase. How would this be written? OLD WAY:


    List<String> inputList = new LinkedList<>();
    List<String> upper = new LinkedList<>();

    // add elements

    for (String currentValue : inputList) {
        if (currentValue != null && currentValue.matches("[A-Z0-9]*")) {
            upper.add(currentValue);
        }
    }

    NEW WAY:

    inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).into(upper);

    b. Consider that you would like to change all the extracted strings to lowercase. Using the JDK 8 way, this would look like:

    inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).map(String::toLowerCase).into(upper);

    c. And how about finding the total number of characters in the selected collection?

    int sumX = inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).map(String::length).reduce(0, Integer::sum);

    Used methods:

     default Stream stream() // java.util.Collection
    Stream filter(Predicate predicate) //
    IntStream map(IntFunction mapper) //

    d. What if I would like to take each element from a collection and print it?

     //OLD WAY:
    for (String current : list) {
        System.out.println(current);
    }

    //NEW WAY:
    list.forEach(x -> System.out.println(x));
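A caveat: the into(...) method used in these snippets comes from the preview build and did not survive into the final Java 8 API, where such pipelines end in collect(...). A self-contained sketch of scenarios a–c against the final API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CollectionPipelineDemo {
    // scenarios a + b: keep the uppercase entries, then lowercase them
    public static List<String> selectLowercased(List<String> input) {
        return input.stream()
                .filter(x -> x != null && x.matches("[A-Z0-9]*"))
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }

    // scenario c: total length of the selected entries
    public static int totalLength(List<String> input) {
        return input.stream()
                .filter(x -> x != null && x.matches("[A-Z0-9]*"))
                .map(String::length)
                .reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("ABC", "def", "GH1", null, "x");
        System.out.println(selectLowercased(input)); // [abc, gh1]
        System.out.println(totalLength(input));      // 6
    }
}
```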

    Besides the mentioned functionality, JDK 8 holds other interesting news as well, but for brevity’s sake I will stop here. More information can be found on the JDK 8 Project Lambda site or the webpage of JSR 337.
    To conclude, Java is moving forward and I personally like the direction it is heading; another point of interest will be the moment when library developers start adopting JDK 8 too. That will surely be interesting. Thank you for your time and patience, and I wish you a merry Christmas.


    Brian Goetz resource folder:
    Method/constructor references:

Disclaimer: This post is based on the JDK 8 lambda-enabled SDK from 15 December 2012; some features might be subject to change.