Quick Web App Prototyping with Spring Boot & MongoDB

Back in one of my previous projects I was asked to produce a little contingency application. The schedule was tight and the scope simple. The in-house coding standard is PHP, so getting a classic Java EE stack in place would have been a real challenge. And, to be really honest, completely oversized. So, what then? I took the chance and gave Spring a try. I had used it before, but only in old versions, hidden away in the tech stack of the portal software I was plagued with at the time.

My goal was to have something the WebOps can simply put on a server with Java installed and run. No fiddling with dozens of XML configuration files and no memory fine-tuning. Just as easy as java -jar application.jar.
This was the perfect job for “Spring Boot”. This Spring project is all about making it easy to bring you, the developer, up to speed and taking away the need for loads of configuration and boilerplate coding.

Another thing my project was crying out for was document-oriented data storage. I mean, the main purpose of the application was to offer a digital version of a real-world paper form. So why create a relational mess if we can represent the document as a document?! I had used MongoDB in a couple of small projects before, so I decided to go with it.

What has this got to do with this article? Well, I will show you how quickly you can bring together all the bits and pieces needed for a web application. Spring Boot will make a lot of things fairly easy and will keep the code minimal. And at the end you will have a JAR file, which is executable and can be deployed by just dropping it onto a server. Your WebOps will love you for it.

Let’s imagine we are about to create the next big product administration web application. As it is the next big thing, it needs a big name: Productr (this is the reason I am a software engineer and not in sales or marketing…).
Productr will do amazing things and this article will show you its early stages, which are:

  • providing a simple REST interface to query all available products
  • loading these products from a MongoDB
  • providing a production-ready monitoring facility
  • displaying all products by using a JavaScript UI

All you need to start is:

  • Java 8
  • Maven
  • Your favourite IDE (IntelliJ, Eclipse, vi, edlin, a butterfly…)
  • A browser (ok, or Internet Explorer / MS Edge, but who would really want this?!)

And for the impatient, the code is also available on GitHub.

Let’s get started

Create a pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.0.RELEASE</version>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <groupId>net.h0lg.tutorials.rapid</groupId>
    <artifactId>rapid-resting</artifactId>
    <version>1.0</version>


    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>


    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

In these few lines a lot of stuff is already happening. Most important is the defined parent project. This will bring us a lot of useful and needed dependencies like logging, the Tomcat runtime and lots more. Thanks to Spring’s modularity, everything is re-configurable via pom.xml or dependency injection. For getting everything up quickly the defaults are absolutely fine. (Convention over configuration, anybody?)
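A quick illustration of that re-configurability (not something this tutorial requires): most defaults can be overridden in a src/main/resources/application.properties file. For example, to move the embedded Tomcat off its default port 8080:

server.port=9000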

Now, create the obligatory Maven folder structure:

mkdir -p src/main/java src/main/resources src/test/java src/test/resources

And we are settled.

Start the engines

Let’s get to work. We want to offer a REST interface to get access to our huge amount of products. So let’s start with creating a REST collection available under /api/products. To do so we have to do a few things:

  1. Our “data model” holding all information about our incredible products needs to be created
  2. We need a controller offering a method which does everything necessary to answer a GET request
  3. Create the main entry point for our application

The data model is pretty simple and done quickly. Just create a package called demo.model and a class called Product in it. The Product class is very straightforward:

package demo.model;

import java.io.Serializable;

/**
 * Our very important and sophisticated data model
 */
public class Product implements Serializable {

    String productId;
    String name;
    String vendor;

    public String getProductId() {
        return productId;
    }

    public void setProductId(String productId) {
        this.productId = productId;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getVendor() {
        return vendor;
    }

    public void setVendor(String vendor) {
        this.vendor = vendor;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Product product = (Product) o;

        if (getProductId() != null ? !getProductId().equals(product.getProductId()) : product.getProductId() != null)
            return false;
        if (getName() != null ? !getName().equals(product.getName()) : product.getName() != null) return false;
        return !(getVendor() != null ? !getVendor().equals(product.getVendor()) : product.getVendor() != null);

    }

    @Override
    public int hashCode() {
        int result = getProductId() != null ? getProductId().hashCode() : 0;
        result = 31 * result + (getName() != null ? getName().hashCode() : 0);
        result = 31 * result + (getVendor() != null ? getVendor().hashCode() : 0);
        return result;
    }
}

Our product has the incredible amount of 3 properties: an alphanumeric product ID, a name and a vendor (just the name, to keep things simple). It is serialisable and the getters, setters and the methods equals() & hashCode() are implemented by using my IDE’s code generation.

Alright, now to create a controller with a method serving the GET requests. Go back to your favourite IDE and create the package demo.controller and a class called ProductsController with the following content:

package demo.controller;

import demo.model.Product;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping(value = "/", method = RequestMethod.GET)
public class ProductsController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        List<Product> products = new ArrayList<>();

        return products;
    }

}

This is really everything you need to provide a REST interface. Ok, at the moment an empty list is returned, but it shows how easy the definition is.

The last thing missing is an entry point for our application. Just create a class called ProductrApplication in the package demo and give it the following content:

package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * This is the entry point of our application
 */
@SpringBootApplication
public class ProductrApplication {

    public static void main (String... opts) {
        SpringApplication.run(ProductrApplication.class, opts);
    }

}

Spring Boot saves us a lot of keystrokes. @SpringBootApplication does a few things we would need for every web application anyway. This annotation is shorthand for the following ones:

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan
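In other words, the entry point could equivalently (if more verbosely) be declared like this:

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class ProductrApplication {

    public static void main (String... opts) {
        SpringApplication.run(ProductrApplication.class, opts);
    }

}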

Now it is time to start our application for the first time. Thanks to Spring Boot’s maven plugin, which we configured in our pom.xml, starting the application is as easy as: mvn spring-boot:run. Just run this command in your project root directory. You prefer the lazy point-n-click way provided by your IDE? Alright, just instruct your favourite IDE to run ProductrApplication.

Once it is started, use a browser, a REST client (you should check out Postman, I love this tool) or a command line tool like curl. The address you are looking for is: http://localhost:8080/api/products/. So, with curl, the command looks like this:


curl http://localhost:8080/api/products/

Data please

Ok, returning an empty list isn’t that shiny, is it? So let’s bring in data.
In many projects a classic relational database is usually overkill (and painful if you have to use it AND scale out). This may be one reason for the hype around NoSQL databases. One (in my opinion good) example is MongoDB.

Getting MongoDB up and running is pretty easy. On Linux you can use your package manager to install it. For Debian / Ubuntu, for example, simply do: sudo apt-get install mongodb.

For Mac, the easiest way is homebrew: brew install mongodb and follow the instructions in the “Caveats” section.

Windows users should go with the MongoDB installer (and fingers crossed).

Alright, we just got our data store sorted. It is about time to use it.
There is one particular Spring project dealing with data – called Spring Data. And by sheer coincidence a sub-project called Spring Data MongoDB is just waiting for us. Even more, Spring Boot provides a dependency package to get up to speed instantly. No wonder that the following few lines in the pom.xml‘s <dependencies> section are enough to bring in everything we need:


  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
  </dependency>

Now, create a new package called demo.domain and put in a new interface called ProductRepository. Spring provides a pretty neat way to get rid of the code which is usually needed to interact with a data source. Most of the basic queries are generated by Spring Data – all you need to do is define an interface. A couple of query methods are available without even declaring method headers. One example is the findAll() method, which will return all entries in the collection.
But hey, let’s see it in action instead of talking about it. The aforementioned ProductRepository interface should look like this:

package demo.domain;

import demo.model.Product;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 * This interface lets Spring generate a whole Repository implementation for
 * Products.
 */
public interface ProductRepository extends MongoRepository<Product, String> {

}
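As an aside: should we later need lookups beyond the built-in CRUD methods, Spring Data can derive queries from method names alone. A hypothetical extension (not needed for this tutorial) might look like this:

package demo.domain;

import demo.model.Product;
import org.springframework.data.mongodb.repository.MongoRepository;

import java.util.List;

public interface ProductRepository extends MongoRepository<Product, String> {

    // Spring Data derives the query from the method name; no implementation needed
    List<Product> findByVendor(String vendor);

}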

Next, create a class called ProductService in the same package. The purpose of this class is to provide some useful methods to query products. For now, the code is as easy as this:

package demo.domain;

import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * This is a little service class we will let Spring inject later.
 */
@Service
public class ProductService {

    @Autowired
    private ProductRepository repository;

    public List<Product> getProducts() {
        return repository.findAll();
    }

}

See how we can use repository.findAll() without even defining it in the interface? Pretty slick, isn’t it? Especially if you are in a hurry and need to get things up quickly.

Alright, so far we have prepared the foundation for the data access. I think it is time to wire it together. To do so, simply head back to our class demo.controller.ProductsController and modify it slightly. All we have to do is inject our shiny new ProductService and call its getProducts() method. The class will look like this afterwards:

package demo.controller;

import demo.domain.ProductService;
import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping("/api/products/")
public class ProductsController {

    // Let Spring DI inject the service for us
    @Autowired
    private ProductService productService;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        // Ask the data store for a list of products
        return productService.getProducts();
    }

}

That’s it. Start MongoDB (if not already running), start our application again (remember the mvn spring-boot:run thingy?!) and start another GET request to http://localhost:8080/api/products/:


$ curl http://localhost:8080/api/products/
[]

Wait, still an empty list? Yes, or do you remember us putting anything into the database? Let’s change this by using the following command:


mongo localhost/test --eval "db.product.insert({productId: 'a1234', name: 'Our First Product', vendor: 'ACME'})"

This adds one product called “Our First Product” to our database. Ok, so what is our service returning now? This:

$ curl http://localhost:8080/api/products/
[{"productId":"5657654426ed9d921affc3c0","name":"Our First Product","vendor":"ACME"}]

Easy, wasn’t it?!

Looking for a little more data but no time to create it yourself? Alright, it’s nearly Christmas, so take my little test selection:

curl https://gist.githubusercontent.com/daincredibleholg/f8667a26ce2f17776903/raw/ed9b4c8ec6c9c455dc063e833af2418648928ba6/quick-web-app-product-example.json | mongoimport -d test -c product --jsonArray

Basic requirements at your fingertips

In today’s hectic days and with “microservice” culture spreading, it is getting harder and harder to keep an eye on what is really running on your servers or cloud environments. So in nearly all environments I have worked on over the last few years, monitoring was a big thing. One common pattern is to provide health check endpoints. One can find everything from simple ping endpoints to health metrics returning a detailed overview of business-relevant metrics.
Most of the time, all of this is a copy-and-paste adventure involving a lot of boilerplate code. With Spring Boot, here is all we have to do – simply add the following dependency to your pom.xml:


  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>

and restart the service. Let’s have a look what happens if we query http://localhost:8080/health:


$ curl http://localhost:8080/health
{"status":"UP","diskSpace":{"status":"UP","total":499088621568,"free":83261571072,"threshold":10485760},"mongo":{"status":"UP","version":"3.0.7"}}

This should provide sufficient data for a basic health check. If you followed the startup log messages, you’ve probably spotted a number of other endpoints. Experiment a bit and check the Actuator documentation for more information.
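And if the built-in checks aren’t enough, adding your own is a one-class exercise. A minimal sketch (the productCount detail is made up for illustration):

package demo;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ProductHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // report UP together with an arbitrary detail value
        return Health.up().withDetail("productCount", 42).build();
    }
}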

Show it to me

Ok, we got ourselves a REST service and some data. But we want to show this data to our users. So let’s go on and provide a page with an overview of our awesome products.

Thank Santa that there is a really active web UI community working on loads of nice and easy-to-use frontend frameworks and libraries. One pretty popular example is Bootstrap. It is easy to use and all the needed bits and pieces are provided via open CDNs.

We want to have a short overview of our products, so a table view would be nice. Bootstrap Table will help us with that. It is built on top of Bootstrap and also available via CDNs. What a world we live in…

But wait, where to put our HTML file? Spring Boot makes it easy, again. Just create a folder called src/main/resources/static and create a new HTML file called index.html with the following content:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <title>Productr</title>

    <!-- Import Bootstrap CSS from CDNs -->
    <link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
    <link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/bootstrap-table/1.9.1/bootstrap-table.min.css">
</head>
<body>
<nav class="navbar navbar-inverse">
    <div class="container">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Productr</a>
        </div>
        <div id="navbar" class="collapse navbar-collapse">
            <ul class="nav navbar-nav">
                <li class="active"><a href="#">Home</a></li>
                <li><a href="#about">About</a></li>
                <li><a href="#contact">Contact</a></li>
            </ul>
        </div><!--/.nav-collapse -->
    </div>
</nav>
    <div class="container">
        <table data-toggle="table" data-url="/api/products/">
            <thead>
            <tr>
                <th data-field="productId">Product Reference</th>
                <th data-field="name">Name</th>
                <th data-field="vendor">Vendor</th>
            </tr>
            </thead>
        </table>
    </div>


<!-- Import Bootstrap, Bootstrap Table and JQuery JS from CDNs -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/bootstrap-table/1.9.1/bootstrap-table.min.js"></script>
</body>
</html>

This file isn’t very complex. It is just an HTML file which includes the minimised CSS files from the CDNs. If you see a reference like //maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css for the first time: it is not a mistake that the protocol (http or https) is missing. A resource referenced that way will be loaded via the same protocol the main page was loaded with. Say, if you use http://localhost:8080/, it will use http: to load the CSS files.

The <body> block contains a navigation bar (using the HTML5 <nav> tag) and a table. The interesting part of this table definition is the provided data-url attribute. It is interpreted by Bootstrap Table to load the data. Our definition points to our previously created REST endpoint.
Which part of our JSON objects is used in which column is defined via the data-field attributes on the <th> definitions. Can you spot the matching attribute names?

Last but not least we load the needed JavaScript libraries. All Bootstrap-related JavaScript functionality needs JQuery, so this is the first library to load. Followed straight by the main Bootstrap and the Bootstrap Table JavaScript files. Each of these library files is loaded in the minimised version, to keep download times at a minimum.

Where to go now

It is fair to say that we have a really simple web application now. Well, the main purpose of this article was to show you how to get up to speed with as little code as possible. You’ve seen that sometimes just a dependency in your POM file brings you a completely new feature, without the need for a single additional line of code.
Take a step back, look at what we’ve built so far and think about the next steps needed. And just start to take a look around in the Spring universe.

I think one of the most crucial next steps, besides adding the missing tests, is to bring in security. Check out Spring Security and its subproject Spring Security OAuth.
More interested in “classic” web pages? Check out Spring MVC and how easy it is to integrate quite sophisticated template engines (e. g. by following this guide).

Hopefully, you enjoyed this article as much as I enjoyed its creation. I wish you all a merry Christmas and if the one or the other wants to get in touch, you can find me e. g. on Twitter, G+ and LinkedIn.

Migrating Spring App to MicroServices App on AWS

The company I am working for has recently gone through a migration, refactoring our code base from a monolithic application (a Java Spring WAR) into a microservices application hosted on the Amazon PaaS (specifically Beanstalk and CloudFront). In this blog post I provide a small and simple sales demo application and discuss the steps required to refactor it so that it can run within the Beanstalk/S3/CloudFront environments.

For the purposes of this blog, I will be using a SalesTax demo application; the code can be found here (https://github.com/shannonlal/salesdemo). This site will provide users with a list of products and give them the ability to create an order and apply sales tax. I have created a more detailed guide, which includes steps for creating the different services in AWS. The guide can be found at this location (https://github.com/shannonlal/salesdemo/AWS-MigrationGuide.pdf). The following is a diagram of the Spring architecture:

[Figure: the original monolithic Spring architecture]

 

The above architecture is a pretty standard Spring architecture for most monolithic web applications. In our migration, we broke up our code and separated the backend services from the front-end content: JSPs (now HTML), CSS and JS. The following is a diagram illustrating how we controlled access:

[Figure: the target model of how we controlled access]

Amazon Web Services

I am going to start by explaining at a high level what these different components in AWS are and how we integrate them.

 

Route 53

Route 53 is a Domain Name Service (https://aws.amazon.com/route53/) which allows you to route traffic to different internal AWS services. In our model we used Route 53 to host our DNS entries (for example www.mycompany.com).

 

S3

Amazon S3 (https://aws.amazon.com/s3/) is a simple storage service which allows you to store content (html, css and js files) in buckets in the cloud. In this demo we will be using Amazon S3 to host the static content (html, css, and js).

 

Beanstalk

Beanstalk (https://aws.amazon.com/elasticbeanstalk/) is an application stack which will be used to host our individual services. Beanstalk supports multiple stacks (Tomcat, PHP, Node, Ruby, Go, .NET). In this demo we will be using Beanstalk to host our different web services (as Spring WARs running on Tomcat).

 

RDS

Amazon Relational Database Service (RDS, https://aws.amazon.com/rds/) will be used to host our database. We will create an RDS database and our web services will connect to it.

 

CloudFront

Amazon CloudFront is the glue that ties all your different services together under one common URL. We will define an origin (which will correspond to our URL defined in Route 53, www.mycompany.com). When the user hits this URL, Route 53 will route the traffic to CloudFront. CloudFront will host the content and push it to edge locations around the world. In CloudFront you are able to redirect traffic based on URL patterns. For example, anyone coming to the default pattern (/*) can be redirected to a bucket in S3 which hosts your static content (i.e. html, css, images). If they come to, say, an API URL (/api/products), you can route them to a Beanstalk service in the backend.

Infrastructure Security

In our production systems we have all our web services hidden behind different VPCs and have implemented network rules to restrict access to our backend services. I do not think I will have time to address this in this blog, but I will try to talk about it in my next post.

 

Application Security

One major component I have not included in the sales demo is Spring Security. In our application, we removed Spring Security and replaced its access control with an API Gateway. I will discuss this concept briefly at the end of this blog.

 

NOTE: AWS is a very sophisticated and complex ecosystem that provides multiple ways to integrate these different services. The model I will be discussing is similar to the model which we implemented at our company.

 

SalesTax Application Overview

 

The SalesTax demo application will look like a traditional Spring application with one exception: the JSP pages do not follow the traditional Spring MVC model, with data being passed from the controller and the JSP pages rendering the view. Instead we are using Angular, which makes REST calls to the backend controllers and renders the content in the browser. The reason we are doing this is so that we can migrate our static content (html, css, js files) to S3 buckets and have our backend services run in Beanstalk.

 

I have created a guide which provides step-by-step instructions, with pictures, on how to set up your environment in AWS. You can find a link to the document on GitHub at this location. The rest of this post will provide a summary of the process, with references to the guide. If you would like to try this in your own AWS setup, I recommend the detailed guide here (https://github.com/shannonlal/salesdemo/AWS-MigrationGuide.pdf).

 

Migration Process

 

The following section will provide a high-level overview of the migration process. Again if you would like to try this out for yourself, I would recommend using the detailed guide.

 

Deploy Application to Beanstalk

 

The first step will be to build the application and deploy it into a Beanstalk instance. To check out the code, run the following command:

git clone -b step0 https://github.com/shannonlal/salesdemo

 

You can import the project into your IDE (Eclipse, NetBeans, STS, etc) or you can just build this from the command line. To build the project run the following commands:

 

mvn clean install

 

Once the WAR has been built, log into the AWS Administration console and deploy your WAR in a new Beanstalk instance. For detailed instructions, see the install guide.

 

Configure CloudFront to point to your Beanstalk Instance

 

Log into the Amazon Console and click on the CloudFront link. At this point you have two options:

- Use your own domain name (www.example.com)

- Use the default provided by CloudFront (this will look something like https://xxxxxxxxxx.cloudfront.net)

If you already have your own domain name you can add it to Route 53. The following link provides detailed instructions on how to do this (http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html). If you do not have your own, you can just create a CloudFront distribution and it will give you a URL.

 

The goal of this step is to use CloudFront to map your URL (either your own www.example.com or the generated https://xxxxxxxxxx.cloudfront.net) to your hosted application in Beanstalk. In CloudFront you will define a Web Distribution and then, for that distribution, you will define an Origin. Origins in CloudFront represent backend services (i.e. S3 buckets which host static content or Beanstalk applications which host your Spring apps). Finally, you will create a Behavior that instructs CloudFront to map all requests of a certain URL pattern to a specific backend. For this first step we will map all requests (/*) to the Beanstalk instance. In future steps we will map all requests of the format (/api/*) to your Beanstalk instance and the rest (/*) will go to your S3 bucket. Below is an image of what the screen for creating a Behavior looks like.

[Figure: creating a CloudFront Behavior]

Create RDS Postgres instance and connect to Beanstalk

 

In this step we create a publicly accessible RDS instance and then connect to it from the pgAdmin tool to create the database. The SQL script and updated code can be found by pulling down the step1 branch as follows:

 

git clone -b step1 https://github.com/shannonlal/salesdemo

 

The SQL create script can be found at the following location:

src/resources/sql/createSalesTax-DB-Postgres.sql

 

Once your database is created you can rebuild your project with maven using the following command:

mvn clean install

 

Log back into your Amazon console and redeploy your latest WAR file. You will also need to add environment properties to your Beanstalk instance so it knows where to find your database. This can be done by clicking on Configuration, then Software Configuration, and adding them to Environment Properties.

[Figure: Beanstalk environment properties configuration]
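For reference, the services read those properties at runtime. A minimal sketch of what such a data source configuration could look like, assuming hypothetical property names DB_URL, DB_USER and DB_PASSWORD (the demo project's actual names may differ):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // Beanstalk hands Environment Properties to the application; depending
        // on the platform they arrive as environment variables or JVM system properties
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl(System.getenv("DB_URL")); // e.g. jdbc:postgresql://<rds-endpoint>:5432/salestax
        ds.setUsername(System.getenv("DB_USER"));
        ds.setPassword(System.getenv("DB_PASSWORD"));
        return ds;
    }
}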

If you reload your application you will see that it is now pulling the products from the database instance in AWS.

 

Create an S3 Bucket and deploy Static Content to it

 

In this step we are going to create an S3 bucket and move our static content (html, css, images, etc.) to it. To get the latest code, pull down the latest changes from git by running the following command:

 

 

git clone -b step2 https://github.com/shannonlal/salesdemo

 

Log back into the Amazon Console and click on S3. Click on Create Bucket and create a new bucket.

 

[Figure: creating an S3 bucket]

Once your bucket is created, click on Properties (upper right corner) and then on Static Website Hosting to enable hosting of content. Once your S3 bucket is ready you can transfer the static content of the project to S3. The content to transfer is in the following directory:

web/build/prod/
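If you prefer the command line over the console's upload dialog, the AWS CLI can push the folder in one go (the bucket name below is a placeholder):

aws s3 sync web/build/prod/ s3://your-bucket-name/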

Update Cloud Front to reflect new origins

We will need to update CloudFront to redirect requests to their appropriate origins. The first step is to log into CloudFront and create an Origin for your newly created bucket. Once your Origin has been created, you will need to modify the Behaviors so that your default Behavior (/*) now points to your static content in S3 and your API requests (/api/*) are redirected to your Elastic Beanstalk instance. The following is a diagram of the proposed changes to CloudFront.

[Figure: updated CloudFront origins and behaviors]

Redeploy Application

Once CloudFront has been updated and the status has changed to deployed, your static content, which is hosted in S3, will be accessible via your CloudFront URL. The only thing left to do is rebuild the sales demo application and redeploy it into Beanstalk. At this stage, all the front-end code (html, js, css) has been moved to the web directory and the backend functionality is in the services directory. To rebuild your application, run the Maven command in the services directory:

 

mvn clean install

 

Log back into the Amazon Console and redeploy your Beanstalk application with the new WAR.

The above architecture is a good starting point for anyone who is looking at migrating their Spring application to cloud-based microservices. As part of your migration I would suggest looking at incorporating an API Gateway. There is a series of open source and commercially available API Gateways (Amazon released their API Gateway in July 2015, membrane-soa.org, etc.). The API Gateway will sit between CloudFront and your backend services, handle authentication and access control, and redirect your requests to the appropriate Beanstalk instance. I have included a picture of the API Gateway below.

[Figure: an API Gateway sitting between CloudFront and the backend services]

Reactive Development Using Vert.x

Lately, it seems like we’re hearing about the latest and greatest frameworks for Java all the time. Tools like Ninja, SparkJava, and Play are all opinionated and make you feel like you need to redesign your entire application to make use of their wonderful features. That’s why I was so relieved when I discovered Vert.x. Vert.x isn’t a framework, it’s a toolkit; it’s un-opinionated and it’s liberating. Vert.x doesn’t want you to redesign your entire application to make use of it, it just wants to make your life easier. Can you write your entire application in Vert.x? Sure! Can you add Vert.x capabilities to your existing Spring/Guice/CDI applications? Yep! Can you use Vert.x inside of your existing JavaEE applications? Absolutely! And that’s what makes it amazing.

Background

Vert.x was born when Tim Fox decided that he liked a lot of what was being developed in the NodeJS ecosystem, but he didn’t like some of the trade-offs of working in V8: Single-threadedness, limited library support, and JavaScript itself. Tim set out to write a toolkit which was unopinionated about how and where it is used, and he decided that the best place to implement it was on the JVM. So, Tim and the community set out to create an event-driven, non-blocking, reactive toolkit which in many ways mirrored what could be done in NodeJS, but also took advantage of the power available inside of the JVM. Node.x was born and it later progressed to become Vert.x.

Overview

Vert.x is designed to implement an event bus, which is how different parts of the application can communicate in a non-blocking/thread-safe manner. Parts of it were modeled after the Actor methodology exhibited by Erlang and Akka. It is also designed to take full advantage of today’s multi-core processors and highly concurrent programming demands. As such, all Vert.x VERTICLES are implemented as single-threaded by default. Unlike NodeJS though, Vert.x can run MANY verticles in MANY threads. Additionally, you can specify that some verticles are “worker” verticles and CAN be multi-threaded. And to really add some icing on the cake, Vert.x has low level support for multi-node clustering of the event bus via the use of Hazelcast. It has gone on to include many other amazing features which are too numerous to list here, but you can read more in the official Vert.x docs.

The first thing you need to know about Vert.x is: similar to NodeJS, never block the current thread. Everything in Vert.x is set up, by default, to use callbacks/futures/promises. Instead of doing synchronous operations, Vert.x provides async methods for doing most I/O and processor-intensive operations which might block the current thread. Now, callbacks can be ugly and painful to work with, so Vert.x optionally provides an API based on RxJava which implements the same functionality using the Observer pattern. Finally, Vert.x makes it easy to use your existing classes and methods by providing the executeBlocking method on many of its asynchronous APIs. This means you can choose how you prefer to work with Vert.x instead of the toolkit dictating to you how it must be used.
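As a taste of what that looks like, here is a sketch of executeBlocking (the legacyDao call is a hypothetical stand-in for any blocking code):

vertx.executeBlocking(future -> {
    // runs on a worker thread, so blocking here is safe
    String result = legacyDao.loadSomething();
    future.complete(result);
}, res -> {
    // runs back on the event loop once the blocking work is done
    if (res.succeeded()) {
        System.out.println("Loaded: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});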

The second thing to know about Vert.x is that it is composed of verticles, modules, and nodes. Verticles are the smallest unit of logic in Vert.x, and are usually represented by a single class. Verticles should be simple and single-purpose, following the UNIX philosophy. A group of verticles can be put together into a module, which is usually packaged as a single JAR file. A module represents a group of related functionality which, when taken together, could represent an entire application or just a portion of a larger distributed application. Lastly, nodes are single instances of the JVM which are running one or more modules/verticles. Because Vert.x has clustering built in from the ground up, Vert.x applications can span nodes either on a single machine or across multiple machines in multiple geographic locations (though latency can hinder performance).
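For reference, a minimal verticle is just a single class (a sketch using the Vert.x 3 API):

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // a verticle that does one thing: answer HTTP requests
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("Hello from a verticle"))
             .listen(8080);
    }
}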

Example Project

Now, I’ve been to a number of Meetups and conferences lately where the first thing they show you when talking about reactive programming is to build a chat room application. That’s all well and good, but it doesn’t really help you to completely understand the power of reactive development. Chat room apps are simple and simplistic. We can do better. In this tutorial, we’re going to take a legacy Spring application and convert it to take advantage of Vert.x. This has multiple purposes: It shows that the toolkit is easy to integrate with existing Java projects, it allows us to take advantage of existing tools which may be entrenched parts of our ecosystem, and it also lets us follow the DRY principle in that we don’t have to rewrite large swathes of code to get the benefits of Vert.x.

Our legacy Spring application is a contrived simple example of a REST API using Spring Boot, Spring Data JPA, and Spring REST. The source code can be found in the “master” branch HERE. There are other branches which we will use to demonstrate the progression as we go, so it should be simple for anyone with a little experience with git and Java 8 to follow along. Let’s start by examining the Spring Configuration class for the stock Spring application.


@SpringBootApplication
@EnableJpaRepositories
@EnableTransactionManagement
@Slf4j
public class Application {
    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);

        System.out.println("Let's inspect the beans provided by Spring Boot:");

        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setGenerateDdl(true);

        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setPackagesToScan("com.zanclus.data.entities");
        factory.setDataSource(dataSource());
        factory.afterPropertiesSet();

        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(final EntityManagerFactory emf) {
        final JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(emf);
        return txManager;
    }
}

As you can see at the top of the class, we have some pretty standard Spring Boot annotations. You’ll also see an @Slf4j annotation, which is part of the lombok library and is designed to help reduce boiler-plate code. We also have @Bean annotated methods for providing access to the JPA EntityManager, the TransactionManager, and DataSource. Each of these items provides injectable objects for the other classes to use. The remaining classes in the project are similarly simplistic. There is a Customer POJO which is the Entity type used in the service. There is a CustomerDAO which is created via Spring Data. Finally, there is a CustomerEndpoints class which is the JAX-RS annotated REST controller.

As explained earlier, this is all standard fare in a Spring Boot application. The problem with this application is that, for the most part, it has limited scalability. You would either run this application inside of a Servlet container, or with an embedded server like Jetty or Undertow. Either way, each request ties up a thread and is thus wasting resources when it waits for I/O operations.

Switching over to the Convert-To-Vert.x-Web branch, we can see that the Application class has changed a little. We now have some new @Bean annotated methods for injecting the Vertx instance itself, as well as an instance of ObjectMapper (part of the Jackson JSON library). We have also replaced the CustomerEndpoints class with a new CustomerVerticle. Pretty much everything else is the same.

The CustomerVerticle class is annotated with @Component, which means that Spring will instantiate that class on startup. It also has its start method annotated with @PostConstruct so that the verticle is launched on startup. Looking at the actual content of the code, we see our first bit of Vert.x code: Router.

The Router class is part of the vertx-web library and allows us to use a fluent API to define HTTP URLs, methods, and header filters for our request handling. Adding the BodyHandler instance to the default route allows a POST/PUT body to be processed and converted to a JSON object which Vert.x can then process as part of the RoutingContext. The order of routes in Vert.x CAN be significant. If you define a route which has some sort of glob matching (* or regex), it can swallow requests for routes defined after it unless you implement chaining. Our example shows 3 routes initially.


    @PostConstruct
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.get("/v1/customer/:id")
                .produces("application/json")
                .blockingHandler(this::getCustomerById);
        router.put("/v1/customer")
                .consumes("application/json")
                .produces("application/json")
                .blockingHandler(this::addCustomer);
        router.get("/v1/customer")
                .produces("application/json")
                .blockingHandler(this::getAllCustomers);
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }

Notice that the HTTP method is defined, the “Content-Type” header is matched (via consumes), and the “Accept” header is matched (via produces). We also see that we are passing the handling of the request off via a call to the blockingHandler method. A blocking handler for a Vert.x route accepts a RoutingContext object as its only parameter. The RoutingContext holds the Vert.x Request object, Response object, and any parameters/POST body data (like “:id”). You’ll also see that I used method references rather than lambdas to insert the logic into the blockingHandler (I find it more readable). Each handler for the 3 request routes is defined in a separate method further down in the class. These methods basically just call the methods on the DAO, serialize or deserialize as needed, set some response headers, and end() the request by sending a response. Overall, pretty simple and straightforward.


    private void addCustomer(RoutingContext rc) {
        try {
            String body = rc.getBodyAsString();
            Customer customer = mapper.readValue(body, Customer.class);
            Customer saved = dao.save(customer);
            if (saved!=null) {
                rc.response().setStatusMessage("Accepted").setStatusCode(202).end(mapper.writeValueAsString(saved));
            } else {
                rc.response().setStatusMessage("Bad Request").setStatusCode(400).end("Bad Request");
            }
        } catch (IOException e) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", e);
        }
    }

    private void getCustomerById(RoutingContext rc) {
        log.info("Request for single customer");
        Long id = Long.parseLong(rc.request().getParam("id"));
        try {
            Customer customer = dao.findOne(id);
            if (customer==null) {
                rc.response().setStatusMessage("Not Found").setStatusCode(404).end("Not Found");
            } else {
                rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(dao.findOne(id)));
            }
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

    private void getAllCustomers(RoutingContext rc) {
        log.info("Request for all customers");
        List<Customer> customers = StreamSupport.stream(dao.findAll().spliterator(), false).collect(Collectors.toList());
        try {
            rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(customers));
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

“But this is more code and messier than my Spring annotations and classes”, you might say. That CAN be true, but it really depends on how you implement the code. This is meant to be an introductory example, so I left the code very simple and easy to follow. I COULD use an annotation library for Vert.x to implement the endpoints in a manner similar to JAX-RS. In addition, we have gained a massive scalability improvement. Under the hood, Vert.x Web uses Netty for low-level asynchronous I/O operations, thus providing us the ability to handle MANY more concurrent requests (limited by the size of the database connection pool).

We’ve already made some improvements to the scalability and concurrency of this application by using the Vert.x Web library, but we can improve things a little more by implementing the Vert.x EventBus. By separating the database operations into worker verticles instead of using blockingHandler, we can handle request processing more efficiently. This is shown in the Convert-To-Worker-Verticles branch. The application class has remained the same, but we have changed the CustomerEndpoints class and added a new class called CustomerWorker. In addition, we added a new library called Spring Vert.x Extension which provides Spring Dependency Injection support to Vert.x verticles. Start off by looking at the new CustomerEndpoints class.


    @PostConstruct
    public void start() throws Exception {
        log.info("Successfully create CustomerVerticle");
        DeploymentOptions deployOpts = new DeploymentOptions().setWorker(true).setMultiThreaded(true).setInstances(4);
        vertx.deployVerticle("java-spring:com.zanclus.verticles.CustomerWorker", deployOpts, res -> {
            if (res.succeeded()) {
                Router router = Router.router(vertx);
                router.route().handler(BodyHandler.create());
                final DeliveryOptions opts = new DeliveryOptions()
                        .setSendTimeout(2000);
                router.get("/v1/customer/:id")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getCustomer")
                                    .addHeader("id", rc.request().getParam("id"));
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                router.put("/v1/customer")
                        .consumes("application/json")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "addCustomer");
                            vertx.eventBus().send("com.zanclus.customer", rc.getBodyAsJson(), opts, reply -> handleReply(reply, rc));
                        });
                router.get("/v1/customer")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getAllCustomers");
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                vertx.createHttpServer().requestHandler(router::accept).listen(8080);
            } else {
                log.error("Failed to deploy worker verticles.", res.cause());
            }
        });
    }

The routes are the same, but the implementation code is not. Instead of using calls to blockingHandler, we have now implemented proper async handlers which send out events on the event bus. None of the database processing is happening in this verticle anymore. We have moved the database processing to a worker verticle which has multiple instances to handle multiple requests in parallel in a thread-safe manner. We are also registering a callback for when those events are replied to so that we can send the appropriate response to the client making the request.
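The handleReply callback referenced above isn’t shown in this excerpt; a minimal sketch of its likely shape (my assumption, not necessarily the project’s exact code) would be:

    private void handleReply(AsyncResult<Message<Object>> reply, RoutingContext rc) {
        if (reply.succeeded()) {
            // the worker already produced a JSON string, so just relay it
            rc.response()
                    .putHeader("Content-Type", "application/json")
                    .setStatusCode(200)
                    .end(reply.result().body().toString());
        } else {
            rc.response().setStatusCode(500).end("Server Error");
        }
    }

Now, in the CustomerWorker verticle we have implemented the database logic and error handling.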

@Override
public void start() throws Exception {
    vertx.eventBus().consumer("com.zanclus.customer").handler(this::handleDatabaseRequest);
}

public void handleDatabaseRequest(Message<Object> msg) {
    String method = msg.headers().get("method");

    DeliveryOptions opts = new DeliveryOptions();
    try {
        String retVal;
        switch (method) {
            case "getAllCustomers":
                retVal = mapper.writeValueAsString(dao.findAll());
                msg.reply(retVal, opts);
                break;
            case "getCustomer":
                Long id = Long.parseLong(msg.headers().get("id"));
                retVal = mapper.writeValueAsString(dao.findOne(id));
                msg.reply(retVal);
                break;
            case "addCustomer":
                retVal = mapper.writeValueAsString(
                                    dao.save(
                                            mapper.readValue(
                                                    ((JsonObject)msg.body()).encode(), Customer.class)));
                msg.reply(retVal);
                break;
            default:
                log.error("Invalid method '" + method + "'");
                opts.addHeader("error", "Invalid method '" + method + "'");
                msg.fail(1, "Invalid method");
        }
    } catch (IOException | NullPointerException e) {
        log.error("Problem parsing JSON data.", e);
        msg.fail(2, e.getLocalizedMessage());
    }
}

The CustomerWorker worker verticles register a consumer for messages on the event bus. The string which represents the address on the event bus is arbitrary, but it is recommended to use a reverse-tld style naming structure so that it is simple to ensure that the addresses are unique (“com.zanclus.customer”). Whenever a new message is sent to that address, it will be delivered to one, and only one, of the worker verticles. The worker verticle then calls handleDatabaseRequest to do the database work, JSON serialization, and error handling.

There you have it. You’ve seen that Vert.x can be integrated into your legacy applications to improve concurrency and efficiency without having to rewrite the entire application. We could have done something similar with an existing Google Guice or JavaEE CDI application. All of the business logic could remain relatively untouched while we brought in Vert.x to add reactive capabilities. The next steps are up to you. Some ideas for where to go next include clustering, WebSockets, and VertxRx for ReactiveX sugar.

Doing microservices with micro-infra-spring

We’ve been working at 4financeit for the last couple of months on some open source solutions for microservices. I will be publishing some articles related to microservices and our tools, and this is the first of (hopefully) many that I will write in the upcoming weeks (months?) on the Too much coding blog.

This article will be an introduction to the micro-infra-spring library showing how you can quickly set up a microservice using our tools.

Introduction 

Before you start it is crucial to remember that it’s not enough to just use our tools to have a microservice. You can check out my slides about microservices and issues that we have dealt with while adopting them at 4financeit.

4financeit microservices 12.2014 at Lodz JUG from Marcin Grzejszczak

Here you can find my video where I talk about microservices at 4finance (it’s from 19.09.2014, so it’s pretty outdated).


Also, it’s worth checking out Martin Fowler’s articles about microservices, Todd Hoff’s Microservices – Not A Free Lunch!, or The Strengths and Weaknesses of Microservices by Abel Avram.

Is monolith bad?

No, it isn’t! The most important thing to remember when starting with microservices is that they will complicate your life in terms of operations, metrics, deployment and testing. Of course they bring plenty of benefits, but if you are unsure of what to pick (monolith or microservices), then my advice is to go the monolith way.

All the benefits of microservices, like code autonomy, doing one thing well, and getting rid of package dependencies, can also be achieved in a monolithic code base, so try to write your applications with such approaches and your life will get simpler for sure. How to achieve that? That’s complicated, but here are a couple of hints I can give you:

  • try to do DDD. No, you don’t have DDD when your entities have methods. Try to use concepts of aggregate roots
  • try not to make dependencies on packages from different roots. If you have two different bounded contexts like com.blogspot.toomuchcoding.client and com.blogspot.toomuchcoding.loan – go for high cohesion and low coupling – emit events, call REST endpoints, send JMS messages or talk via a strictly defined API. Do not reuse the internals of those packages – take a look at the next point, which deals with encapsulation
  • take your high school notes and read about encapsulation again. Most of us make the mistake of thinking that if we make a field private and add an accessor to it then we have encapsulation. That’s not true! I really like the example of Slawek Sobotka (article in Polish) who shows a common approach to encapsulation:

    human.getStomach().getBowls().getContent().add(new Sausage())

    instead of

    human.eat(new Sausage())

  • add to your IDE’s class generation template that you want your new classes to be package scoped by default – what should be publicly available are interfaces and a really limited number of classes
  • start doing what’s crucial in terms of tracking microservice requests and measuring business and technical data in your own application: gather metrics, set up correlation ids for your messages, and add service discovery if you have multiple monoliths.

I’m a hipster – I want microservices!

Let’s assume that you know what you are doing, you evaluated all pros and cons and you want to go down the microservice way. You have a devops culture there in your company and people are eager to start working on multiple codebases. How to start? Pick our tools and you won’t regret it 😉

Clone a repo and get to work

We have set up a working template on Github with UI – boot-microservice-gui and without it – boot-microservice. If you clone our repo and start working with it you get a service that:
  • uses micro-infra-spring library
  • is written in Groovy
  • uses Spring Boot
  • is built with Gradle (set up for 4finance – but that’s really easy to change)
  • is JDK8 compliant
  • contains an example of a business scenario
All you have to do now is:
  • check out the slides above to see our approach to microservices
  • remove the packages com/ofg/twitter from src/main and src/test
  • alter microservice.json to support your requirements
  • write your code!
Why should you use our repo?
  • you don’t have to set up anything – we’ve already done it for you
  • the time required to start developing a feature is close to zero

Aren’t we duplicating Spring Cloud?

In fact, we’re not. We’re using Spring Cloud ourselves in our libraries (right now for property storage in a Git repository). We have some different approaches to service discovery, for instance, but in general we are extending Spring Cloud’s features rather than duplicating them.

Conclusions

If you want to go down the microservice way you have to be well aware of the issues related to that approach. If you know what you’re doing you can use our libraries and our microservice templates to have a fast start into feature development. 

What’s next

On my blog at toomuchcoding.blogspot.com I’ll write about different features of the micro-infra-spring library, with more emphasis on configuration of specific features that are not that well known but equally cool as the rest 😉 Also, I’ll write some articles on how we approached splitting the monolith, but you’ll have to wait some time for that 😉

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Thread local storage in Java

One of the lesser-known features among developers is thread-local storage. The idea is simple: the need for it arises in scenarios where we need data that is… well, local to the thread. If two threads refer to the same global variable, we want them to have separate values, initialized independently of each other.

Most major programming languages have an implementation of the concept. For example, C++11 even has the thread_local keyword, while Ruby has chosen an API approach.

Java has had an implementation of the concept since version 1.2, with java.lang.ThreadLocal<T> and its subclass java.lang.InheritableThreadLocal<T>, so nothing new and shiny here.
Let’s say that for some reason we need a Long specific to our thread. Using ThreadLocal, that would simply be:

public class ThreadLocalExample {

  public static class SomethingToRun implements Runnable {

    private ThreadLocal<Long> threadLocal = new ThreadLocal<>();

    @Override
    public void run() {
      System.out.println(Thread.currentThread().getName() + " " + threadLocal.get());

      try {
        Thread.sleep(2000);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
      }

      threadLocal.set(System.nanoTime());
      System.out.println(Thread.currentThread().getName() + " " + threadLocal.get());
    }
  }


  public static void main(String[] args) {
    SomethingToRun sharedRunnableInstance = new SomethingToRun();

    Thread thread1 = new Thread(sharedRunnableInstance);
    Thread thread2 = new Thread(sharedRunnableInstance);

    thread1.start();
    thread2.start();
  }

}
One possible sample run of the preceding code results in:


Thread-0 null
Thread-0 132466384576241
Thread-1 null
Thread-1 132466394296347
At the beginning the value is null for both threads. Each of them clearly works with a separate value: setting the value to System.nanoTime() on Thread-0 has no effect on the value seen by Thread-1, which is exactly what we wanted – a thread-scoped Long variable.

One nice side effect is the case where a thread calls multiple methods from various classes. They can all use the same thread-scoped variable without major API changes. Since the value is not explicitly passed around, one might argue this makes the code difficult to test and is bad for design, but that is a separate topic altogether.
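The subclass InheritableThreadLocal mentioned earlier additionally passes the parent thread’s value on to threads created from it. A minimal sketch:

public class InheritableThreadLocalExample {

  private static final InheritableThreadLocal<String> USER = new InheritableThreadLocal<>();

  public static void main(String[] args) {
    USER.set("alice");

    // The child thread gets a copy of the value the parent had at creation time.
    new Thread(() -> System.out.println("child sees: " + USER.get())).start();

    System.out.println("parent sees: " + USER.get());
  }
}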

In what areas are popular frameworks using Thread Locals?

Spring, being one of the most popular frameworks in Java, uses ThreadLocals internally in many parts, as a simple GitHub search easily shows. Most of the usages are related to the current user’s actions or information. This is actually one of the main uses for ThreadLocals in the Java EE world: storing information for the current request, like in RequestContextHolder:


private static final ThreadLocal<RequestAttributes> requestAttributesHolder =
    new NamedThreadLocal<RequestAttributes>("Request attributes");
Or the current JDBC connection user credentials in UserCredentialsDataSourceAdapter.

If we get back to RequestContextHolder, we can use this class to access all of the current request’s information from anywhere in our code.
A common use case for this is LocaleContextHolder, which helps us store the current user’s locale.
Mockito uses a ThreadLocal to store its current “global” configuration, and if we take a look at almost any framework out there, there is a high chance we’ll find one as well.
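For instance, grabbing the current HTTP request deep down the call stack boils down to something like this (a sketch – the helper class is made up, the Spring calls are real):

import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

import javax.servlet.http.HttpServletRequest;

public final class CurrentRequestUtils {

  // Only works on a thread that is currently processing a request;
  // otherwise currentRequestAttributes() throws an IllegalStateException.
  public static HttpServletRequest currentRequest() {
    ServletRequestAttributes attributes =
        (ServletRequestAttributes) RequestContextHolder.currentRequestAttributes();
    return attributes.getRequest();
  }
}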

Thread Locals and Memory Leaks

We have learned about this awesome little feature, so let’s use it all over the place! We can do that, but a few Google searches will show that many people out there say ThreadLocal is evil. That’s not exactly true: it is a nice utility, but in some contexts it is easy to create a memory leak with it.

“Can you cause unintended object retention with thread locals? Sure you can. But you can do this with arrays too. That doesn’t mean that thread locals (or arrays) are bad things. Merely that you have to use them with some care. The use of thread pools demands extreme care. Sloppy use of thread pools in combination with sloppy use of thread locals can cause unintended object retention, as has been noted in many places. But placing the blame on thread locals is unwarranted.” – Joshua Bloch

It is very easy to create a memory leak in your server code using ThreadLocal if it runs on an application server. A ThreadLocal value is associated with the thread where it is set, and will be garbage collected once the thread dies. Modern app servers, however, use a pool of threads instead of creating a new one for each request, meaning you can end up holding large objects indefinitely in your application. And since the thread pool belongs to the app server, the memory leak can remain even after we unload our application. The fix for this is simple: free up resources you do not need.
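In a servlet environment that typically means wrapping the request in try/finally, for example in a filter. Here is a sketch – the CONTEXT holder is a made-up stand-in for whatever you store per request:

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import java.io.IOException;

public class ContextCleanupFilter implements Filter {

  static final ThreadLocal<Object> CONTEXT = new ThreadLocal<>();

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    try {
      CONTEXT.set(new Object()); // whatever your per-request context is
      chain.doFilter(req, res);
    } finally {
      // Crucial on pooled threads: without this the value outlives the request.
      CONTEXT.remove();
    }
  }

  @Override
  public void init(FilterConfig filterConfig) {}

  @Override
  public void destroy() {}
}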

Another ThreadLocal misuse is in API design. I have often seen RequestContextHolder (which holds a ThreadLocal) used all over the place, in the DAO layer for example. If someone were later to call the same DAO methods outside a request – from a scheduler, for example – they would get a very bad surprise.
This creates black magic, and many maintenance developers who will eventually figure out where you live and pay you a visit. Even though the variables in a ThreadLocal are local to the thread, they are very much global in your code. So make sure you really need this thread scope before you use it.

More info on the topic

http://en.wikipedia.org/wiki/Thread-local_storage
http://www.appneta.com/blog/introduction-to-javas-threadlocal-storage/
https://plumbr.eu/blog/how-to-shoot-yourself-in-foot-with-threadlocals
http://stackoverflow.com/questions/817856/when-and-how-should-i-use-a-threadlocal-variable
https://plumbr.eu/blog/when-and-how-to-use-a-threadlocal
https://weblogs.java.net/blog/jjviana/archive/2010/06/09/dealing-glassfish-301-memory-leak-or-threadlocal-thread-pool-bad-ide
https://software.intel.com/en-us/articles/use-thread-local-storage-to-reduce-synchronization


Creating a REST API with Spring Boot and MongoDB

Spring Boot is an opinionated framework that simplifies the development of Spring applications. It frees us from the slavery of complex configuration files and helps us to create standalone Spring applications that don’t need an external servlet container.
This sounds almost too good to be true, but Spring Boot can really do all this.
This blog post demonstrates how easy it is to implement a REST API that provides CRUD operations for todo entries that are saved to a MongoDB database.
Let’s start by creating our Maven project.
Note: This blog post assumes that you have already installed the MongoDB database. If you haven’t done this, you can follow the instructions given in the blog post titled: Accessing Data with MongoDB.

Creating Our Maven Project

We can create our Maven project by following these steps:

  1. Use the spring-boot-starter-parent POM as the parent POM of our Maven project. This ensures that our project inherits sensible default settings from Spring Boot.
  2. Add the Spring Boot Maven Plugin to our project. This plugin allows us to package our application into an executable jar file, package it into a war archive, and run the application.
  3. Configure the dependencies of our project. We need to configure the following dependencies:
    • The spring-boot-starter-web dependency provides the dependencies of a web application.
    • The spring-data-mongodb dependency provides integration with the MongoDB document database.
  4. Enable the Java 8 Support of Spring Boot.
  5. Configure the main class of our application. This class is responsible for configuring and starting our application.

The relevant part of our pom.xml file looks as follows:

<properties>
    <!-- Enable Java 8 -->
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- Configure the main class of our Spring Boot application -->
    <start-class>com.javaadvent.bootrest.TodoAppConfig</start-class>
</properties>

<!-- Inherit defaults from Spring Boot -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.1.9.RELEASE</version>
</parent>

<dependencies>
    <!-- Get the dependencies of a web application -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Data MongoDB -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Spring Boot Maven Support -->
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>


Let’s move on and find out how we can configure our application.

Configuring Our Application

We can configure our Spring Boot application by following these steps:

  1. Create a TodoAppConfig class in the com.javaadvent.bootrest package.
  2. Enable Spring Boot auto-configuration.
  3. Configure the Spring container to scan for components in the child packages of the com.javaadvent.bootrest package.
  4. Add a main() method to the TodoAppConfig class and implement it by running our application.

The source code of the TodoAppConfig class looks as follows:

package com.javaadvent.bootrest;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class TodoAppConfig {

    public static void main(String[] args) {
        SpringApplication.run(TodoAppConfig.class, args);
    }
}


We have now created the configuration class that configures and runs our Spring Boot application. Because the MongoDB jars are found on the classpath, Spring Boot configures the MongoDB connection by using its default settings.
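If the defaults (localhost and port 27017) don’t fit your environment, you can override them in the application.properties file. For example (the property names come from Spring Boot; the values here are just illustrative):

spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=todos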

Let’s move on and implement our REST API.

Implementing Our REST API

We need to implement a REST API that provides CRUD operations for todo entries. The requirements of our REST API are:

  • A POST request sent to the URL ‘/api/todo’ must create a new todo entry by using the information found in the request body and return the information of the created todo entry.
  • A DELETE request sent to the URL ‘/api/todo/{id}’ must delete the todo entry whose id is found in the URL and return the information of the deleted todo entry.
  • A GET request sent to the URL ‘/api/todo’ must return all todo entries found in the database.
  • A GET request sent to the URL ‘/api/todo/{id}’ must return the information of the todo entry whose id is found in the URL.
  • A PUT request sent to the URL ‘/api/todo/{id}’ must update the information of an existing todo entry by using the information found in the request body and return the information of the updated todo entry.

We can fulfill these requirements by following these steps:

  1. Create the entity that contains the information of a single todo entry.
  2. Create the repository that is used to save todo entries to the MongoDB database and find todo entries from it.
  3. Create the service layer that is responsible for mapping DTOs into domain objects and vice versa. The purpose of our service layer is to isolate our domain model from the web layer.
  4. Create the controller class that processes HTTP requests and returns the correct response back to the client.

Note: This example is so simple that we could just inject our repository into our controller. However, because this is not a viable strategy when we are implementing real-life applications, we will add a service layer between the web and repository layers.
Let’s get started.

Creating the Entity

We need to create the entity class that contains the information of a single todo entry. We can do this by following these steps:

  1. Add the id, description, and title fields to the created entity class. Configure the id field of the entity by annotating the id field with the @Id annotation.
  2. Specify the constants (MAX_LENGTH_DESCRIPTION and MAX_LENGTH_TITLE) that specify the maximum length of the description and title fields.
  3. Add a static builder class to the entity class. This class is used to create new Todo objects.
  4. Add an update() method to the entity class. This method simply updates the title and description of the entity if valid values are given as method parameters.

The source code of the Todo class looks as follows:

import org.springframework.data.annotation.Id;

import static com.javaadvent.bootrest.util.PreCondition.isTrue;
import static com.javaadvent.bootrest.util.PreCondition.notEmpty;
import static com.javaadvent.bootrest.util.PreCondition.notNull;

final class Todo {

    static final int MAX_LENGTH_DESCRIPTION = 500;
    static final int MAX_LENGTH_TITLE = 100;

    @Id
    private String id;

    private String description;

    private String title;

    public Todo() {}

    private Todo(Builder builder) {
        this.description = builder.description;
        this.title = builder.title;
    }

    static Builder getBuilder() {
        return new Builder();
    }

    //Other getters are omitted

    public void update(String title, String description) {
        checkTitleAndDescription(title, description);

        this.title = title;
        this.description = description;
    }

    /**
     * We don't have to use the builder pattern here because the constructed
     * class has only two String fields. However, I use the builder pattern
     * in this example because it makes the code a bit easier to read.
     */
    static class Builder {

        private String description;

        private String title;

        private Builder() {}

        Builder description(String description) {
            this.description = description;
            return this;
        }

        Builder title(String title) {
            this.title = title;
            return this;
        }

        Todo build() {
            Todo build = new Todo(this);

            build.checkTitleAndDescription(build.getTitle(), build.getDescription());

            return build;
        }
    }

    private void checkTitleAndDescription(String title, String description) {
        notNull(title, "Title cannot be null");
        notEmpty(title, "Title cannot be empty");
        isTrue(title.length() <= MAX_LENGTH_TITLE,
                "Title cannot be longer than %d characters",
                MAX_LENGTH_TITLE
        );

        if (description != null) {
            isTrue(description.length() <= MAX_LENGTH_DESCRIPTION,
                    "Description cannot be longer than %d characters",
                    MAX_LENGTH_DESCRIPTION
            );
        }
    }
}

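The PreCondition class whose static methods the entity imports is not shown in the post; a minimal sketch of what it presumably looks like, derived from how Todo calls it, is:

public final class PreCondition {

    private PreCondition() {}

    public static void isTrue(boolean expression, String errorMessage, Object... args) {
        if (!expression) {
            throw new IllegalArgumentException(String.format(errorMessage, args));
        }
    }

    public static void notEmpty(String value, String errorMessage) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(errorMessage);
        }
    }

    public static void notNull(Object reference, String errorMessage) {
        if (reference == null) {
            throw new NullPointerException(errorMessage);
        }
    }
}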

Let’s move on and create the repository that communicates with the MongoDB database.

Creating the Repository

We have to create the repository interface that is used to save Todo objects to the MongoDB database and retrieve Todo objects from it.
If we didn’t want to use the Java 8 support of Spring Data, we could create our repository by creating an interface that extends the CrudRepository<T, ID> interface. However, because we want to use the Java 8 support, we have to follow these steps:

  1. Create an interface that extends the Repository<T, ID> interface.
  2. Add the following repository methods to the created interface:
    1. The void delete(Todo deleted) method deletes the todo entry that is given as a method parameter.
    2. The List<Todo> findAll() method returns all todo entries that are found from the database.
    3. The Optional<Todo> findOne(String id) method returns the information of a single todo entry. If no todo entry is found, this method returns an empty Optional.
    4. The Todo save(Todo saved) method saves a new todo entry to the database and returns the saved todo entry.

The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.Repository;

import java.util.List;
import java.util.Optional;

interface TodoRepository extends Repository<Todo, String> {

    void delete(Todo deleted);

    List<Todo> findAll();

    Optional<Todo> findOne(String id);

    Todo save(Todo saved);
}


Let’s move on and create the service layer of our example application.

Creating the Service Layer

First, we have to create a service interface that provides CRUD operations for todo entries. The source code of the TodoService interface looks as follows:

import java.util.List;

interface TodoService {

    TodoDTO create(TodoDTO todo);

    TodoDTO delete(String id);

    List<TodoDTO> findAll();

    TodoDTO findById(String id);

    TodoDTO update(TodoDTO todo);
}


The TodoDTO class is a DTO that contains the information of a single todo entry. We will talk more about it when we create the web layer of our example application.
Second, we have to implement the TodoService interface. We can do this by following these steps:

  1. Inject our repository into the service class by using constructor injection.
  2. Add a private Todo findTodoById(String id) method to the service class and implement it by either returning the found Todo object or throwing a TodoNotFoundException.
  3. Add a private TodoDTO convertToDTO(Todo model) method to the service class and implement it by converting the Todo object into a TodoDTO object and returning the created object.
  4. Add a private List<TodoDTO> convertToDTOs(List<Todo> models) method and implement it by converting the list of Todo objects into a list of TodoDTO objects and returning the created list.
  5. Implement the TodoDTO create(TodoDTO todo) method. This method creates a new Todo object, saves the created object to the MongoDB database, and returns the information of the created todo entry.
  6. Implement the TodoDTO delete(String id) method. This method finds the deleted Todo object, deletes it, and returns the information of the deleted todo entry. If no Todo object is found with the given id, this method throws a TodoNotFoundException.
  7. Implement the List<TodoDTO> findAll() method. This method retrieves all Todo objects from the database, transforms them into a list of TodoDTO objects, and returns the created list.
  8. Implement the TodoDTO findById(String id) method. This method finds the Todo object from the database, converts it into a TodoDTO object, and returns the created TodoDTO object. If no todo entry is found, this method throws a TodoNotFoundException.
  9. Implement the TodoDTO update(TodoDTO todo) method. This method finds the updated Todo object from the database, updates its title and description, saves it, and returns the updated information. If the updated Todo object is not found, this method throws a TodoNotFoundException.

The source code of the MongoDBTodoService looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Optional;

import static java.util.stream.Collectors.toList;

@Service
final class MongoDBTodoService implements TodoService {

    private final TodoRepository repository;

    @Autowired
    MongoDBTodoService(TodoRepository repository) {
        this.repository = repository;
    }

    @Override
    public TodoDTO create(TodoDTO todo) {
        Todo persisted = Todo.getBuilder()
                .title(todo.getTitle())
                .description(todo.getDescription())
                .build();
        persisted = repository.save(persisted);
        return convertToDTO(persisted);
    }

    @Override
    public TodoDTO delete(String id) {
        Todo deleted = findTodoById(id);
        repository.delete(deleted);
        return convertToDTO(deleted);
    }

    @Override
    public List<TodoDTO> findAll() {
        List<Todo> todoEntries = repository.findAll();
        return convertToDTOs(todoEntries);
    }

    private List<TodoDTO> convertToDTOs(List<Todo> models) {
        return models.stream()
                .map(this::convertToDTO)
                .collect(toList());
    }

    @Override
    public TodoDTO findById(String id) {
        Todo found = findTodoById(id);
        return convertToDTO(found);
    }

    @Override
    public TodoDTO update(TodoDTO todo) {
        Todo updated = findTodoById(todo.getId());
        updated.update(todo.getTitle(), todo.getDescription());
        updated = repository.save(updated);
        return convertToDTO(updated);
    }

    private Todo findTodoById(String id) {
        Optional<Todo> result = repository.findOne(id);
        return result.orElseThrow(() -> new TodoNotFoundException(id));
    }

    private TodoDTO convertToDTO(Todo model) {
        TodoDTO dto = new TodoDTO();

        dto.setId(model.getId());
        dto.setTitle(model.getTitle());
        dto.setDescription(model.getDescription());

        return dto;
    }
}

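The TodoNotFoundException thrown above is not shown in the post either; an assumed minimal version simply carries the id of the missing entry:

public class TodoNotFoundException extends RuntimeException {

    public TodoNotFoundException(String id) {
        super(String.format("No todo entry found with id: %s", id));
    }
}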

We have now created the service layer of our example application. Let’s move on and create the controller class.

Creating the Controller Class

First, we need to create the DTO class that contains the information of a single todo entry and specifies the validation rules that are used to ensure that only valid information can be saved to the database. The source code of the TodoDTO class looks as follows:

import org.hibernate.validator.constraints.NotEmpty;

import javax.validation.constraints.Size;

public final class TodoDTO {

    private String id;

    @Size(max = Todo.MAX_LENGTH_DESCRIPTION)
    private String description;

    @NotEmpty
    @Size(max = Todo.MAX_LENGTH_TITLE)
    private String title;

    //Constructor, getters, and setters are omitted
}


Second, we have to create the controller class that processes the HTTP requests sent to our REST API and sends the correct response back to the client. We can do this by following these steps:

  1. Inject our service into our controller by using constructor injection.
  2. Add a create() method to our controller and implement it by following these steps:
    1. Read the information of the created todo entry from the request body.
    2. Validate the information of the created todo entry.
    3. Create a new todo entry and return the created todo entry. Set the response status to 201.
  3. Implement the delete() method by delegating the id of the deleted todo entry forward to our service and return the deleted todo entry.
  4. Implement the findAll() method by finding the todo entries from the database and returning the found todo entries.
  5. Implement the findById() method by finding the todo entry from the database and returning the found todo entry.
  6. Implement the update() method by following these steps:
    1. Read the information of the updated todo entry from the request body.
    2. Validate the information of the updated todo entry.
    3. Update the information of the todo entry and return the updated todo entry.
  7. Create an @ExceptionHandler method that sets the response status to 404 if the todo entry was not found (TodoNotFoundException was thrown).

The source code of the TodoController class looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

import javax.validation.Valid;
import java.util.List;

@RestController
@RequestMapping("/api/todo")
final class TodoController {

    private final TodoService service;

    @Autowired
    TodoController(TodoService service) {
        this.service = service;
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.CREATED)
    TodoDTO create(@RequestBody @Valid TodoDTO todoEntry) {
        return service.create(todoEntry);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.DELETE)
    TodoDTO delete(@PathVariable("id") String id) {
        return service.delete(id);
    }

    @RequestMapping(method = RequestMethod.GET)
    List<TodoDTO> findAll() {
        return service.findAll();
    }

    @RequestMapping(value = "{id}", method = RequestMethod.GET)
    TodoDTO findById(@PathVariable("id") String id) {
        return service.findById(id);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.PUT)
    TodoDTO update(@RequestBody @Valid TodoDTO todoEntry) {
        return service.update(todoEntry);
    }

    @ExceptionHandler
    @ResponseStatus(HttpStatus.NOT_FOUND)
    public void handleTodoNotFound(TodoNotFoundException ex) {
    }
}


Note: If the validation fails, our REST API returns the validation errors as JSON and sets the response status to 400. If you want to know more about this, read the blog post titled: Spring from the Trenches: Adding Validation to a REST API.
That is it. We have now created a REST API that provides CRUD operations for todo entries and saves them to a MongoDB database. Let’s summarize what we learned from this blog post.

Summary

This blog post has taught us three things:

  • We can get the required dependencies with Maven by declaring only two dependencies: spring-boot-starter-web and spring-data-mongodb.
  • If we are happy with the default configuration of Spring Boot, we can configure our web application by using its auto-configuration support and “dropping” new jars onto the classpath.
  • We learned to create a simple REST API that saves information to a MongoDB database and finds information from it.

You can get the example application of this blog post from GitHub.

Spring’s @Primary annotation in action

Spring is a framework that never ceases to amaze me. That’s because it offers plenty of different solutions that allow us developers to complete our tasks without writing millions of lines of code, and instead do the same in a much more readable, standardized manner. In this post I will try to describe one of its features that is most likely well known to all of you, but whose importance is, in my opinion, undervalued. The feature that I’ll be talking about is the @Primary annotation.

The problem

On a couple of projects that I worked on we came across a common business problem – we had a point of entry to some more complex logic: a container that would gather the results of several other processors into a single output (something like the map-filter-reduce functions from functional programming). To some extent it resembled the Composite pattern. Putting it all together, our approach was as follows:

  1. We had a container with an autowired list of processors implementing a common interface
  2. Our container implemented the same interface as the elements of the autowired list
  3. We wanted the whole processing to be transparent to the client class using the container – it is interested only in the result
  4. The processors have some logic (a predicate) based on which a processor is applicable to the current set of input data
  5. The results of the processing were then combined into a list and reduced to a single output
There are numerous ways of dealing with this issue – I’ll present one that uses Spring with the @Primary annotation.

The solution

Let’s start by defining how our use case fits the aforementioned preconditions. Our set of data is a Person class that looks as follows:

Person.java

package com.blogspot.toomuchcoding.person.domain;

public final class Person {
    private final String name;
    private final int age;
    private final boolean stupid;

    public Person(String name, int age, boolean stupid) {
        this.name = name;
        this.age = age;
        this.stupid = stupid;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    public boolean isStupid() {
        return stupid;
    }
}

Nothing out of the ordinary. Now let us define the contract:

PersonProcessingService.java

package com.blogspot.toomuchcoding.person.service;

import com.blogspot.toomuchcoding.person.domain.Person;

public interface PersonProcessingService {
    boolean isApplicableFor(Person person);
    String process(Person person);
}

As stated in the preconditions, each implementation of PersonProcessingService has to define two points of the contract:

  1. whether it is applicable for the current Person
  2. how it processes a Person.

Now let’s take a look at some of the processors that we have – I won’t post all the code here because it’s pointless – you can check it out later on GitHub or on Bitbucket (a rough sketch of one processor follows the list below). We have the following @Component-annotated implementations of PersonProcessingService:

  • AgePersonProcessingService
    • is applicable if a Person’s age is greater than or equal to 18
    • returns a String containing “AGE” as processing takes place – that’s kind of silly but it’s just a demo right? 🙂
  • IntelligencePersonProcessingService
    • is applicable if a Person is stupid
    • returns a String containing “STUPID” as processing takes place
  • NamePersonProcessingService
    • is applicable if a Person has a name
    • returns a String containing “NAME” as processing takes place
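For a feel of their shape, the age processor presumably looks roughly like this (a sketch based on the descriptions above – check the repos for the real code):

AgePersonProcessingService.java

package com.blogspot.toomuchcoding.person.service;

import org.springframework.stereotype.Component;

import com.blogspot.toomuchcoding.person.domain.Person;

@Component
class AgePersonProcessingService implements PersonProcessingService {

    private static final int ADULT_AGE = 18;

    @Override
    public boolean isApplicableFor(Person person) {
        // the predicate deciding whether this processor applies
        return person.getAge() >= ADULT_AGE;
    }

    @Override
    public String process(Person person) {
        return "AGE";
    }
}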
The logic is fairly simple. Now, for a given Person, our container of PersonProcessingServices iterates over the processors, checks whether the current processor is applicable (filter), and if so adds the String resulting from processing the Person to the list of responses (map – a function converting a Person to a String), and finally joins those responses with a comma (reduce). Let’s check out how it’s done:

PersonProcessingServiceContainer.java

package com.blogspot.toomuchcoding.person.service;

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Component;

import com.blogspot.toomuchcoding.person.domain.Person;

@Component
@Primary
class PersonProcessingServiceContainer implements PersonProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonProcessingServiceContainer.class);

    @Autowired
    private List<PersonProcessingService> personProcessingServices = new ArrayList<PersonProcessingService>();

    @Override
    public boolean isApplicableFor(Person person) {
        return person != null;
    }

    @Override
    public String process(Person person) {
        List<String> output = new ArrayList<String>();
        for (PersonProcessingService personProcessingService : personProcessingServices) {
            if (personProcessingService.isApplicableFor(person)) {
                output.add(personProcessingService.process(person));
            }
        }
        String result = StringUtils.join(output, ",");
        LOGGER.info(result);
        return result;
    }

    public List<PersonProcessingService> getPersonProcessingServices() {
        return personProcessingServices;
    }
}


As you can see, we have a container that is annotated with @Primary, which means that whenever an implementation of PersonProcessingService has to be injected, Spring will pick the PersonProcessingServiceContainer. The cool thing is that we have an autowired list of PersonProcessingServices, which means that all other implementations of that interface will get autowired there (the container will not autowire itself into the list!).

Now let’s check out the Spock tests that prove that I’m not telling any lies. If you aren’t using Spock in your project already, you should move to it straight away 🙂

PersonProcessingServiceContainerIntegrationSpec.groovy

package com.blogspot.toomuchcoding.person.service

import com.blogspot.toomuchcoding.configuration.SpringConfiguration
import com.blogspot.toomuchcoding.person.domain.Person
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.test.context.ContextConfiguration
import spock.lang.Specification
import spock.lang.Unroll

import static org.hamcrest.CoreMatchers.notNullValue

@ContextConfiguration(classes = [SpringConfiguration])
class PersonProcessingServiceContainerIntegrationSpec extends Specification {

    @Autowired
    PersonProcessingService personProcessingService

    def "should autowire container even though there are many implementations of service"() {
        expect:
        personProcessingService instanceof PersonProcessingServiceContainer
    }

    def "the autowired container should not have itself in the list of autowired services"() {
        expect:
        personProcessingService instanceof PersonProcessingServiceContainer
        and:
        !(personProcessingService as PersonProcessingServiceContainer).personProcessingServices.findResult {
            it instanceof PersonProcessingServiceContainer
        }
    }

    def "should not be applicable for processing if a person doesn't exist"() {
        given:
        Person person = null
        expect:
        !personProcessingService.isApplicableFor(person)
    }

    def "should return an empty result for a person not applicable for anything"() {
        given:
        Person person = new Person("", 17, false)
        when:
        def result = personProcessingService.process(person)
        then:
        result notNullValue()
        result.isEmpty()
    }

    @Unroll("For name [#name], age [#age] and being stupid [#stupid] the result should contain keywords #keywords")
    def "should perform different processing depending on input"() {
        given:
        Person person = new Person(name, age, stupid)
        when:
        def result = personProcessingService.process(person)
        then:
        keywords.every {
            result.contains(it)
        }
        where:
        name  | age | stupid || keywords
        "jan" | 20  | true   || ['NAME', 'AGE', 'STUPID']
        ""    | 20  | true   || ['AGE', 'STUPID']
        ""    | 20  | false  || ['AGE']
        null  | 17  | true   || ['STUPID']
        "jan" | 17  | true   || ['NAME']
    }
}

The tests are pretty straightforward:

  1. We prove that the autowired field is in fact our container – the PersonProcessingServiceContainer.
  2. Then we show that we can’t find an object of PersonProcessingServiceContainer type in the collection of autowired implementations of PersonProcessingService.
  3. In the next two tests we prove that the logic behind our processors is working.
  4. Last but not least is Spock’s finest – the where clause that allows us to create beautiful parameterized tests.

Per module feature

Imagine the situation in which you have an implementation of the interface that is defined in your core module.

@Component
class CoreModuleClass implements SomeInterface {
...
}

What if you decide that your other module, which depends on the core module, shouldn’t use this CoreModuleClass, and you want some custom logic wherever SomeInterface is autowired? Well – use @Primary!

@Component
@Primary
class CountryModuleClass implements SomeInterface {
...
}

That way you can be sure that wherever SomeInterface has to be autowired, it will be your CountryModuleClass that gets injected into the field.
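So a plain injection point in the country module, sketched below, receives the @Primary bean without any qualifier gymnastics:

@Component
class SomeClient {

    private final SomeInterface someInterface;

    @Autowired
    SomeClient(SomeInterface someInterface) {
        // With both implementations on the classpath, Spring injects
        // the @Primary-annotated CountryModuleClass here.
        this.someInterface = someInterface;
    }
}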

Conclusion

In this post you have seen how to:

  • use the @Primary annotation to create a composite like container of interface implementations
  • use the @Primary annotation to provide a per module implementation of the interface that will take precedence over other @Components in terms of autowiring
  • write cool Spock tests 🙂

The code

You can find the code presented here on Too Much Coding’s Github repository or on Too Much Coding’s Bitbucket repository.