Decorator Design Pattern using lambdas

With the advent of lambdas in Java we now have a new tool to better design our code. Of course the first step is using streams, method references and other neat features introduced in Java 8.

Going forward I think the next step is to revisit the well-established Design Patterns and see them through a functional programming lens. For this purpose I’ll take the Decorator Pattern and implement it using lambdas.

We’ll take an easy and delicious example of the Decorator Pattern: adding toppings to pizza. Here is the standard implementation as suggested by GoF:

First we have the interface that defines our component:

public interface Pizza {
    String bakePizza();
}

We have a concrete component:

public class BasicPizza implements Pizza {
    @Override
    public String bakePizza() {
        return "Basic Pizza";
    }
}

We decide that we have to decorate our component in different ways, so we go with the Decorator Pattern. This is the abstract decorator:

public abstract class PizzaDecorator implements Pizza {
    private final Pizza pizza;
    
    protected PizzaDecorator(Pizza pizza) {
        this.pizza = pizza;
    }

    @Override
    public String bakePizza() {
        return pizza.bakePizza();
    }
}

We provide some concrete decorators for the component:

public class ChickenTikkaPizza extends PizzaDecorator {
    protected ChickenTikkaPizza(Pizza pizza) {
        super(pizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with chicken topping";
    }
}

public class ProsciuttoPizza extends PizzaDecorator {

    protected ProsciuttoPizza(Pizza pizza) {
        super(pizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with prosciutto";
    }
}

And this is the way to use the new structure:

Pizza pizza = new ChickenTikkaPizza(new BasicPizza());
String finishedPizza = pizza.bakePizza();   //Basic Pizza with chicken topping

pizza = new ChickenTikkaPizza(new ProsciuttoPizza(new BasicPizza()));
finishedPizza  = pizza.bakePizza();  //Basic Pizza with prosciutto with chicken topping

We can see that this can get very messy, and it did get very messy if we think about how we handle buffered readers in Java:

new DataInputStream(new BufferedInputStream(new FileInputStream(new File("myfile.txt"))))

Of course, you can split that across multiple lines, but that won’t solve the messiness, it will just spread it.
Now let’s see how we can do the same thing using lambdas.
We start with the same basic component objects:

public interface Pizza {
    String bakePizza();
}

public class BasicPizza implements Pizza {
    @Override
    public String bakePizza() {
        return "Basic Pizza";
    }
}

But now instead of declaring an abstract class that will provide the template for decorations, we will create the decorator that asks the user for functions that will decorate the component.

import java.util.function.Function;
import java.util.stream.Stream;

public class PizzaDecorator {
    private final Function<Pizza, Pizza> toppings;

    @SafeVarargs
    private PizzaDecorator(Function<Pizza, Pizza>... desiredToppings) {
        // chain all decorations into a single Pizza -> Pizza function
        this.toppings = Stream.of(desiredToppings)
                .reduce(Function.identity(), Function::andThen);
    }

    @SafeVarargs
    public static String bakePizza(Pizza pizza, Function<Pizza, Pizza>... desiredToppings) {
        return new PizzaDecorator(desiredToppings).bakePizza(pizza);
    }

    private String bakePizza(Pizza pizza) {
        return this.toppings.apply(pizza).bakePizza();
    }
}

There is this line that constructs the chain of decorations to be applied:

Stream.of(desiredToppings).reduce(identity(), Function::andThen);

This line of code will take your decorations (which are of Function type) and chain them using andThen. This is the same as

(currentToppings, nextTopping) -> currentToppings.andThen(nextTopping)

and it ensures that the functions are called one after another, in the order you provided them.
Also, Function.identity() is simply the elem -> elem lambda expression.
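
To see the ordering at work, here is a tiny standalone sketch (not part of the pizza code): reducing with identity() and andThen applies the functions left to right, in the order they were passed in.

Function<String, String> first = s -> s + " first";
Function<String, String> second = s -> s + " second";

Function<String, String> chained = Stream.of(first, second)
        .reduce(Function.identity(), Function::andThen);

chained.apply("start");   // "start first second"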

Ok, now where will we define our decorations? You can add them as static methods in PizzaDecorator or even in the interface:

public interface Pizza {
    String bakePizza();

    static Pizza withChickenTikka(Pizza pizza) {
        return new Pizza() {
            @Override
            public String bakePizza() {
                return pizza.bakePizza() + " with chicken";
            }
        };
    }

    static Pizza withProsciutto(Pizza pizza) {
        return new Pizza() {
            @Override
            public String bakePizza() {
                return pizza.bakePizza() + " with prosciutto";
            }
        };
    }
}
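
As a side note, since Pizza has a single abstract method it is effectively a functional interface, so each of these static methods could be written even more compactly as a lambda, for example:

static Pizza withChickenTikka(Pizza pizza) {
    return () -> pizza.bakePizza() + " with chicken";
}

static Pizza withProsciutto(Pizza pizza) {
    return () -> pizza.bakePizza() + " with prosciutto";
}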

And now, this is how this pattern gets to be used:

String finishedPizza = PizzaDecorator.bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

//And if you statically import PizzaDecorator.bakePizza:

String finishedPizza = bakePizza(new BasicPizza(), Pizza::withChickenTikka, Pizza::withProsciutto);

As you can see, the code got clearer and more concise, and we didn’t use inheritance to build our decorators.

This is just one of the many design patterns that can be improved using lambdas. There are more features that can be used to improve the rest of them, like using partial application (currying) to implement the Adapter Pattern.

I hope I got you thinking about adopting a more functional programming approach to your development style.


Bibliography:

The decorator example was inspired by Gang of Four – Decorate with Decorator Design Pattern article

The refactoring method was inspired by the following Devoxx 2015 talks (which I recommend watching as they treat the subject at large):
Design Pattern Reloaded by Remi Forax
Design Patterns in the Light of Lambda Expressions by Venkat Subramaniam

 

Effective UI tests with Selenide

Waiting for miracles

Christmas is a time for miracles. On the eve of the new year we all make plans for the next one. And we hope that our problems will stay behind in the old year, and that a miracle will happen in the coming one.

Every Java developer dreams about a miracle that lets him become The Most Effective Java Developer in the world.

I want to show you such a miracle.

It’s called automated tests!

Ugh, tests?

Yes. You will not become a real master thanks to micro/pico/nano services. You will become a real master thanks to discipline: the discipline that says a developer reports a job as done only when both the code and the tests are written and passing.

But, isn’t testing boring?

Oh no, believe me! Writing fast and stable automated tests is a great challenge for the smartest heads. And it can be very fun and interesting. You only need to use the right tools.

The right tool for writing UI tests is:

Selenide

Selenide is an open-source library for writing concise and stable UI tests.

Selenide is an ideal choice for software developers because it has a very low learning curve. You don’t need to bother with browser details or with all those typical Ajax and timing issues that eat up most of a QA automation engineer’s time.

Let’s look at the simplest Selenide test:

import org.junit.Test;
import org.openqa.selenium.By;

import static com.codeborne.selenide.CollectionCondition.size;
import static com.codeborne.selenide.Condition.*;
import static com.codeborne.selenide.Selenide.*;

public class GoogleTest {
  @Test
  public void user_can_search_everything_in_google() {
    open("http://google.com/ncr");
    $(By.name("q")).val("selenide").pressEnter();

    $$("#ires .g").shouldHave(size(10));

    $("#ires .g").shouldBe(visible).shouldHave(
        text("Selenide: concise UI tests in Java"),
        text("selenide.org"));
  }
}

Let’s look closer at what happens here.

  • You open a browser with just one command open(url)
  • You find an element on a page with command $.
    You can find element by name, ID, CSS selector, attributes, xpath and even by text.
  • You manipulate the element: enter some text with val() and press enter with (surprise-surprise!) pressEnter().
  • You check the results: you find all the results with $$ (it returns a collection of all matched elements), then check the size and content of the collection.

Isn’t this test easy to read? Isn’t this test easy to write?

I believe it is.

Deeper into details

Ajax/timing problems

Nowadays web applications are dynamic. Every single piece of an application can be rendered or changed dynamically at any moment. This creates a lot of problems for automated tests. A test that is green today can suddenly become red at any moment, just because the browser executed some JavaScript a little longer than usual.

It’s a real pain in the ajjaxx.

Quite unbelievably, Selenide resolves most of these problems in a very simple way.

Simply said, every Selenide method waits a little bit if needed. People call it “smart waiting”.

When you write

$("#menu").shouldHave(text("Hello"));

Selenide checks if the element exists and contains text “Hello”.

If not yet, Selenide assumes that probably the element will be updated dynamically soon, and waits a little bit until it happens. The default timeout is 4 seconds, which is typically enough for most web applications. And of course, it’s configurable.
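
For example, if your application is slower than that, you can raise the timeout globally; a minimal sketch using Selenide’s Configuration class:

import com.codeborne.selenide.Configuration;

// wait up to 10 seconds instead of the default 4
Configuration.timeout = 10000;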

Rich set of matchers

You can check pretty much everything with Selenide, using the “smart waiting” mechanism mentioned above.

For example, you can check that an element exists. If it doesn’t exist yet, Selenide will wait up to 4 seconds.

$(".loading_progress").shouldBe(visible);

You can even check that an element does not exist. If it still exists, Selenide will wait up to 4 seconds for it to disappear.

$(By.name("gender")).should(disappear);

And you can use fluent API and chain methods to make your tests really concise:

$("#menu")
  .shouldHave(text("Hello"), text("John!"))
  .shouldBe(enabled, selected);

Collections

Selenide allows you to work with collections, thus checking a lot of elements with one line of code.

For example, you can check that there are exactly N elements on a page:

$$(".error").shouldHave(size(3));

You can find a subset of a collection:

$$("#employees tbody tr")
  .filter(visible)
  .shouldHave(size(4));

You can check the texts of elements. In most cases, it’s sufficient to check a whole table or table row:

$$("#employees tbody tr").shouldHave(
  texts(
      "John Belushi",
      "Bruce Willis",
      "John Malkovich"
  )
);

Upload/download files

It’s pretty easy to upload a file with Selenide:

$("#cv").uploadFile(new File("cv.doc"));

You can even upload multiple files at once:

$("#cv").uploadFile(
  new File("cv1.doc"),
  new File("cv2.doc"),
  new File("cv3.doc")
);

And it’s unbelievably simple to download a file:

File pdf = $(".btn#cv").download();

Testing “highly dynamic” web applications

Some web frameworks (e.g. GWT) generate HTML that is absolutely unreadable. Elements do not have constant IDs or names.

It’s a real pain in the xpathh.

Selenide suggests resolving this problem by searching for elements by text.

import static com.codeborne.selenide.Selectors.*;

$(byText("Hello, Devoxx!"))     // find by the whole text
   .shouldBe(visible);

$(withText("oxx"))              // find by substring
   .shouldHave(text("Hello, Devoxx!"));

Searching by text is not a bad idea at all. In fact, I like it because it emulates the behaviour of a real user. A real user doesn’t find buttons by ID or XPath; they find them by text (or, well, color).

Another useful set of Selenide methods allows you to navigate between parents and children.

$("td").parent()
$("td").closest("tr")
$(".btn").closest(".modal")
$("div").find(By.name("q"))

For example, you can find a table cell by text, then go up to its closest tr ancestor, and find the “Save” button inside that table row:

$("table#employees")
  .find(byText("Joshua"))
  .closest("tr.employee")
  .find(byValue("Save"))
  .click();

… And many other functions

Selenide has many more functions, like:

$("div").scrollTo();
$("div").innerText();
$("div").innerHtml();
$("div").exists();
$("select").isImage();
$("select").getSelectedText();
$("select").getSelectedValue();
$("div").doubleClick();
$("div").contextClick();
$("div").hover();
$("div").dragAndDrop()
zoom(2.5);
...

but the good news is that you don’t need to remember all this stuff. Just type $, then a dot, and choose from the options suggested by your IDE.

Use the power of IDE! Concentrate on business logic.


Make the world better

I believe the world will get better when all developers start writing automated tests for their code. When developers can get up at 17:00 and go home to their children without fearing that their last changes broke something.

Let’s make the world better by writing automated tests!

Deliver working software.

Andrei Solntsev

selenide.org

An introduction to Spark, your next REST Framework for Java

I hope you’re having a great Java Advent this year! Today we’re going to look into a refreshing, simple, nice and pragmatic framework for writing REST applications in Java. It will be so simple, it won’t even seem like Java at all.

We’re going to look into the Spark web framework. No, it’s not related to Apache Spark. Yes, it’s unfortunate that they share the same name.

I think the best way to understand this framework is to build a simple application, so we’ll build a simple service to perform mathematical operations.

We could use it like this:

[screenshot: a browser requesting http://localhost:4567/10/add/8]

Note that the service is running on localhost at port 4567 and the resource requested is “/10/add/8”.

Set up the Project Using Gradle (what’s Gradle?)

apply plugin: "java"
apply plugin: "idea"

sourceCompatibility = 1.8

repositories {
    mavenCentral()
    maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
    maven { url "https://oss.sonatype.org/content/repositories/releases/" }     
}

dependencies {
    compile "com.javaslang:javaslang:2.0.0-RC1"
    compile "com.sparkjava:spark-core:2.3"
    compile "com.google.guava:guava:19.0-rc2"
    compile "org.projectlombok:lombok:1.16.6"
    testCompile group: 'junit', name: 'junit', version: '4.+'
}

task launch(type:JavaExec) {
    main = "me.tomassetti.javaadvent.SparkService"
    classpath = sourceSets.main.runtimeClasspath
}

Now we can run:

  • ./gradlew idea to generate an IntelliJ IDEA project
  • ./gradlew test to run tests
  • ./gradlew assemble to build the project
  • ./gradlew launch to start our service

Great. Now, Let’s Meet Spark

Do you think we can write a fully functional web service that performs basic mathematical operations in less than 25 lines of Java code? No way? Well, think again:

// imports omitted

class Calculator implements Route {

    private Map<String, Function2<Long, Long, Long>> functions = ImmutableMap.of(
            "add", (a, b) -> a + b,
            "mul", (a, b) -> a * b,
            "div", (a, b) -> a / b,
            "sub", (a, b) -> a - b);

    @Override
    public Object handle(Request request, Response response) throws Exception {
        long left = Long.parseLong(request.params(":left"));
        String operatorName = request.params(":operator");
        long right = Long.parseLong(request.params(":right"));
        return functions.get(operatorName).apply(left, right);
    }
}

public class SparkService {
    public static void main(String[] args) {
        get("/:left/:operator/:right", new Calculator());
    }
}

In our main method we just say that when we get a request which contains three parts (separated by slashes) we should use the Calculator route, which is our only route. A route in Spark is the unit which takes a request, processes it, and produces a response.
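
Since Route is a functional interface in Spark 2.x, a route can also be written inline as a lambda. A minimal sketch (the path and greeting are just placeholders):

import static spark.Spark.get;

public class HelloService {
    public static void main(String[] args) {
        // a route is just a function from (request, response) to a result
        get("/hello/:name", (request, response) -> "Hello, " + request.params(":name"));
    }
}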

Our calculator is where the magic happens. It looks in the request for the parameters “left”, “operator” and “right”. Left and right are parsed as long values, while the operator name is used to look up the operation. For each operation we have a function (a Function2<Long, Long, Long>) which we then apply to our values (left and right). Cool, eh?

Function2 is an interface which comes from the Javaslang project.
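
As a quick aside (a sketch, not part of the service), a Function2 is simply a two-argument function: you invoke it with apply, and it can also be curried for partial application:

import javaslang.Function2;

public class Function2Demo {
    public static void main(String[] args) {
        Function2<Long, Long, Long> add = (a, b) -> a + b;

        System.out.println(add.apply(3L, 4L));                  // 7
        System.out.println(add.curried().apply(3L).apply(4L));  // 7, via partial application
    }
}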

You can now start the service (./gradlew launch, remember?) and play around.

The last time I checked Java was more verbose, redundant, slow… well, it is healing now.

Ok, but what about tests?

So Java can actually be quite concise, and as a Software Engineer I celebrate that for a minute or two, but shortly after I start to feel uneasy… this stuff has no tests! Worse than that, it doesn’t look testable at all. The logic is in our calculator class, but it takes a Request and produces a Response. I don’t want to instantiate a Request just to check if my Calculator works as intended. Let’s refactor a little:

class TestableCalculator implements Route {

    private Map<String, Function2<Long, Long, Long>> functions = ImmutableMap.of(
            "add", (a, b) -> a + b,
            "mul", (a, b) -> a * b,
            "div", (a, b) -> a / b,
            "sub", (a, b) -> a - b);

    public long calculate(String operatorName, long left, long right) {
        return functions.get(operatorName).apply(left, right);
    }

    @Override
    public Object handle(Request request, Response response) throws Exception {
        long left = Long.parseLong(request.params(":left"));
        String operatorName = request.params(":operator");
        long right = Long.parseLong(request.params(":right"));
        return calculate(operatorName, left, right);
    }
}

We just separate the plumbing (taking the values out of the request) from the logic, and put the logic in its own method: calculate. Now we can test calculate.

public class TestableLogicCalculatorTest {

    @Test
    public void testLogic() {
        assertEquals(10, new TestableCalculator().calculate("add", 3, 7));
        assertEquals(-6, new TestableCalculator().calculate("sub", 7, 13));
        assertEquals(3, new TestableCalculator().calculate("mul", 3, 1));
        assertEquals(0, new TestableCalculator().calculate("div", 0, 7));
    }

    @Test(expected = ArithmeticException.class)
    public void testInvalidInputs() {
        assertEquals(0, new TestableCalculator().calculate("div", 0, 0));
    }

}

I feel better now: our tests prove that this stuff works. Sure, it will throw an exception if we try to divide by zero, but that’s how it is.

What does that mean for the user, though?

[screenshot: the browser showing an HTTP 500 Internal Server Error]

It means this: a 500. And what happens if the user tries to use an operation which does not exist?

[screenshot: the error response for an operation which does not exist]

What if the values are not proper numbers?

[screenshot: the error response when the values are not proper numbers]

Ok, this doesn’t seem very professional. Let’s fix it.

Error handling, functional style

To fix two of the cases we just have to use one feature of Spark: we can map specific exceptions to specific handlers. Those handlers will produce a meaningful HTTP status code and a proper message.

public class SparkService {
    public static void main(String[] args) {
        exception(NumberFormatException.class, (e, req, res) -> res.status(404));
        exception(ArithmeticException.class, (e, req, res) -> {
            res.status(400);
            res.body("This does not seem like a good idea");
        });
        get("/:left/:operator/:right", new ReallyTestableCalculator());
    }
}

We still have to handle the case of a non-existent operation, and this is something we are going to do in ReallyTestableCalculator.

To do so we’ll use a typical functional pattern: we’ll return an Either. An Either is a container which holds either a left or a right value. The left typically carries some information about an error, like an error code or an error message. If nothing goes wrong, the Either holds a right value, which could be all sorts of things. In our case we will return an Error (a class we defined) if the operation cannot be executed; otherwise we will return the result of the operation as a Long. So we will return an Either<Error, Long>.
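
To make that concrete, here is a tiny sketch of how such an Either behaves, reusing the Error class defined in the project:

Either<Error, Long> ok = Either.right(18L);
Either<Error, Long> failed = Either.left(new Error(404, "Unknown math operation"));

ok.isRight();           // true
ok.get();               // 18
failed.isRight();       // false
failed.left().get();    // the Error carrying the 404 code

With that in mind, here is the full ReallyTestableCalculator: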

package me.tomassetti.javaadvent.calculators;

import javaslang.Function2;
import javaslang.Tuple2;
import javaslang.collection.Map;
import javaslang.collection.HashMap;
import javaslang.control.Either;
import spark.Request;
import spark.Response;
import spark.Route;

public class ReallyTestableCalculator implements Route {
    
    private static final int NOT_FOUND = 404;

    private Map<String, Function2<Long, Long, Long>> functions = HashMap.ofAll(
            new Tuple2<>("add", (a, b) -> a + b),
            new Tuple2<>("mul", (a, b) -> a * b),
            new Tuple2<>("div", (a, b) -> a / b),
            new Tuple2<>("sub", (a, b) -> a - b));

    public Either<Error, Long> calculate(String operatorName, long left, long right) {
        Either<Error, Long> unknownOp = Either.<Error, Long>left(new Error(NOT_FOUND, "Unknown math operation"));
        return functions.get(operatorName).map(f -> Either.<Error, Long>right(f.apply(left, right)))
                .orElse(unknownOp);
    }

    @Override
    public Object handle(Request request, Response response) throws Exception {
        long left = Long.parseLong(request.params(":left"));
        String operatorName = request.params(":operator");
        long right = Long.parseLong(request.params(":right"));
        Either<Error, Long> res =  calculate(operatorName, left, right);
        if (res.isRight()) {
            return res.get();
        } else {
            response.status(res.left().get().getHttpCode());
            return null;
        }
    }
}

Let’s test this:

package me.tomassetti.javaadvent;

import javaslang.control.Either;
import me.tomassetti.javaadvent.calculators.ReallyTestableCalculator;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class ReallyTestableLogicCalculatorTest {

    @Test
    public void testLogic() {
        assertEquals(Either.right(10L), new ReallyTestableCalculator().calculate("add", 3, 7));
        assertEquals(Either.right(-6L), new ReallyTestableCalculator().calculate("sub", 7, 13));
        assertEquals(Either.right(3L), new ReallyTestableCalculator().calculate("mul", 3, 1));
        assertEquals(Either.right(0L), new ReallyTestableCalculator().calculate("div", 0, 7));
    }

    @Test(expected = ArithmeticException.class)
    public void testInvalidOperation() {
        Either<me.tomassetti.javaadvent.calculators.Error, Long> res = new ReallyTestableCalculator().calculate("div", 0, 0);
        assertEquals(true, res.isLeft());
        assertEquals(400, res.left().get().getHttpCode());
    }

    @Test
    public void testUnknownOperation() {
        Either<me.tomassetti.javaadvent.calculators.Error, Long> res = new ReallyTestableCalculator().calculate("foo", 0, 0);
        assertEquals(true, res.isLeft());
        assertEquals(404, res.left().get().getHttpCode());
    }

}

The result

We got a service that can be easily tested. It performs mathematical operations. It supports the four basic operations, but it could be easily extended to support more. Errors are handled and the appropriate HTTP codes are used: 400 for bad inputs and 404 for unknown operations or values.

Conclusions

When I first saw Java 8 I was happy about the new features, but not very excited. However, after a few months I am seeing new frameworks come up which are based on these new features and have the potential to really change how we program in Java. Stuff like Spark and Javaslang is making the difference. I think that now Java can remain simple and solid while becoming much more agile and productive.

You can find many more tutorials either on the Spark tutorials website or on my blog tomassetti.me .

Reactive Development Using Vert.x

Lately, it seems like we’re hearing a lot about the latest and greatest frameworks for Java: tools like Ninja, SparkJava, and Play. But each one is opinionated and makes you feel like you need to redesign your entire application to make use of their wonderful features. That’s why I was so relieved when I discovered Vert.x. Vert.x isn’t a framework, it’s a toolkit, and it’s unopinionated and liberating. Vert.x doesn’t want you to redesign your entire application to make use of it, it just wants to make your life easier. Can you write your entire application in Vert.x? Sure! Can you add Vert.x capabilities to your existing Spring/Guice/CDI applications? Yep! Can you use Vert.x inside of your existing JavaEE applications? Absolutely! And that’s what makes it amazing.

Background

Vert.x was born when Tim Fox decided that he liked a lot of what was being developed in the NodeJS ecosystem, but he didn’t like some of the trade-offs of working in V8: Single-threadedness, limited library support, and JavaScript itself. Tim set out to write a toolkit which was unopinionated about how and where it is used, and he decided that the best place to implement it was on the JVM. So, Tim and the community set out to create an event-driven, non-blocking, reactive toolkit which in many ways mirrored what could be done in NodeJS, but also took advantage of the power available inside of the JVM. Node.x was born and it later progressed to become Vert.x.

Overview

Vert.x is designed to implement an event bus, which is how different parts of the application can communicate in a non-blocking, thread-safe manner. Parts of it were modeled after the Actor methodology exhibited by Erlang and Akka. It is also designed to take full advantage of today’s multi-core processors and highly concurrent programming demands. As such, by default, all Vert.x VERTICLES are implemented as single-threaded. Unlike NodeJS though, Vert.x can run MANY verticles in MANY threads. Additionally, you can specify that some verticles are “worker” verticles and CAN be multi-threaded. And to really add some icing on the cake, Vert.x has low-level support for multi-node clustering of the event bus via the use of Hazelcast. It has gone on to include many other amazing features which are too numerous to list here, but you can read more in the official Vert.x docs.

The first thing you need to know about Vert.x is: similar to NodeJS, never block the current thread. Everything in Vert.x is set up, by default, to use callbacks/futures/promises. Instead of doing synchronous operations, Vert.x provides async methods for doing most I/O and processor-intensive operations which might block the current thread. Now, callbacks can be ugly and painful to work with, so Vert.x optionally provides an API based on RxJava which implements the same functionality using the Observer pattern. Finally, Vert.x makes it easy to use your existing classes and methods by providing the executeBlocking method on many of its asynchronous APIs. This means you can choose how you prefer to work with Vert.x instead of the toolkit dictating to you how it must be used.
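
As a rough sketch of what that looks like in Vert.x 3 (someBlockingCall here is a hypothetical blocking operation, not a real API):

// Run blocking code off the event loop and receive the result asynchronously.
vertx.executeBlocking(future -> {
    // this body runs on a worker thread, so blocking here is fine
    String result = someBlockingCall();   // hypothetical blocking operation
    future.complete(result);
}, res -> {
    // back on the event loop with an AsyncResult
    if (res.succeeded()) {
        System.out.println("Got: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});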

The second thing to know about Vert.x is that it is composed of verticles, modules, and nodes. Verticles are the smallest unit of logic in Vert.x, and are usually represented by a single class. Verticles should be simple and single-purpose, following the UNIX philosophy. A group of verticles can be put together into a module, which is usually packaged as a single JAR file. A module represents a group of related functionality which, taken together, could represent an entire application or just a portion of a larger distributed application. Lastly, nodes are single instances of the JVM which are running one or more modules/verticles. Because Vert.x has clustering built in from the ground up, Vert.x applications can span nodes either on a single machine or across multiple machines in multiple geographic locations (though latency can hinder performance).
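
A verticle itself can be as small as a single class. A minimal Vert.x 3 verticle might look roughly like this (the event bus address is arbitrary):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // register a consumer on the event bus; other verticles (or nodes) can send to this address
        vertx.eventBus().consumer("com.example.hello", msg -> msg.reply("Hello, " + msg.body()));
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}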

Example Project

Now, I’ve been to a number of Meetups and conferences lately where the first thing they show you when talking about reactive programming is to build a chat room application. That’s all well and good, but it doesn’t really help you to completely understand the power of reactive development. Chat room apps are simple and simplistic. We can do better. In this tutorial, we’re going to take a legacy Spring application and convert it to take advantage of Vert.x. This has multiple purposes: It shows that the toolkit is easy to integrate with existing Java projects, it allows us to take advantage of existing tools which may be entrenched parts of our ecosystem, and it also lets us follow the DRY principle in that we don’t have to rewrite large swathes of code to get the benefits of Vert.x.

Our legacy Spring application is a contrived simple example of a REST API using Spring Boot, Spring Data JPA, and Spring REST. The source code can be found in the “master” branch HERE. There are other branches which we will use to demonstrate the progression as we go, so it should be simple for anyone with a little experience with git and Java 8 to follow along. Let’s start by examining the Spring Configuration class for the stock Spring application.


@SpringBootApplication
@EnableJpaRepositories
@EnableTransactionManagement
@Slf4j
public class Application {
    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);

        System.out.println("Let's inspect the beans provided by Spring Boot:");

        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setGenerateDdl(true);

        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setPackagesToScan("com.zanclus.data.entities");
        factory.setDataSource(dataSource());
        factory.afterPropertiesSet();

        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(final EntityManagerFactory emf) {
        final JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(emf);
        return txManager;
    }
}

As you can see at the top of the class, we have some pretty standard Spring Boot annotations. You’ll also see an @Slf4j annotation, which is part of the Lombok library and is designed to help reduce boilerplate code. We also have @Bean annotated methods for providing access to the JPA EntityManager, the TransactionManager, and the DataSource. Each of these items provides an injectable object for the other classes to use. The remaining classes in the project are similarly simplistic. There is a Customer POJO which is the Entity type used in the service. There is a CustomerDAO which is created via Spring Data. Finally, there is a CustomerEndpoints class which is the JAX-RS annotated REST controller.

As explained earlier, this is all standard fare in a Spring Boot application. The problem with this application is that, for the most part, it has limited scalability. You would either run this application inside of a Servlet container, or with an embedded server like Jetty or Undertow. Either way, each request ties up a thread and thus wastes resources while it waits for I/O operations.

Switching over to the Convert-To-Vert.x-Web branch, we can see that the Application class has changed a little. We now have some new @Bean annotated methods for injecting the Vertx instance itself, as well as an instance of ObjectMapper (part of the Jackson JSON library). We have also replaced the CustomerEndpoints class with a new CustomerVerticle. Pretty much everything else is the same.

The CustomerVerticle class is annotated with @Component, which means that Spring will instantiate that class on startup. It also has its start method annotated with @PostConstruct so that the verticle is launched on startup. Looking at the actual content of the code, we see our first bit of Vert.x code: Router.

The Router class is part of the vertx-web library and allows us to use a fluent API to define HTTP URLs, methods, and header filters for our request handling. Adding the BodyHandler instance to the default route allows a POST/PUT body to be processed and converted to a JSON object which Vert.x can then process as part of the RoutingContext. The order of routes in Vert.x CAN be significant. If you define a route which has some sort of glob matching (* or regex), it can swallow requests for routes defined after it unless you implement chaining. Our example shows 3 routes initially.


    @PostConstruct
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.get("/v1/customer/:id")
                .produces("application/json")
                .blockingHandler(this::getCustomerById);
        router.put("/v1/customer")
                .consumes("application/json")
                .produces("application/json")
                .blockingHandler(this::addCustomer);
        router.get("/v1/customer")
                .produces("application/json")
                .blockingHandler(this::getAllCustomers);
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }

Notice that the HTTP method is defined, the request “Content-Type” header is matched (via consumes), and the response content type is defined (via produces, which is matched against the request’s “Accept” header). We also see that we are passing the handling of the request off via a call to the blockingHandler method. A blocking handler for a Vert.x route accepts a RoutingContext object as its only parameter. The RoutingContext holds the Vert.x Request object, Response object, and any parameters/POST body data (like “:id”). You’ll also see that I used method references rather than lambdas to insert the logic into the blockingHandler (I find it more readable). Each handler for the 3 request routes is defined in a separate method further down in the class. These methods basically just call the methods on the DAO, serialize or deserialize as needed, set some response headers, and end() the request by sending a response. Overall, pretty simple and straightforward.


    private void addCustomer(RoutingContext rc) {
        try {
            String body = rc.getBodyAsString();
            Customer customer = mapper.readValue(body, Customer.class);
            Customer saved = dao.save(customer);
            if (saved!=null) {
                rc.response().setStatusMessage("Accepted").setStatusCode(202).end(mapper.writeValueAsString(saved));
            } else {
                rc.response().setStatusMessage("Bad Request").setStatusCode(400).end("Bad Request");
            }
        } catch (IOException e) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", e);
        }
    }

    private void getCustomerById(RoutingContext rc) {
        log.info("Request for single customer");
        Long id = Long.parseLong(rc.request().getParam("id"));
        try {
            Customer customer = dao.findOne(id);
            if (customer==null) {
                rc.response().setStatusMessage("Not Found").setStatusCode(404).end("Not Found");
            } else {
                rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(dao.findOne(id)));
            }
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

    private void getAllCustomers(RoutingContext rc) {
        log.info("Request for all customers");
        List customers = StreamSupport.stream(dao.findAll().spliterator(), false).collect(Collectors.toList());
        try {
            rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(customers));
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

“But this is more code and messier than my Spring annotations and classes”, you might say. That CAN be true, but it really depends on how you implement the code. This is meant to be an introductory example, so I left the code very simple and easy to follow. I COULD use an annotation library for Vert.x to implement the endpoints in a manner similar to JAX-RS. In addition, we have gained a massive scalability improvement. Under the hood, Vert.x Web uses Netty for low-level asynchronous I/O operations, thus providing us the ability to handle MANY more concurrent requests (limited by the size of the database connection pool).

We’ve already made some improvements to the scalability and concurrency of this application by using the Vert.x Web library, but we can improve things a little more by implementing the Vert.x EventBus. By separating the database operations into worker verticles instead of using blockingHandler, we can handle request processing more efficiently. This is shown in the Convert-To-Worker-Verticles branch. The Application class has remained the same, but we have changed the CustomerVerticle class and added a new class called CustomerWorker. In addition, we added a new library called Spring Vert.x Extension which provides Spring dependency injection support to Vert.x verticles. Start off by looking at the new CustomerVerticle class.


    @PostConstruct
    public void start() throws Exception {
        log.info("Successfully create CustomerVerticle");
        DeploymentOptions deployOpts = new DeploymentOptions().setWorker(true).setMultiThreaded(true).setInstances(4);
        vertx.deployVerticle("java-spring:com.zanclus.verticles.CustomerWorker", deployOpts, res -> {
            if (res.succeeded()) {
                Router router = Router.router(vertx);
                router.route().handler(BodyHandler.create());
                final DeliveryOptions opts = new DeliveryOptions()
                        .setSendTimeout(2000);
                router.get("/v1/customer/:id")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getCustomer")
                                    .addHeader("id", rc.request().getParam("id"));
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                router.put("/v1/customer")
                        .consumes("application/json")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "addCustomer");
                            vertx.eventBus().send("com.zanclus.customer", rc.getBodyAsJson(), opts, reply -> handleReply(reply, rc));
                        });
                router.get("/v1/customer")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getAllCustomers");
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                vertx.createHttpServer().requestHandler(router::accept).listen(8080);
            } else {
                log.error("Failed to deploy worker verticles.", res.cause());
            }
        });
    }

The routes are the same, but the implementation code is not. Instead of using calls to blockingHandler, we have now implemented proper async handlers which send out events on the event bus. None of the database processing is happening in this Verticle anymore. We have moved the database processing to a Worker Verticle which has multiple instances to handle multiple requests in parallel in a thread-safe manner. We are also registering a callback for when those events are replied to so that we can send the appropriate response to the client making the request. Now, in the CustomerWorker Verticle we have implemented the database logic and error handling.

@Override
public void start() throws Exception {
    vertx.eventBus().consumer("com.zanclus.customer").handler(this::handleDatabaseRequest);
}

public void handleDatabaseRequest(Message<Object> msg) {
    String method = msg.headers().get("method");

    DeliveryOptions opts = new DeliveryOptions();
    try {
        String retVal;
        switch (method) {
            case "getAllCustomers":
                retVal = mapper.writeValueAsString(dao.findAll());
                msg.reply(retVal, opts);
                break;
            case "getCustomer":
                Long id = Long.parseLong(msg.headers().get("id"));
                retVal = mapper.writeValueAsString(dao.findOne(id));
                msg.reply(retVal);
                break;
            case "addCustomer":
                retVal = mapper.writeValueAsString(
                                    dao.save(
                                            mapper.readValue(
                                                    ((JsonObject)msg.body()).encode(), Customer.class)));
                msg.reply(retVal);
                break;
            default:
                log.error("Invalid method '" + method + "'");
                opts.addHeader("error", "Invalid method '" + method + "'");
                msg.fail(1, "Invalid method");
        }
    } catch (IOException | NullPointerException e) {
        log.error("Problem parsing JSON data.", e);
        msg.fail(2, e.getLocalizedMessage());
    }
}

The CustomerWorker worker verticles register a consumer for messages on the event bus. The string which represents the address on the event bus is arbitrary, but it is recommended to use a reverse-tld style naming structure so that it is simple to ensure that the addresses are unique (“com.zanclus.customer”). Whenever a new message is sent to that address, it will be delivered to one, and only one, of the worker verticles. The worker verticle then calls handleDatabaseRequest to do the database work, JSON serialization, and error handling.
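
The handleReply callback referenced in the routes above is not shown in the snippets; a rough illustrative sketch (the real implementation lives in the repository branch) could look like this:

    private void handleReply(AsyncResult<Message<String>> reply, RoutingContext rc) {
        if (reply.succeeded()) {
            // the worker already serialized the entity to JSON, so just pass it through
            rc.response()
              .putHeader("Content-Type", "application/json")
              .setStatusCode(200)
              .end(reply.result().body());
        } else {
            rc.response().setStatusCode(500).end("Server Error");
        }
    }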

There you have it. You’ve seen that Vert.x can be integrated into your legacy applications to improve concurrency and efficiency without having to rewrite the entire application. We could have done something similar with an existing Google Guice or JavaEE CDI application. All of the business logic could remain relatively untouched while we tie in Vert.x to add reactive capabilities. The next steps are up to you. Some ideas for where to go next include Clustering, WebSockets, and VertxRx for ReactiveX sugar.

Writing BDD tests with Cucumber JVM

Cucumber JVM is an excellent tool for writing your BDD tests. In this article I would like to give an introduction to BDD with Cucumber JVM.

Let’s get started…

What is BDD?


In a nutshell, BDD tries to solve the problem of “understanding requirements with examples”.

BDD tools
There are a lot of tools available for BDD and, interestingly, you can find quite a few vegetable names in the list 😉 Cucumber, Spinach, Lettuce, JBehave, Twist, etc. Out of these, Cucumber is simple and easy to use.

Cucumber-JVM
Cucumber is written in Ruby, and Cucumber JVM is an implementation of Cucumber for the popular JVM languages like Java, Scala, Groovy, Clojure, etc.

Cucumber Stack
[diagram: the Cucumber stack]
We write features and scenarios in a “Ubiquitous” Language and then implement them with the step definitions and support code.

Feature file and Gherkin
You first begin by writing a .feature file. A feature file conventionally starts with the Feature keyword, followed by one or more Scenarios. Each scenario consists of multiple steps. Cucumber uses Gherkin for this. Gherkin is a Business Readable, Domain Specific Language that lets you describe software’s behavior without detailing how that behavior is implemented.
Example:

Feature: Placing bets       
 Scenario: Place a bet with cash balance         
 Given I have an account with cash balance of 100        
 When I place a bet of 10 on "SB_PRE_MATCH"      
 Then the bet should be placed successfully      
 And the remaining balance in my account should be 90

As you can see, the feature file reads more like a spoken language, with Gherkin keywords like Feature, Scenario, Given, When, Then, And, But and # (for comments).

Step Definitions
Once you have finalized the feature file with the different scenarios, the next stage is to give life to the scenarios by writing your step definitions. Cucumber uses regular expressions to map the steps to the actual step definitions. Step definitions can be written in the JVM language of your choice. The keywords are ignored while mapping the step definitions.
So, in reference to the above example feature, we will have to write step definitions for all four steps. Use the IDE plugin to generate the stubs for you.

import cucumber.api.java.en.And;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class PlaceBetStepDefs {

    @Given("^I have an account with cash balance of (\\d+)$")
    public void accountWithBalance(int balance) throws Throwable {
        // Write code here that turns the phrase above into concrete actions
        //throw new PendingException();
    }

    @When("^I place a bet of (\\d+) on \"(.*?)\"$")
    public void placeBet(int stake, String product) throws Throwable {
        // Write code here that turns the phrase above into concrete actions
        //throw new PendingException();
    }

    @Then("^the bet should be placed successfully$")
    public void theBetShouldBePlacedSuccessfully() throws Throwable {
        // Write code here that turns the phrase above into concrete actions
        //throw new PendingException();
    }

    @And("^the remaining balance in my account should be (\\d+)$")
    public void assertRemainingBalance(int remaining) throws Throwable {
        // Write code here that turns the phrase above into concrete actions
        //throw new PendingException();
    }
}

Support Code
The next step is to back your step definitions with support code. You can, for example, make a REST call to execute the step, do a database call, or use a web driver like Selenium. It is entirely up to the implementation. Once you get the response, you can assert it against the results you are expecting, or map it to your domain objects.
For example, you can use the Selenium WebDriver to simulate a user opening a site in the browser:

protected WebDriver driver;

@Before("@startbrowser")
public void setup() {
    System.setProperty("webdriver.chrome.driver", "C:\\devel\\projects\\cucumberworkshop\\chromedriver.exe");
    driver = new ChromeDriver();
}

@Given("^I open google$")
public void I_open_google() throws Throwable {
    driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
    driver.get("https://www.google.co.uk");
}

Expressive Scenarios
Cucumber provides more options to organize your scenarios better.

  • Background – use this to define steps which are common to all scenarios
  • Data Tables – you can provide the input data in table format
  • Scenario Outline – a placeholder scenario which can be executed for a set of data listed in an Examples table (see the sketch below)
  • Tags and sub folders to organize your features – tags are more like sticky notes for documentation
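
For instance, a Scenario Outline lets the betting scenario above run against several sets of data (an illustrative sketch, not taken from a real project):

Scenario Outline: Place a bet with cash balance
  Given I have an account with cash balance of <balance>
  When I place a bet of <stake> on "SB_PRE_MATCH"
  Then the remaining balance in my account should be <remaining>

  Examples:
    | balance | stake | remaining |
    |     100 |    10 |        90 |
    |      50 |    20 |        30 |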

Dependency Injection
More often than not, you might have to pass information created in one step to another. For example, you create a domain object in your first step and then you need to use it in your second step. The clean way to achieve this is through Dependency Injection. Cucumber provides modules for the main DI containers like Spring, Guice, PicoContainer, etc.
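
As a rough sketch (assuming the cucumber-picocontainer module is on the classpath, and with TestContext and Bet as made-up example classes), a plain shared object can simply be constructor-injected into the step definition classes:

// a plain class holding state that needs to be shared between steps
public class TestContext {
    private Bet placedBet;                         // Bet is a hypothetical domain class
    public void setPlacedBet(Bet bet) { this.placedBet = bet; }
    public Bet getPlacedBet() { return placedBet; }
}

public class PlaceBetStepDefs {
    private final TestContext context;

    // the DI module creates one TestContext per scenario and injects the same
    // instance into every step definition class that asks for it
    public PlaceBetStepDefs(TestContext context) {
        this.context = context;
    }
}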

Executing Cucumber
It is very easy to run Cucumber from the IntelliJ IDE. It can also be integrated with your build system. You can also control which tests you want to run with different options.
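
For instance, with JUnit the usual entry point is a small runner class; a sketch using the standard cucumber-junit annotations (the feature path and tag are placeholders):

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",   // where the .feature files live
        tags = {"~@wip"}                             // skip scenarios tagged as work in progress
)
public class RunCucumberTest {
}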

Reporting Options
There are a lot of plugins available for reporting. For example, you could use the Master Thought plugin for the reports.

References
The Cucumber for Java book – this is an excellent book and it is all you need to get you started
Documentation
GitHub link
That’s all folks. Hope you liked it. Have a good Christmas! Enjoy.