JVM Advent

The JVM Programming Advent Calendar

Composable Microservices with Http4k and Quarkus.io

Monolith or Microservices? With Http4k and Quarkus.io we can have both!


A wise man once said that if you don’t know how to build a modular monolith, putting the network in the middle would hardly help. In other words, designing microservices is not easy.

On the other hand, once well designed, microservices are very useful for achieving "business agility", allowing the software to change quickly as market conditions change and keeping the cost-of-change curve flat.

Looking at the blogosphere, we are now in the "trough of disillusionment" of the hype cycle for microservices architecture. Curiously enough, we are also near the "peak of inflated expectations" for serverless architecture.

My personal experience is that there are many advantages to splitting our application into small, atomic, independently (and continuously) deployed pieces. But there are also some costs.

In this post I want to propose a strategy to reduce some of these costs, in particular:

  • the cloud bill for underused services — it is good practice to allocate at least 3 instances for each microservice to ensure 24/7 availability and blue-green deployment. If you have a system with 30 different microservices, that means a minimum of 90 cloud nodes, which may cost considerably more than deploying a few instances of a monolith with the same capabilities.
  • the increased time needed for developing and testing — having a small deployable unit simplifies development and the debugging of issues related to the specific logic, but at the same time it makes it harder to work on issues related to communication between modules.
  • the performance hit caused by network hops — in-process communications are always faster than inter-process and inter-machine communications. Having to call an external service to calculate a price, for example, adds some milliseconds to the total response time. This may or may not be a problem, depending on the application's expected latency.

Before looking at the solution, I’d like to explain the way we design our microservices, keeping in mind that the goal is to optimize for the overall business agility.

Monorepo and Hexagonal Architecture

Let’s look at a diagram representing a typical system based on microservices.

In this diagram, each white circle represents a single microservice, the lines are the calls between them, and the large groupings represent the different sub-domains (more precisely, the different bounded contexts) of our system.

Note also that some services are stateful and require their own database (the small cylinders), and others are stateless and don’t require any form of persistence.

How can we organize the code? A first requirement is that we want to keep the business-domain code separated from the technical protocols. To achieve this, we implement the "hexagonal architecture" — also called "ports and adapters" — keeping all the domain logic inside a "hub" and using "spoke" functions as adapters.
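As a minimal sketch of the idea (the names here are illustrative, not taken from the article's repository): the hub contains only pure domain logic, while a spoke adapter translates an external representation into a domain call.

```kotlin
// Hypothetical hub: pure domain logic, with no HTTP, JSON, or database in sight
class GreetingHub {
    fun greet(name: String, times: Int): String =
        List(times) { "Hello $name" }.joinToString("; ")
}

// Hypothetical spoke adapter: parses a raw "name:times" string (standing in
// for a transport-level concern) and delegates to the hub
fun greetingAdapter(hub: GreetingHub, rawInput: String): String {
    val (name, times) = rawInput.split(":")
    return hub.greet(name, times.toInt())
}
```

Because the hub knows nothing about the transport, the same logic can sit behind an Http endpoint, a message queue, or a test harness unchanged.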

Now if service A needs some data from B, it means that A and B share the same sub-domain. They will also share an adapter that allows the two to communicate with each other. The alternative to sharing is to have symmetrical code in both services, but this would just hide the dependency, not eliminate it.

For this reason, we prefer to keep all dependencies explicit and use a single git repository for the whole project. Each sub-domain, each adapter, and each individual service stays in a separate module.

The final ingredient in our (not so) secret microservices sauce is the functionally composable approach to the inter-service API. We are using http4k, which allows us to express both the Http server and the Http client as simple functions of type (Request) -> Response.

The benefits of this approach are manifold. Here I will show how we can use it to compose our services in a flexible way.
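To illustrate why this matters before diving into the project, here is a pure-Kotlin sketch using simplified stand-ins for http4k's types (the real Request, Response, and HttpHandler are much richer): because server and client share the same function type, a client can be handed either a network-backed function or the server handler itself.

```kotlin
// Simplified stand-ins for http4k's types, just to show the shape of the idea
data class Request(val uri: String, val body: String = "")
data class Response(val status: Int, val body: String = "")
typealias HttpHandler = (Request) -> Response

// a "server" is just a function from Request to Response...
val countWords: HttpHandler = { req ->
    Response(200, req.body.split(" ").count { it.isNotBlank() }.toString())
}

// ...and a caller only needs *some* HttpHandler: in production a real
// HTTP client, in tests or in-process deployments the handler itself
fun wordCountOf(client: HttpHandler, text: String): Int =
    client(Request("/count", text)).body.toInt()
```

Swapping an in-memory handler for a network client (or vice versa) is then just a matter of passing a different function.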

The Example Project

Let’s see an example to put this idea into practice. First the mandatory git repository: https://github.com/uberto/h4kcluster-quarkus

In case I change the code in the future, you can always refer to this article's version of it using the tag java_advent_2020.

I will now explain the project structure and code. If you are not interested in the details and want to read about how to compose services, you can skip directly to the section called "All Together Now!".

For this example, we have a Kotlin project split into six modules. We can see the modules in the settings.gradle file:

include 'domain'
include 'adapter'
include 'countwords-app'
include 'sumnumbers-app'
include 'ui-app'
include 'launcher'

Here I call an application a logical unit of computation from external inputs; in other words, the equivalent of the main function in Java (and Kotlin).

By microservice, I mean the unit of deployment, which may or may not be the same thing, as we will see.

For this example, we have three different applications: countwords, sumnumbers, and ui.

We also have a domain module with the business logic and an adapter module where we put the necessary methods for using Http, Json, database, and so on.

In a real project, we would have several domain modules (one for each bounded context) and several adapters (one for each API among microservices).

We will look at the launcher module later.

The goal of the system is to count the words in a text. The UiApp will ask the user to insert some text, and then send each line separately to the CountWordsApp, which will return the number of words, and finally use the SumNumbersApp to get the total number of words.

The three applications are contained in three separate microservices (the green borders), each with a Hub (the purple circle), communicating through adapters (the red dots).

The Domain Module

We put all the domain logic inside our “hubs”. The hub classes are in the domain module. In a more complex project, we will have one module for each bounded context, which in turn will be used by one or more applications, but here we have three very simple hubs for our three applications:

CountWordHub is very simple:

class CountWordHub {
  fun countWords(text: String): Int =
        text.split(' ').filter { it.isNotBlank() }.size
}

SumNumbersHub is even simpler:

class SumNumbersHub {
  fun sum(a: String, b: String): Int = a.toInt() + b.toInt()
}

UiHub receives the adapters for communicating with the other hubs and it will use them to count the words of the text:

class UiHub(
      val wordCounter: (String) -> Int,
      val sumNumbers: (Int, Int) -> Int
) {

  fun countWords(text: String): Int {
    val lines = text.split("\n")

    return lines.map(wordCounter).reduce(sumNumbers)
  }
}

The Adapter Module

In our tiny system, the UiApp only needs to handle two calls: one for getting the HTML page and one for calculating the number of words.

The actual logic is inside the Hub, which is passed as a constructor parameter to the Http handler. The handler itself is nothing but a function that translates a Request into a Response:

class UiHandler(hub: UiHub) : HttpHandler {

  val routes = routes(
        "/" bind GET to { _ ->
          // the exact Html is simplified here: a form posting to /submit
          Response(OK).body(
                """<html><body><h1>Count Words</h1>
                   <form method="post" action="/submit">
                     <textarea name="words"></textarea>
                     <input type="submit" value="Count!"/>
                   </form></body></html>""")
        },
        "/submit" bind POST to { req ->
          val text = req.form("words").orEmpty()
          val wordNo = hub.countWords(text)
          Response(OK).body("<html><body><h1>Count Words</h1><p>Number of words: $wordNo</p></body></html>")
        }
  )

  override fun invoke(req: Request): Response = routes(req)
}

When we start the UiApp (using the main function from our IDE), we should be able to see the landing page in a browser:

The other two applications have similar Http handlers.


class CountWordHandler(val hub: CountWordHub) : HttpHandler {

  val routes = routes(
        "/count" bind GET to { req: Request ->
          val resp = hub.countWords(req.bodyString()).toString()
          Response(OK).body(resp)
        },
        "/" bind GET to { _: Request ->
          Response(OK).body("<html><body><h1>This is CountWords</h1></body></html>")
        }
  )

  override fun invoke(request: Request): Response = routes(request)
}



class SumNumberHandler(val hub: SumNumbersHub) : HttpHandler {

  val routes = routes(
        "/sum/{a}/{b}" bind GET to { req: Request ->
          val a = req.path("a")
          val b = req.path("b")
          if (a != null && b != null) {
            val tot = hub.sum(a, b)
            Response(OK).body(tot.toString())
          } else
            Response(BAD_REQUEST).body("wrong request: ${req.uri}")
        },
        "/" bind GET to { _: Request ->
          Response(OK).body("<html><body><h1>This is SumNumbers</h1></body></html>")
        }
  )

  override fun invoke(request: Request): Response = routes(request)
}


On top of the handlers, the adapter module also contains the clients: the classes with the logic to communicate between applications.

Let’s see WordCounterClient:

class WordCounterClient(val handler: HttpHandler) : (String) -> Int {
  override fun invoke(text: String): Int {

    val req = Request(Method.GET, WordCounterRoutes.count).body(text)

    val resp = handler(req)

    return if (resp.status == Status.OK)
      resp.bodyString().toInt()
    else
      -1 //in case of errors
  }
}

A nice feature of Kotlin is that a class can inherit from a function type (see my post about it). The advantage of using a class instead of a simple function is that we can have private fields in the class, which work as effective functional dependency injection in a very natural way.

WordCounterClient's invokable type is (String) -> Int, so we can pass it directly to the UiHub, which expects a function with that signature.
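Here is a small pure-Kotlin illustration of the pattern (hypothetical names, unrelated to the project's code): the class carries its dependency in the constructor, yet an instance can be passed wherever a (String) -> Int is expected.

```kotlin
// A class implementing a function type: the constructor field acts as
// injected configuration, invisible to callers that just see (String) -> Int
class SuffixLength(private val suffix: String) : (String) -> Int {
    override fun invoke(text: String): Int =
        if (text.endsWith(suffix)) text.length else 0
}

// any (String) -> Int is accepted, including instances of our class
fun totalLength(measure: (String) -> Int, words: List<String>): Int =
    words.sumOf(measure)
```

The caller never sees the private field; it just receives a plain function, which is exactly how the hubs receive their client adapters.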

SumNumbersClient is very similar:

class SumNumbersClient(val handler: HttpHandler) : (Int, Int) -> Int {
  override fun invoke(a: Int, b: Int): Int {

    val req = Request(Method.GET, SumNumbersRoutes.sum(a, b))

    val resp = handler(req)

    return if (resp.status == Status.OK)
      resp.bodyString().toInt()
    else
      -1 //in case of errors
  }
}

Starting Apps Independently

The applications need to know about each other. Rather than hardcoding all the URLs, we define a general interface for the applications:

interface ApplicationId {
  val hostname: String
}

data class Application(
      val id: ApplicationId,
      val description: String,
      val handler: HttpHandler
)

And each application defines its own id:

object UiId: ApplicationId { override val hostname = "ui" }

There is also an interface to retrieve an Http client able to communicate with a specific application:

interface ServiceDiscovery {
    fun provideHttpClient(id: ApplicationId): HttpHandler
}

Now we can create each application using a creator object that builds the Http client, which is used by the client adapters, which are used by the hub!

It’s a bit intimidating at first, but this pattern makes it easy to build complex applications without risking broken communications.

object UiCreator : (ServiceDiscovery) -> Application {

  override fun invoke(sd: ServiceDiscovery): Application {

    val wcClient = WordCounterClient(sd.provideHttpClient(WordCounterId))
    val snClient = SumNumbersClient(sd.provideHttpClient(SumNumbersId))
    val hub = UiHub(wcClient, snClient)

    return Application(UiId, "front-end", UiHandler(hub))
  }
}

Finally, we need to define the rules to get the application URLs. The code in the example works when running on localhost, but it is straightforward to adapt it to a cloud environment. Maybe a future post will show how to use it in k8s, for example.

For localhost it’s all pretty simple:

// read port config from cloud configuration
object DeployableServiceDiscover : ServiceDiscovery {
  private val ports: Map<ApplicationId, Int> = mapOf(
              SumNumbersId to 8081,
              WordCounterId to 8082,
              UiId to 8083
  )

  override fun provideHttpClient(id: ApplicationId): HttpHandler {
    val uri = calculateUri(id)
    return ClientFilters.SetBaseUriFrom(uri)
          .then(JavaHttpClient())
          .also { println("Connected ${id.hostname} on $uri") }
  }

  // replace localhost with your cloud domain...
  private fun calculateUri(id: ApplicationId): Uri =
        ports[id]?.let {
          Uri.of("http://localhost:$it")
        } ?: error("Application not registered: $id")

  fun startServer(creator: (ServiceDiscovery) -> Application) {
    val app = creator(this)
    val port = ports[app.id] ?:
                    error("Application not registered: ${app.id}")
    app.handler.asServer(Jetty(port)).start()
          .also { println("Started ${app.description} on $port") }
  }
}


Each application's main function consists only of the call to start the server:

fun main() {
  DeployableServiceDiscover.startServer(UiCreator) // similarly for the other applications
}
Once it has started, we can test the word counter directly from the command line with a simple text:

curl -i -X GET -d 'The quick brown fox jumps over the lazy dog' \
     http://localhost:8082/count

And the result is 9, the correct number of words!

HTTP/1.1 200 OK

9


All Together Now!

So far so good. But what about composing the services?

Since we are wiring our applications together through the ServiceDiscovery interface, we are not limited to deploying each application independently.

The three applications can now be contained in a single deployable unit (the green border), with the Hubs (the purple circles) communicating directly in memory.

We will now look at how to combine our three applications into a single deployable. This is particularly useful for testing and debugging the whole system locally.

Generally speaking, the strategy used here can be extended and adapted to other configurations, for example reducing a system from 30 microservices to 5 deployable units, maintaining the high modularity approach, and the flexibility to change how we aggregate the applications with just a few lines of code.

But now it’s time to start coding. We create a new ServiceDiscovery that can keep all applications in-process:

class InProcessServiceDiscovery : ServiceDiscovery {
    private val applications: AtomicReference<Map<ApplicationId, Application>> =
            AtomicReference(emptyMap())

    override fun provideHttpClient(id: ApplicationId): HttpHandler =
        applications.get()[id]?.handler
            ?: { Response(Status.BAD_GATEWAY).body("application unknown $id") }

    fun findByHostname(hostname: String): Application? =
        applications.get().values
            .firstOrNull { it.id.hostname == hostname }

    fun register(creator: (ServiceDiscovery) -> Application) {
        // toMapEntry() is an extension pairing the application with its id
        applications.getAndUpdate { it + creator(this).toMapEntry() }
    }
}

This new service discovery, instead of reading the application URLs from some external configuration, keeps their details in an internal map called applications.

The Launcher Module

We now need a place to put a new main method that will listen on a port, serving all the applications. The launcher module in the git repo exists for this reason.

But now we have a problem: if we want all the applications to communicate using in-memory sockets instead of real ones, which ports should we use?

My colleague and good friend Asad Manji came up with a very good solution: let's use the hostname instead!

So our new HttpHandler for local applications will forward the request to the right application using the first part of the domain name.

So calls to ui.localhost will go to the UiApp and so on. This is the full code:

class LocalAppsHandler : (Request) -> Response {

  val serviceDiscovery = InProcessServiceDiscovery()

  init {
    listOf(UiCreator, CountWordsCreator, SumNumbersCreator
    ).forEach { serviceDiscovery.register(it) }
  }

  override fun invoke(request: Request): Response {

    val hostname = request.header("HOST")?.substringBefore('.').orEmpty()

    val app = serviceDiscovery.findByHostname(hostname)

    println("incoming request for ${app?.description ?: "launcher"}")

    return app?.run { handler(request) }
          ?: Response(Status.OK).body("<html><body><h1>Hello this is the launcher!</h1> <p>$hostname</p><p>$request</p></body></html>")
  }
}

This is just a proof of concept for testing ideas; you can play with it and adapt it to run any aggregation of applications for your use case.

Is there something else that we can do? Of course there is! We can integrate with Quarkus.io to take advantage of its amazing development mode and native builds.

Integrating with Quarkus.io

The easiest way to integrate Quarkus with Http4k is to decorate an HttpHandlerServlet (from Http4k) with the WebServlet annotation (from javax.servlet):

@WebServlet(name = "RestServlet", urlPatterns = ["/*"])
open class RestServlet : Servlet by HttpHandlerServlet(LocalAppsHandler())

And that’s all!
(To be completely honest, it took me a while to get the right configuration for the Gradle files, but you can just copy it from the example.)

We can now start our launcher in Quarkus dev mode from the command line with ./gradlew launcher:quarkusDev, and this is the output:

./gradlew launcher:quarkusDev
Starting a Gradle Daemon, 1 incompatible Daemon could not be reused, use --status for details

> Task :launcher:quarkusGenerateCode
preparing quarkus application

> Task :launcher:quarkusDev
Listening for transport dt_socket at address: 5005
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2020-12-10 08:16:43,264 INFO  [io.quarkus] (Quarkus Main Thread) launcher 1.0 on JVM (powered by Quarkus 1.10.2.Final) started in 0.852s. Listening on: http://localhost:8080
2020-12-10 08:16:43,276 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2020-12-10 08:16:43,276 INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kotlin, servlet]
incoming request for front-end

Now we can point our browser at ui.localhost:8080 and we have a fully working UiApp, calling all the other applications in memory:

And it is fully working:

Not only that, but we can change anything in our code and just by refreshing the browser we will see the results almost in real-time.

For example, changing a line of code in the UiHandler forces a stop, a recompilation, and a hot replace, all in less than 400 milliseconds. The same is true for changes to the other applications.

Working with the full system alive in this way is like working with a Python or JavaScript web server, but with all the advantages of the JVM and Kotlin's type safety:

incoming request for front-end
2020-12-14 15:13:19,383 INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (vert.x-worker-thread-7) Changed source files detected, recompiling [/home/ubertobarbini/svi/kotlin/h4kcluster-quarkus/ui-app/src/main/kotlin/com/ubertob/h4kcluster/ui/UiHandler.kt]
2020-12-14 15:13:19,585 INFO [io.quarkus] (Quarkus Main Thread) launcher stopped in 0.001s
__ ____ __ _____ ___ __ ____ ______ 
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ 
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 
2020-12-14 15:13:19,695 INFO [io.quarkus] (Quarkus Main Thread) launcher 1.0 on JVM (powered by Quarkus 1.10.2.Final) started in 0.109s. Listening on: http://localhost:8080
2020-12-14 15:13:19,696 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2020-12-14 15:13:19,696 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kotlin, servlet]
2020-12-14 15:13:19,696 INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (vert.x-worker-thread-7) Hot replace total time: 0.326s 
incoming request for front-end
<============-> 97% EXECUTING [3m 56s]
> :launcher:quarkusDev

Is this all? Well, actually there is another present waiting for us…

Once we have configured Quarkus, we get native builds for free!


One of the biggest problems in developing microservices on the JVM is the memory footprint. For a lot of good reasons, when running an application on the JVM, a lot of memory is used by things you don't actually need.

This is not a problem if you are running a big monolith or on your own physical hardware: a few hundred megabytes of memory will not impact a modern machine much.

On the contrary, if you are running your very small services in the cloud, the memory you use will heavily impact your bills.

Switching from 200MB to 20MB can dramatically reduce the cost of your live system. With GraalVM we can now compile our Java (or Kotlin) code ahead of time. The resulting code will not be as performant as the super-optimized code produced by the JVM's Just-In-Time compiler, but it will have a much smaller memory footprint and an incredibly quick startup time.

For this example we went from a 260MB uber-jar to a 27MB executable, created with a single command:

./gradlew build -Dquarkus.package.type=native -Dquarkus.native.container-build=true

I personally am very excited by the combined possibilities offered by Http4k and Quarkus.io.


I wanted to write something shorter but the enthusiasm got the better of me. We covered a lot in this post:

  • I presented some architectural patterns we used to create microservice applications.
  • Then we looked at an example project to count words (surprise surprise…)
  • And then we saw how to combine applications together using Http4k's unreasonable composability.
  • Finally, we examined how Quarkus.io could help us with developing and building microservices.

I hope this has been useful. If you liked it, please follow me (@ramtop) on Twitter and Medium, so that I will continue to produce similar content.

I want to thank Asad Manji, Ivan Sanchez, David Denton, and Nat Pryce, who helped me with code and discussions for the content of this post.

Author: Uberto Barbini

Uberto is a very passionate and opinionated programmer who helps big and small organizations deliver value.
Trainer, public speaker, and blogger. Currently working on a book about Kotlin and functional programming. Google Developer Expert.
