JVM Advent

The JVM Programming Advent Calendar

You need more than containers. A short history of the mess we’re in.

Lean microservice infrastructures continue to replace classic 3-tier architectures in enterprise software, pushing enterprise developers who lived in the fully integrated world of application servers towards new methodologies and technologies in a cloud-native world.

As a matter of fact, distributed architectures differ fundamentally from the familiar monolithic applications, and the complexity of the execution environment confronts developers with new challenges that require different tools to master. Sometimes "cloud-native" is just too unspecific; hence the term I use more commonly is "Kubernetes-native" development, which extends beyond microservices to running workloads on Kubernetes regardless of the underlying system architecture.

Enterprise development has always been one of the most exciting fields in software development. You'll find some of the largest systems in the world in this category. For more than two decades, the technical implementation was closely interwoven with the Jakarta EE specification.

With the groundbreaking first step away from centralized host systems, developers began grappling with application servers and deployment processes. The evolution of and within the enterprise Java standard was increasingly slowing down, and many new questions remained technically unanswered. Compared to the almost limitless availability of resources in cloud-based infrastructures and drastically changing business processes, the heavyweight application servers no longer had much to offer. Heavily distributed (micro-)services are gradually replacing the classic three-tier architectures.

Ideally, application parts can now also be scaled independently and as required: no longer at the level of the application or application server, but through the underlying infrastructure with built-in orchestration. This is made possible not only by new infrastructure capabilities but also by a changed application design, commonly referred to as twelve-factor applications. The name describes the twelve factors that must be met in order to create modern, scalable, and maintainable applications for highly scalable execution in cloud environments. In the past few years, however, the term "cloud-native" has prevailed. Both terms describe application designs that are tailored to distributed and highly scalable applications and infrastructures.
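One of the twelve factors, "store config in the environment" (factor III), can be illustrated with a small sketch. The PORT variable and the 8080 default here are hypothetical examples, not part of any framework API:

```java
// Twelve-factor, factor III: configuration comes from the environment,
// not from files baked into the image. PORT and the 8080 default are
// made-up examples for illustration.
public class Config {

    // Parse the given environment value, falling back to a default.
    static int port(String envValue) {
        return envValue != null ? Integer.parseInt(envValue) : 8080;
    }

    public static void main(String[] args) {
        System.out.println("Listening on port " + port(System.getenv("PORT")));
    }
}
```

Because the value comes from the environment, the same immutable container image can run unchanged in development, test, and production.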

Welcome to the world of containers and Kubernetes

What can easily be described in just a few sentences, however, requires a lot more work in practice. The new infrastructure concepts have a particularly strong influence on application design and on the migration of existing applications. If you want to know more about this, I recommend downloading the recently released book "Modernizing Enterprise Java" by Natale Vinto and me.

Many languages and technologies can be used to create such a single service. The introduction of containers has removed the obligation to use a single given language across all services in a system, because containers ship not only the application code but also the necessary libraries and dependencies.

Containers can be thought of as small virtual machines. The most important difference is that they do not contain their own operating system kernel but run in user space on the host's kernel; they are more or less a specific form of operating-system-level virtualization. This means that several containers can be operated on one host. Images are the basic building blocks: containers are constructed from images. Images can be provided with libraries or application parts and used as a template for other images. Images are structured hierarchically in layers, and every change to an image is added as a new layer.

You can start a container as often as you like on a host. Containers are unchangeable, or better: immutable. This means that you can, for example, open a shell in a running container and make changes to the file system there, but these are forgotten when the container is restarted. Each call of a function in a container should return the same data for the same input. This is what makes automatic scaling and more possible. You'll read more on that later.
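The "same input, same output" property can be sketched as a stateless handler; the names here are made up for illustration:

```java
// A stateless handler: no mutable fields, so every container replica
// answers identically for the same input. That is what makes it safe
// to start, stop, and scale replicas at will.
public class Handler {

    // Deterministic: the output depends only on the input argument.
    static String handle(String input) {
        return "Hello, " + input + "!";
    }

    public static void main(String[] args) {
        System.out.println(handle("Advent"));
    }
}
```

Any state that must survive a restart (sessions, uploads, caches) belongs outside the container, for example in a database or an attached volume.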

Steps to your first container

A container and its layers are described in a simple manifest, historically known as a "Dockerfile". The non-proprietary version from the Open Container Initiative (OCI) is called the OCI format, a specification for container images based on the Docker Image Manifest Version 2, Schema 2 format. Images are held in so-called registries, which can be private or public. Developers can think of it like Maven Central versus .m2/ in the local filesystem. Individual image layers are also cached, which accelerates the creation of the final image. Of course, images also have names and versions. Version labels on images are called "tags"; if you stumble across "latest", it refers to the default tag, usually the most recently published image.

After building an application, the next logical step is to create a Dockerfile.

FROM adoptopenjdk/openjdk11:latest
COPY my-0.0.1-SNAPSHOT.jar /my.jar
CMD ["/usr/bin/java", "-jar", "/my.jar"]

This example creates a container image with three layers. Its sole purpose is to run the Java application. This requires three steps: the base image, which in this case is provided by the AdoptOpenJDK project, then copying the application into the image, and finally running the application.
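The Dockerfile above assumes the jar was already built on your host. If you prefer to compile inside a container as well, a multi-stage build can do that; the following is only a sketch, assuming a standard Maven project layout and commonly available base images:

```dockerfile
# Sketch of a hypothetical multi-stage build: compile the jar inside a
# Maven build stage, then copy only the result into the runtime layer.
FROM maven:3-openjdk-11 AS build
COPY . /src
RUN mvn -f /src/pom.xml package

FROM adoptopenjdk/openjdk11:latest
COPY --from=build /src/target/my-0.0.1-SNAPSHOT.jar /my.jar
CMD ["/usr/bin/java", "-jar", "/my.jar"]
```

The advantage is that the build toolchain never ends up in the final image, which keeps the runtime layers small.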

Building a container from a text file and a Java archive (jar) requires tools. We're all familiar with Docker and know how to build a container from these three lines. But there's more. Imagine that, as a Java developer, all you want to do is actually write your code and not care about containers at all. I have a solution for you: try Quarkus. If you haven't yet, install the Quarkus CLI; there's great documentation on how to install it locally. Now try:

$ quarkus create demo-app

You’ll find a complete getting started example of a Quarkus application in the folder “demo-app”. Navigate there and execute:

$ quarkus extension add container-image-docker

And Quarkus will add everything necessary to build your container images. You'll find Dockerfiles in src/main/docker, as well as the added extensions in the pom.xml. One last step to your first Quarkus application container:

$ quarkus build -Dquarkus.container-image.build=true

Given that you have Docker installed on your computer and everything is up and running, you can now list your newly created image with docker images. No fiddling around with Dockerfiles. No container commands. Sigh. That was easy. Now we have one container, and we'll leave it here. Given the possible length of this article, we'll cut everything else almost criminally short. Remember, there are plenty of ways to build images, and using Quarkus gives you the power to refocus on application development and forget about infrastructure details.

But a distributed architecture with services in the context of a container landscape requires more than just one container per service. The containers must also be started and operated in a coordinated manner. That’s where Kubernetes comes in.

A container ship

Cloud-native applications are built from many containers. If you imagine having to start each of these individually via the command line, you get a sense of the administrative effort. Moving all the containers together, as on a ship, requires orchestration, and that is what Kubernetes provides. Actually, the ship is not even a particularly good comparison: orchestration corresponds much more to the whole sea with many ships, including communication and navigation.

What is exciting for developers at this point is that, at first glance, it does not matter much what exactly runs how and where. Of course, there are boundary conditions: certain containers are expected to exist, for example a database instance or a Kafka broker. But where exactly something is located on the Kubernetes cluster doesn't really matter, unless you want to fully understand the concept. Back to the nautical picture: a container ship is called a pod in Kubernetes. One or more containers can be operated in one pod, and they then also share a network. Running a container in a pod requires a Deployment. And guess what, Quarkus can do that for you, too. Let's add the Kubernetes extension:

$ quarkus extension add kubernetes

By adding the extension, Quarkus enables the generation of Kubernetes manifests each time it performs a build while also enabling the build of a container image using Docker. The generated manifest can be used to deploy the service to Kubernetes or configure it further. All details can be found in the Quarkus documentation.
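For illustration, the generated Deployment for our demo-app might look roughly like the following sketch; names and labels are abbreviated here, and the real manifest (typically written to target/kubernetes/kubernetes.yml) contains considerably more metadata:

```yaml
# Abbreviated sketch of a generated Kubernetes Deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: demo-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:1.0.0-SNAPSHOT
          ports:
            - containerPort: 8080
```

The point is that you never have to write this YAML by hand: the extension derives it from your project and configuration.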

Compared to classic software development in the enterprise environment, this all feels like a new world. With a lot of abstraction, one might compare containers to Java EE .ear files, since both are packaging and distribution formats. And with even more goodwill, a comparison of Kubernetes with the well-known Jakarta EE application servers works: it actually operates the application parts on the distributed physical hardware.

However, this comparison only holds at first glance, since both containers and Kubernetes care comparatively little about the requirements of the application. They do not offer any support for transactions or other programming interfaces, only the configuration and abstraction of the hardware. A classic Jakarta EE application server cluster can of course be operated on Kubernetes without major problems, but then the added value of a lightweight and quickly scalable platform is largely lost.

But we can make things easier than they are today. And while we have only scratched the surface of how you can create containers and use them with Quarkus, there is a lot more developer productivity and joy buried in it. Don't forget to look at the Quarkus developer UI: accessing /q/dev on your local machine takes you directly to it. It lets you quickly visualize all the extensions currently loaded, see their status, and go directly to their documentation. Quarkus also supports the automatic provisioning of unconfigured services in development and test mode, called "Dev Services". From a developer's perspective, this means that if you include an extension and don't configure it, Quarkus will automatically start the relevant service (usually using Testcontainers behind the scenes) and wire up your application to use it. This covers pretty much everything from databases to message brokers. You've already opened door five with a great article about continuous testing with Quarkus.
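As a small illustration of Dev Services: with a database extension such as quarkus-jdbc-postgresql on the classpath, it is the absence of connection configuration that triggers the automatic container. A hypothetical application.properties might contain nothing more than:

```properties
# With quarkus-jdbc-postgresql on the classpath, declaring only the kind
# of database is enough in dev and test mode.
quarkus.datasource.db-kind=postgresql
# No quarkus.datasource.jdbc.url here on purpose: its absence is what
# makes dev mode start a throwaway PostgreSQL container for you.
```

In production you would then supply the real JDBC URL and credentials via configuration, and Dev Services stays out of the way.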

But there's more that we need to get back under control in this brave new world of containers, and a broad range of topics needs to be looked at. Developer usability and self-service can be seen as the basics; they barely depend on the containers themselves but more on the target platform. If you follow my Twitter posts regularly, you can imagine what I would suggest looking at (hint: OpenShift, go try it out yourself for free!). But the platform itself also needs services and support for developers. One of the latest challenges to be solved is access to Java Flight Recorder data from containers. While this concept was integrated into the OpenJDK a couple of years ago, the frontend was a rich client intended to support the development of single-JVM applications. The newly created Cryostat project aims to change that. Cryostat is a containerized JVM application that acts as a "sidecar" alongside other OpenJDK applications and connects JFR data in the cloud with end users at their workstations. In fact, it acts as a hub for retrieving and storing flight recordings from containerized JVMs, so users can access these recordings over HTTP/HTTPS. It can also funnel JFR events into metrics dashboards like Grafana for automated analysis.
Another important topic is service binding: connecting applications to the services that support them. Configuring and maintaining this binding of applications to their services is a mostly manual and sometimes inefficient process. The Service Binding Operator remedies this by managing the binding process. It is another puzzle piece in bringing back the powerful tools that developers used to have at hand to create stable, performant, and reliable applications. The future of the new standard platform is bright, and I am personally convinced that we will see the platform and its ecosystem deliver more and more productivity features for us developers going forward.


Author: Markus Eisele

Markus Eisele leads the developer adoption team for EMEA at Red Hat. He has been working with Java EE servers from different vendors for more than 14 years, and gives presentations on his favorite topics at leading international Java conferences. He is a Java Champion, former Java EE Expert Group member, and founder of JavaLand. He is excited to educate developers about how microservices architectures can integrate and complement existing platforms.

He is also the author of “Modern Java EE Design Patterns” and “Developing Reactive Microservices” by O’Reilly. You can follow more frequent updates on Twitter @myfear.


© 2022 JVM Advent | Powered by JetBrains & steinhauer.software
