Production-Grade Kubernetes for Java Developers

What does it mean to be a good Java Cloud Citizen? It’s definitely more than just putting an application in a container and deploying it. You need to consider factors such as providing real-time health status through fine-grained metrics to optimize your Java application’s performance and resilience in the cloud. You’ll also need to ensure fast startup and avoid excessive resource consumption within the cluster.

Being a good Cloud Citizen also involves streamlining configuration, deployment and upgrade processes. When these tasks are integrated seamlessly, deployments and upgrades go smoothly, which leads to more efficiency and easier management. This article gives a concise and opinionated overview of the Kubernetes basics from a Java developer’s perspective and shows step by step how to get your application production-ready on Kubernetes.

Startup time and small footprint

Kubernetes is a highly sophisticated orchestration engine for containerized applications. Usually organizations install Kubernetes clusters across several nodes (servers). Kubernetes automatically and dynamically spreads out workloads across these different nodes for optimal usage. Typically you will want to create multiple instances/pods of your application for high availability. When one pod gets killed, stopped, moved to a different node or what have you, your application is still up and running from your user’s perspective. This is called “horizontal scaling”. 

Because pods can stop and start relatively frequently, it is important that the applications inside those pods can start up quickly as well. The longer an application takes to start, the less flexible Kubernetes will be at scheduling its workloads and the less advantage you get from using a cloud solution. Similarly, it is important that your applications have as small a footprint as possible, so Kubernetes can schedule them across nodes more flexibly. For example, if your application’s resources take up 60% of your node’s available capacity, Kubernetes will not be able to schedule another instance on that same node, even though 40% of its capacity goes unused. If, however, each instance uses only 40%, Kubernetes can schedule two instances on that node and only 20% of its capacity goes to waste. Imagine if your workloads were even smaller: Kubernetes would have much more flexibility in scheduling pods, and node utilization and efficiency would go up.

Java

What does this mean for your Java applications? Java was (unsurprisingly) not originally built for cloud deployments. The typical deployment target for “traditional” Java applications is rather large: dedicated servers with the goal of keeping applications running for as long as possible. If an application needs to scale, this is usually done by adding more hardware resources to these servers (vertical scaling) instead of creating more instances (horizontal scaling). Because of this, startup time and footprint were not a top priority for Java developers. With Kubernetes, however, Java has had to reinvent itself.

There are several initiatives in the Java world to reduce the startup time and footprint of Java applications. There are OpenJDK projects like Leyden and CRaC, and projects like GraalVM Native Image that compile Java applications down to very fast and compact native binaries. There are newer frameworks/stacks like Quarkus, Micronaut and Helidon that have been conceived with cloud native and Kubernetes deployment targets in mind. Even the more traditional Spring (Boot) and Jakarta EE stacks have been making a lot of improvements to make applications more Kubernetes-friendly.

Quarkus

In this article we’ll focus on Quarkus, because it makes working with “Kube-Native” Java quite a bit easier and more performant. Feel free to explore the other stacks and compare to see what makes the most sense for your project.

Quarkus moves as much “heavy lifting” as possible, such as classpath scanning and resolving annotations, to the application’s build time instead of its startup. This reduces both the startup time and the amount of memory needed. And this is just the tip of the iceberg: you can read more about how Quarkus optimizes for container workloads in the Quarkus documentation.
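
To see this in practice, you can build the application the usual way, or optionally compile it to a native binary. The commands below are a minimal sketch, assuming a Maven project generated by Quarkus (Gradle equivalents exist as well):

# Standard JVM build; Quarkus does its build-time processing during packaging
./mvnw package

# Optional: build a native binary with GraalVM, using a container build so
# you don't need a local GraalVM installation
./mvnw package -Dnative -Dquarkus.native.container-build=true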

Optimizing your Java application for Kubernetes is the first, but also one of the most important, steps in creating a production-grade, Kubernetes-native Java application. Let’s now take a look at some tips and tricks for creating production-grade cloud native Java workloads.

Containerize your application

To deploy an application to Kubernetes, you will need to package it up as a container image first. A few years ago that meant creating or finding a Dockerfile, adding commands to copy your artifacts and dependencies, and building the container image with a docker build command. While this is still a valid way of building containers, there are now many more ways and tools to create container images, such as Podman, Jib, BuildPacks, Kaniko, Buildah, etc.

While each of these tools has its advantages and challenges, you will likely start from a base image which you customize to your needs. It is important to be very conscious of where this base image comes from. There is a surprising number of container images out in the wild that, while probably not purposefully malicious, often contain vulnerabilities that can be exploited relatively easily. To create a production-grade container image, it is thus very important to start from a verified/certified base image, ideally one that comes from a source you trust and that you can expect to keep maintaining the image going forward. Red Hat’s Universal Base Images (UBI) are an example of base images you can use and redistribute license-free.
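
As an illustration, here is a minimal Dockerfile sketch for a Quarkus application, modeled on the Dockerfile.jvm that Quarkus generates. It assumes the UBI OpenJDK runtime image and the fast-jar layout produced by ./mvnw package; for real use, pin a concrete, maintained image tag rather than latest:

FROM registry.access.redhat.com/ubi9/openjdk-21-runtime:latest

# Copy the fast-jar layout produced by './mvnw package'
COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=185 target/quarkus-app/*.jar /deployments/
COPY --chown=185 target/quarkus-app/app/ /deployments/app/
COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/

EXPOSE 8080
# Run as the non-root user predefined in the UBI OpenJDK images
USER 185
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"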

Deploying to Kubernetes

Once you have a container image, the next step is for you (or someone else in your organization) to deploy it to Kubernetes. To build and release a production-grade application, it makes sense to test it in an environment as similar to production as possible; this helps you catch discrepancies between your local environment and production as early as possible. Learning all the ins and outs of Kubernetes and its ecosystem is, however, not a trivial task. Fortunately there are solutions that let Java developers deploy applications to a local or remote Kubernetes instance without that steep learning curve.

Quarkus, for instance, makes things painless through its ‘quarkus-kubernetes’ extension. Adding this dependency to your project will generate Kubernetes manifests for you automatically (in an aptly named target/kubernetes/ folder). You can then deploy these manifests by either applying the generated (YAML or JSON) file or calling a quarkus deploy command. Quarkus supports remote debugging on Kubernetes out of the box as well, and even its Dev Mode can work with a Kubernetes deployment.
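
In practice that boils down to a few commands. This is a minimal sketch for a Maven-based project; exact goals may vary slightly between Quarkus versions:

# Add the Kubernetes extension to an existing project
./mvnw quarkus:add-extension -Dextensions="kubernetes"

# Package the application; manifests end up in target/kubernetes/
./mvnw package

# Apply the generated manifest yourself...
kubectl apply -f target/kubernetes/kubernetes.yml

# ...or let Quarkus deploy to the cluster your kubeconfig points at
./mvnw quarkus:deploy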

Alternatively, projects like JKube are worth checking out as an easy and straightforward way of working with Kubernetes.

Is my application actually ready to receive requests?

A production-grade Kubernetes application needs more than just a deployment though. If you deploy a container (in a pod), a Kubernetes Service will by default start sending traffic to it as soon as the container starts. However, the application inside the container might still be starting up. Even if you’ve optimized your application to start super fast, there will still be a gap of (milli)seconds where it is not available; it might, for example, still be establishing connections to a database or a messaging system, so requests coming in during this startup time are likely to fail. Fortunately Kubernetes has a concept of “health probes” that point to endpoints in your application where you can advertise whether it is actually able to receive requests. There are three different health probes:

  • Startup Probe
  • Readiness Probe
  • Liveness Probe

Quarkus leverages the MicroProfile Health spec through the SmallRye implementation. Adding the “smallrye-health” extension will, in combination with the ‘kubernetes’ extension, add the three probes to your application’s Kubernetes manifests automatically. You can create custom health checks using simple MicroProfile-based annotations, and you can modify the parameters of the probes by adding configuration values to the application.properties file.
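
For example, a custom readiness check is just an annotated CDI bean. This is a minimal sketch; the class name and the database check are hypothetical placeholders:

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Picked up by SmallRye Health and exposed via the readiness endpoint,
// which Kubernetes polls through the readiness probe
@Readiness
@ApplicationScoped
public class DatabaseReadyCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return canReachDatabase()
                ? HealthCheckResponse.up("database")
                : HealthCheckResponse.down("database");
    }

    private boolean canReachDatabase() {
        // Hypothetical placeholder: ping a connection pool here
        return true;
    }
}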

Declare your application’s needs and limits

As mentioned before, organizations typically deploy a Kubernetes cluster across several nodes, each with a certain amount of processing power (CPU) and memory (RAM) available. If you do not specify any limits for your application’s pods, they will by default be able to consume as much of the resources of the node they’re running on as they like. This can become problematic when multiple applications are running and start competing for the available resources. When resource starvation occurs on a node, Kubernetes will step in and effectively kill pods on that node. If it’s not able to reschedule the killed pods on a different node, those workloads will not be able to start up anymore, resulting in a degraded user experience (at best).

To avoid these kinds of scenarios, you can leverage the concepts of “requests” and “limits” in Kubernetes. Adding a “request” to your deployment tells the Kubernetes scheduler that your application needs a minimum amount of resources (memory and/or CPU) to work correctly; this helps Kubernetes place your pods appropriately on its nodes. “Limits”, on the other hand, tell Kubernetes that if your application goes beyond a given amount of resource usage, it should kill and restart the pod. This helps avoid situations where your application unexpectedly starts using more resources than you anticipated, e.g. due to a memory leak or another unforeseen buildup of resource usage. Instead of the pod taking up more and more resources and eventually potentially bringing down an entire node or cluster, the “blast radius” of a resource issue is now contained to just one instance.
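
In a raw Deployment manifest, these settings appear in the container spec. A minimal, illustrative fragment (names and values are placeholders):

spec:
  containers:
    - name: my-app                      # hypothetical container name
      image: registry.example.com/my-app:1.0
      resources:
        requests:                       # reserved for the pod by the scheduler
          cpu: 100m                     # 100 millicores = 0.1 CPU
          memory: 128Mi
        limits:                         # hard ceiling for this container
          cpu: 300m                     # CPU use above this is throttled
          memory: 300Mi                 # exceeding this gets the pod OOM-killed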

Adding Limits

Adding limits and requests is therefore a good practice. Your Kubernetes admin might have defined default limits and requests for each pod already. It is however also a good idea for developers to be aware of the (predicted) resource usage of their application and to specify the request and limit values they would like for it.

With Quarkus, you can get these values into the generated Kubernetes manifest by adding them to the application.properties file, e.g.:

quarkus.kubernetes.resources.limits.cpu=300m

quarkus.kubernetes.resources.limits.memory=300Mi
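
Requests can be declared the same way; the values below are illustrative and should be based on your application’s measured usage:

quarkus.kubernetes.resources.requests.cpu=100m

quarkus.kubernetes.resources.requests.memory=128Mi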

Security considerations

No production-ready application and deployment is complete without considering the security implications of such an endeavor. At a minimum you should scan your application’s source code and dependencies for vulnerabilities. An IDE plugin like Dependency Analytics (for VS Code or IntelliJ) gives you feedback while you’re developing your code. You should also integrate code and container scanning in your CI/CD pipeline, and fail the pipeline if critical vulnerabilities are found. Think of code scanning tools like SonarQube, or container scanning tools like Clair and/or Trivy. Your Kubernetes admins or security team should have runtime scanning capabilities installed as well.
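
As a sketch of what such a pipeline gate can look like with Trivy (the image name is a placeholder), a non-zero exit code fails the build when critical vulnerabilities are found:

# Fail the pipeline step if the image contains critical CVEs
trivy image --severity CRITICAL --exit-code 1 registry.example.com/my-app:1.0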

Secrets

You should also make sure to keep sensitive data safe. Passwords and other sensitive information are stored in Kubernetes in the form of “Secrets”. Though authenticated users and service accounts have access to these objects, Secrets can be encrypted at rest, making them less vulnerable to exploitation, unless an attacker somehow gets admin access to the cluster or is able to exploit a container that has viewing privileges on the secrets.

Accessing and using secrets is again straightforward with Quarkus. Adding the “kubernetes-config” extension gives you the ability to interact with Kubernetes configuration objects such as secrets. All you have to do is set the “secrets.enabled” flag to true and specify which secrets you would like to read (the ‘postgresql’ secret in the following example). Quarkus creates the necessary Kubernetes constructs, such as a ServiceAccount, Role and RoleBinding, in the background to allow the application to access the secret.

%prod.quarkus.kubernetes-config.secrets.enabled=true

%prod.quarkus.kubernetes-config.secrets=postgresql
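
The keys inside the secret then become regular configuration properties that you can inject. A minimal sketch, where ‘database-password’ is a hypothetical key in the ‘postgresql’ secret:

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class DatabasePasswordProvider {

    // "database-password" is a hypothetical key inside the 'postgresql' Secret;
    // with the kubernetes-config extension enabled it is read at startup
    @ConfigProperty(name = "database-password")
    String databasePassword;
}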

To further encrypt secrets, there are tools such as Vault and Sealed Secrets.

Observe and measure your application on Kubernetes

Once your application has landed on a Kubernetes instance, you’ll want to keep an eye on how it is behaving and whether your requests and limits are set appropriately. Exposing metrics from your application to an observability stack is a must when you have distributed loads and ephemeral containers that can come and go. From a Java perspective, the OpenTelemetry and Micrometer projects are good solutions for adding observability to your application. With Micrometer, for example, you can expose a “metrics” endpoint on your application that a monitoring tool (e.g. Prometheus) collects metrics from, which you can then search through or turn into graphs and dashboards (e.g. with Grafana). You can also see in detail all the various metrics coming out of the JVM running your application (memory used, statistics related to the garbage collector, etc.). This in turn allows you to proactively make modifications to your code or your deployment manifests.
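
In Quarkus, adding the ‘micrometer-registry-prometheus’ extension exposes such a metrics endpoint, and custom metrics are registered through the MeterRegistry. A minimal sketch; the resource and metric names are illustrative:

import io.micrometer.core.instrument.MeterRegistry;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/hello")
public class GreetingResource {

    private final MeterRegistry registry;

    GreetingResource(MeterRegistry registry) {
        this.registry = registry;
    }

    @GET
    public String hello() {
        // Incremented on every request; scraped by Prometheus
        registry.counter("greetings.total").increment();
        return "hello";
    }
}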

Observability also means being able to access logs in a centralized place and to trace through requests when issues occur. The OpenTelemetry project supports Java and integrates easily into your code to forward traces and logs to a collector, which you can plug into a tracing tool such as Jaeger.
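
With the Quarkus ‘opentelemetry’ extension, pointing the exporter at a collector is a matter of configuration. An illustrative sketch; the endpoint is a placeholder and property names can differ between Quarkus versions, so check the documentation of the version you use:

quarkus.otel.exporter.otlp.traces.endpoint=http://otel-collector:4317

quarkus.application.name=my-app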

Automate your deployments

Setting up container builds and Kubernetes deployments, configuring your application, and adding observability are all important steps. Perhaps most important of all is to automate your application’s configuration and deployment as much as possible. Automation lets you release your application in a smooth, controlled and repeatable manner. It also allows you, your team, and those who come after you to know exactly how the application is built and deployed, and with what configuration.

You should make sure that even your CI/CD tool itself, as well as its pipelines, can be automated. This allows you to (re)create entire stacks with ease and to create new production-grade applications without much hassle. Tekton, for example, is a CI/CD solution that can be fully defined as a set of Custom Resources in Kubernetes, which allows you to automate the creation of pipelines as well as of Tekton instances themselves. In addition, it integrates with signing tools such as Sigstore, with which you can sign not only your artifacts but each task that’s part of your pipelines as well. In a world of software supply chain attacks, this is another invaluable step on your way to productizing applications.
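
Because everything is a Kubernetes resource, pipeline definitions can live in Git next to your other manifests. A minimal, illustrative Tekton Task (the task name and image are placeholders):

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests                 # hypothetical task name
spec:
  steps:
    - name: maven-test
      image: registry.access.redhat.com/ubi9/openjdk-21:latest
      script: |
        ./mvnw test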

GitOps

Finally, GitOps tools such as ArgoCD or FluxCD help you define a desired state for your environment, deployment and configuration, and make sure your Kubernetes environment actually matches this desired state. This way you know exactly what your environment and deployments should look like, and you can see (in your source repository) who changed something, what they changed, and when. GitOps and adjacent tools such as Argo Rollouts also allow you to roll out applications in advanced ways: with them you can use blue/green or canary rollouts and release in a progressive way that minimizes the impact on your users.
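
A minimal, illustrative ArgoCD Application that keeps a cluster in sync with manifests stored in Git (the repository URL and paths are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/my-org/my-app-config.git   # hypothetical repo
    targetRevision: main
    path: k8s                       # folder containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:                      # continuously sync cluster state to Git
      prune: true
      selfHeal: true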

This is just the beginning

Productizing Java applications for Kubernetes can seem like a daunting task, but with some careful consideration and planning it can make developers’ lives easier and vastly more productive. It can also make a huge difference for your organization’s ability to execute and to deliver applications that are faster, more secure and more robust.

This article tried to give you a quick and concise overview as well as some pointers to Open Source projects you could use to build and deploy production-grade, Kubernetes-native Java applications. This should get you well on your way to becoming a good Cloud-Native citizen.

Author: Kevin Dubois

Kevin is a software engineer, author and international speaker with a passion for Open Source, Java (Quarkus), and Cloud Native Development & Deployment practices. He currently works as a developer advocate at Red Hat, where he gets to enjoy working with Open Source projects and improving the developer experience. He previously worked as a (Lead) Software Engineer at a variety of organizations across the world, ranging from small startups to large enterprises and even government agencies.

Kevin is actively involved in Open Source communities, contributing to projects such as Quarkus, Knative, Apache Camel, and Podman (Desktop); and as a member of the Belgian CNCF chapter as well as the Belgian Java User Group.

Kevin speaks English, Dutch, French and Italian fluently and is currently based in Belgium, having lived in Italy and the USA as well.

In his free time you can find Kevin somewhere in the wild hiking, gravel biking, snowboarding or packrafting.
