When December arrives, I always feel the same mix of nostalgia and excitement. The year starts to slow down, calendars fill with end-of-year meetings, and yet this is when the Java community does something uniquely joyful. We show up every day for 24 days and share what we’ve learned, built, discovered, and struggled with. It’s one of my favorite traditions, because it feels like a collective “closing of the year” where we all learn from each other one last time before the holidays.
This year feels particularly special for me. I spent most of the past months writing and publishing Applied AI for Enterprise Java Development, a book that tries to make sense of this AI wave from a developer’s point of view. Not the hype, not the hand-waving, but the real work we do when we integrate LLMs into production systems. It’s been a year of experiments, late-night debugging, mistakes, surprises, and quite a few breakthroughs. And now, opening this Advent Calendar, I can’t help but see how far the Java ecosystem has already come.
Java developers didn’t sit back and wait for AI to “happen to them.”
We did what we always do: we tested, we validated, we measured, we built frameworks, we added structure, and we created patterns that teams can actually use in the real world. Today, Quarkus, LangChain4j, and the broader Java ecosystem make it possible to build AI-infused systems without giving up the reliability and discipline we depend on.
And that’s why I’m thrilled to start this calendar with you.
For the next 24 days, you’ll see creative ideas, deep dives, practical guides, experiments, and some truly unexpected topics, all of them written by people who care about this craft as much as you do.
So grab your favorite hot drink, take a breath, and enjoy the first door of the 2025 Java Advent Calendar.
There’s a lot of good stuff waiting behind the others.
Models as Services: The Simplest Pattern Still Matters
The official LangChain4j tutorials start with the simplest possible shape: a chat model that you call like a remote service. That’s intentional. It reinforces the pattern Java developers already understand: the model is an external dependency with its own behavior, latency, and failure modes.
Here is the Quarkus version of the official chat example (adapted from the “Chat with LangChain4j” and “AI Services” tutorials):
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface Assistant {
    String chat(String message);
}
Quarkus wires the model for you through configuration:
# application.properties
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini
You inject it exactly like you inject any other CDI bean:
@Path("/chat")
public class ChatResource {

    @Inject
    Assistant assistant;

    @GET
    public String chat(@QueryParam("q") String q) {
        return assistant.chat(q);
    }
}
This example looks trivial, but it captures a foundational rule:
the model is not embedded; the model is a service.
And this is where Java already shines: retries, timeouts, circuit breakers, metrics, structured logs. All of them apply cleanly to LLM calls.
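To make that concrete, here is a framework-free sketch of the idea: treat the model call as a `Supplier` and wrap it with a per-attempt timeout and a retry budget. The `callWithRetry` helper and the fake flaky model below are illustrative only; in a real Quarkus application you would reach for MicroProfile Fault Tolerance annotations such as `@Retry`, `@Timeout`, and `@CircuitBreaker` instead of hand-rolling this.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class ResilientModelCall {

    // Calls the "model" with a per-attempt timeout, retrying on failure.
    // Illustrative only; in Quarkus, MicroProfile Fault Tolerance does this declaratively.
    static String callWithRetry(Supplier<String> model, int maxAttempts, Duration timeout) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Future<String> future = executor.submit(model::get);
                try {
                    return future.get(timeout.toMillis(), TimeUnit.MILLISECONDS);
                } catch (Exception e) {
                    future.cancel(true);
                    last = e;
                }
            }
            throw new RuntimeException("Model call failed after " + maxAttempts + " attempts", last);
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Fake model that fails on the first call, then answers
        int[] calls = {0};
        Supplier<String> flaky = () -> {
            if (calls[0]++ == 0) throw new IllegalStateException("transient failure");
            return "Hello from the model";
        };
        System.out.println(callWithRetry(flaky, 3, Duration.ofSeconds(1)));
    }
}
```

The point is not the loop itself but the shape: the model sits behind the same resilience boundary as any other remote dependency.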
Practical Retrieval-Augmented Generation
The official LangChain4j RAG tutorial shows the basic pattern:
1. Split documents into text segments
2. Embed those segments
3. Store them in an embedding store
4. At query time:
   - embed the question
   - retrieve relevant segments
   - combine into a prompt
   - send to the model
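The retrieval step above boils down to similarity search over vectors. As a toy, framework-free sketch (the 3-dimensional "embeddings" and the `findRelevant` helper below are invented for illustration; a real embedding model produces vectors with hundreds of dimensions, and LangChain4j's embedding stores do this for you):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TinyRetriever {

    // Cosine similarity between two embedding vectors
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Returns the texts of the k segments most similar to the query embedding
    static List<String> findRelevant(Map<String, double[]> store, double[] query, int k) {
        return store.entrySet().stream()
                .sorted((e1, e2) -> Double.compare(
                        cosine(e2.getValue(), query), cosine(e1.getValue(), query)))
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // Toy embeddings keyed by segment text
        Map<String, double[]> store = new LinkedHashMap<>();
        store.put("Java 21 is the current LTS release.", new double[]{0.9, 0.1, 0.0});
        store.put("Quarkus supports native compilation.", new double[]{0.1, 0.9, 0.0});
        store.put("The JVM has a JIT compiler.",          new double[]{0.5, 0.2, 0.7});

        double[] questionEmbedding = {0.8, 0.2, 0.1}; // stands in for the embedded question
        System.out.println(findRelevant(store, questionEmbedding, 2));
    }
}
```

Everything else in the RAG pipeline is plumbing around this ranking step.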
Here is the Quarkus version of that exact flow:
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.model.embedding.EmbeddingModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class RagService {

    @Inject
    EmbeddingStore<TextSegment> store;

    @Inject
    EmbeddingModel embeddingModel;

    public String buildPrompt(String question) {
        // 1. Embed the question
        Embedding queryEmbedding = embeddingModel.embed(question).content();

        // 2. Retrieve relevant context
        var matches = store.findRelevant(queryEmbedding, 3);

        // 3. Combine retrieved text into a simple prompt
        StringBuilder sb = new StringBuilder();
        for (var match : matches) {
            sb.append(match.embedded().text()).append("\n\n");
        }
        sb.append("Question: ").append(question);
        return sb.toString();
    }
}
And the usage in a resource:
@Path("/help")
public class HelpResource {

    @Inject
    RagService rag;

    @Inject
    Assistant assistant;

    @GET
    public String help(@QueryParam("q") String question) {
        String prompt = rag.buildPrompt(question);
        return assistant.chat(prompt);
    }
}
This pattern is deliberately minimal: simple retrieval, simple prompt construction, no ceremony.
And importantly: you now have a deterministic pipeline around the model. That pipeline is what you test, observe, and control.
Guardrails: Safeguarding Input and Output
LangChain4j provides two clear mechanisms:
- @InputGuardrails
- @OutputGuardrails
Both integrate directly with Quarkus.
Here’s the Quarkus version:
Input Guardrail
import io.quarkiverse.langchain4j.guardrails.InputGuardrail;
import io.quarkiverse.langchain4j.guardrails.InputGuardrailResult;
import dev.langchain4j.data.message.UserMessage;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class NoEmptyInputGuardrail implements InputGuardrail {

    @Override
    public InputGuardrailResult validate(UserMessage userMessage) {
        if (userMessage.singleText().isBlank()) {
            return failure("Input must not be empty");
        }
        return success();
    }
}
Output Guardrail
import io.quarkiverse.langchain4j.guardrails.OutputGuardrail;
import io.quarkiverse.langchain4j.guardrails.OutputGuardrailResult;
import dev.langchain4j.data.message.AiMessage;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class JsonMustContainSummary implements OutputGuardrail {

    @Override
    public OutputGuardrailResult validate(AiMessage responseFromLlm) {
        if (!responseFromLlm.text().contains("summary")) {
            // Asks the model to try again, which is what the retry config below drives
            return reprompt("Output missing 'summary' field",
                    "Include a 'summary' field in your answer");
        }
        return success();
    }
}
Wiring them into the AI service
import io.quarkiverse.langchain4j.RegisterAiService;
import io.quarkiverse.langchain4j.guardrails.InputGuardrails;
import io.quarkiverse.langchain4j.guardrails.OutputGuardrails;

@RegisterAiService
public interface StructuredAssistant {

    @InputGuardrails(NoEmptyInputGuardrail.class)
    @OutputGuardrails(JsonMustContainSummary.class)
    String answer(String question);
}
And Quarkus adds configuration-driven retries:
quarkus.langchain4j.guardrails.max-retries=2
Guardrails feel like an entirely new concept to many Java developers, but the structure is familiar: it’s validation, just on the other side of the API boundary.
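Stripped of the framework, an output guardrail with retries is just a validate-and-reprompt loop. Here is a minimal sketch of that shape; the `OutputValidator` interface and `callGuarded` helper are invented for illustration and are not the LangChain4j API:

```java
import java.util.function.Function;

public class GuardedCall {

    // Illustrative validator contract, not the LangChain4j interface
    interface OutputValidator {
        boolean isValid(String output);
    }

    // Calls the model, retrying when validation fails, mirroring what
    // quarkus.langchain4j.guardrails.max-retries does declaratively
    static String callGuarded(Function<String, String> model, String prompt,
                              OutputValidator validator, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String output = model.apply(prompt);
            if (validator.isValid(output)) {
                return output;
            }
        }
        throw new IllegalStateException("Model output failed validation after retries");
    }

    public static void main(String[] args) {
        // Fake model that forgets the required field on the first call
        int[] calls = {0};
        Function<String, String> model = p ->
                calls[0]++ == 0 ? "{}" : "{\"summary\": \"All good\"}";

        String result = callGuarded(model, "Summarize this",
                o -> o.contains("\"summary\""), 2);
        System.out.println(result);
    }
}
```

The framework versions add proper failure reporting and configuration, but the control flow is this simple.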
Testing & Evaluation
If we combine general testing practice with the requirements that large language models bring, we need to think about:
- testing deterministic components
- testing guardrails
- testing model interaction in a black-box fashion
- using curated prompt sets
- evaluating output structure, not exact wording
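The last two points can be sketched framework-free: run a curated list of prompts through the assistant and score structural expectations rather than exact wording. The `EvalCase` record, the fake assistant, and the pass-rate metric below are illustrative assumptions, not a LangChain4j facility:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class PromptSetEval {

    // One curated prompt plus a structural expectation on the output
    public record EvalCase(String prompt, Predicate<String> expectation) {}

    // Fraction of cases whose output meets its structural expectation
    static double passRate(Function<String, String> assistant, List<EvalCase> cases) {
        long passed = cases.stream()
                .filter(c -> c.expectation().test(assistant.apply(c.prompt())))
                .count();
        return (double) passed / cases.size();
    }

    public static void main(String[] args) {
        // Fake assistant standing in for a real model call
        Function<String, String> assistant = prompt ->
                prompt.contains("JSON") ? "{\"summary\": \"ok\"}" : "Plain text answer";

        List<EvalCase> cases = List.of(
                // Structural checks, not exact-wording checks
                new EvalCase("Answer as JSON with a summary field",
                        out -> out.contains("\"summary\"")),
                new EvalCase("Say hello",
                        out -> !out.isBlank())
        );

        System.out.println("Pass rate: " + passRate(assistant, cases));
    }
}
```

Tracking a pass rate over a fixed prompt set gives you a regression signal even though individual model answers vary.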
Here are the Quarkus versions:
Testing Guardrails
@QuarkusTest
public class GuardrailTest {

    @Inject
    StructuredAssistant assistant;

    @Test
    void emptyInputShouldBeRejected() {
        // Guardrail failures surface as a runtime exception from the AI service
        assertThrows(RuntimeException.class, () -> assistant.answer(" "));
    }
}
Testing RAG logic (deterministic)
@QuarkusTest
public class RagTest {

    @Inject
    RagService rag;

    @Inject
    EmbeddingStore<TextSegment> store;

    @Inject
    EmbeddingModel embeddingModel;

    @Test
    void retrievalShouldReturnRelevantText() {
        TextSegment segment = TextSegment.from("Java 21 is the current LTS release.");
        store.add(embeddingModel.embed(segment).content(), segment);

        String prompt = rag.buildPrompt("What is the current Java LTS?");
        assertTrue(prompt.contains("Java 21"));
    }
}
Black-box testing of the assistant
@QuarkusTest
public class AssistantIT {

    @Inject
    Assistant assistant;

    @Test
    void modelShouldProduceNonEmptyAnswer() {
        String response = assistant.chat("Hello!");
        assertFalse(response.isBlank());
    }
}
Don’t test phrasing; test structure and expectations!
This keeps tests stable while still verifying quality and behavior.
A Festive Opening for the 24 Days Ahead
That brings us to why this article exists: to open another year of the Java Advent Calendar.
December has a special rhythm in the Java community. The year is winding down, code freezes are happening, and teams start reflecting on what actually mattered. And into that atmosphere comes a wave of articles. 24 voices, 24 perspectives, 24 stories from across our ecosystem.
- Some will dive deep into AI.
- Some will explore the core of the JVM.
- Some will share practical lessons from the year’s real-world battles.
- Some will remind us why Java continues to thrive, evolve, and surprise us.
This opening post is just the prologue.
Over the next three weeks, you’ll see what people across the community are experimenting with, breaking, fixing, and learning. You’ll see new tools, new techniques, old wisdom rediscovered, and perhaps a few things you’ll want to try during the quieter days between the holidays.
Whatever this season means to you, whether it’s festive, reflective, or simply a much-needed breather, I hope these articles offer inspiration and maybe a spark for your next project. Java has always been about community, and the Advent Calendar remains one of the warmest reminders of that.
So let’s open the first door together.
Twenty-three more to go.
Let’s enjoy the season, and write some good Java along the way.
Author: Markus Eisele
Technology leader focused on Java and open-source. Java Champion with 20+ years guiding monolith-to-microservices transitions. Developer Advocate and Senior Product Manager at IBM Research.
