
A Sneak Peek at The Java Performance Toolbox

Performance, performance, performance – applications need performance! We hear, see, and breathe that with every change in Java applications. But making such changes in our code should be the result of careful analysis: knowing what happened to the application and its environment during a specific period of time. Understanding what is going on inside your Java application is crucial to improving its performance. A performance analysis needs tools, and this article gives you tips on using JDK tools like jcmd, jconsole, jstat, jmap, etc., to gain insights into classes and threads, perform live GC analysis, or process heap dumps.

Command-line monitoring tools shipped with every JDK can help you get a better understanding of:

  • Basic virtual machine (VM), class, and thread information
  • Live garbage collection (GC) analysis
  • Capturing heap dumps for further processing

Let’s explore how tools like jps, jcmd, jinfo… can help you with that.

Querying The Running Java Processes

When looking into performance, it is helpful to first get a list of the Java processes running on the target host. You can list the instrumented JVMs by using a tool named jps:

jps #list JVMs on localhost
23296 Jps
23276 example-nima-reactive.jar
23294 NimaMain

The right column of the output shows the class name or JAR file name (and, depending on the options passed, the arguments to the application or to the virtual machine), while the left column contains the local virtual machine identifier (lvmid). The lvmid is typically the same as the operating system process ID.

If you want to use the jps output in your scripts, you can add the -q option to produce only the JVM identifiers. You can also run jps against a remote host by providing a host identifier in URI syntax, as shown in the example after the next paragraph.

Ensure that the local host has the appropriate permissions to access the remote host and that a jstatd server is running on the target machine, with an internal RMI registry bound to an open port.
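For instance, you could combine these options as follows (the host name below is hypothetical and only illustrates the remote syntax):

jps -q #print only the lvmids of the local JVMs
jps -l remotehost.example.com #list JVMs on a remote host running jstatd, with fully qualified class or JAR names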

The lvmid can further help you find out the uptime of the JVM by running the following:

jcmd 23276 VM.uptime
23276:
6642.570 s

When querying details about a Java process, you can also run jcmd using the main class name:

jcmd NimaMain VM.uptime
23294:
32839.931 s

In the following section, we will look more closely at how to find JVM information and use it to tune JVM flags or detect memory issues.

Fine Tuning JVM Flags and Diagnosing Memory Leaks

jcmd is a utility that sends diagnostic commands to a running JVM, and you should use it on the same machine the JVM is running on. This powerful CLI tool can list all the JVM processes running on the local machine, show basic VM information such as the tuning flags in use or the JVM system properties, report statistics about heap usage, manage a flight recording, and more.
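A quick way to explore what is available (the process ID 23276 comes from the earlier jps output) is to run jcmd without arguments and then ask a specific JVM which diagnostic commands it supports:

jcmd #list the JVM processes running on the local machine
jcmd 23276 help #list the diagnostic commands available for this JVM
jcmd 23276 VM.system_properties #print the JVM system properties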

Overusing jcmd to send diagnostic commands can affect the performance of the VM. 

You have probably seen the phrase “reduce the memory footprint” and wondered how to determine the current memory usage. For a Java application, the heap is the most significant memory consumer, but the JVM also uses memory for its internal operations; this non-heap memory is called native memory. You can use jcmd to find out more details about native memory and, based on those, tune its usage or detect memory leaks.

You can find out how much native memory you are using by running the following: 

jcmd 23276 VM.native_memory
23276:
Native memory tracking is not enabled

If native memory tracking is not enabled, you will receive the message shown above. jcmd can show you all the flags in effect for a JVM, which lets you check whether NativeMemoryTracking is enabled:

jcmd 23276 VM.flags #show the tuning flags and their values

If you want to inspect an individual flag’s value, you can use another JDK CLI tool called jinfo. By running the following jinfo command, you can print the value of a specific flag for a given Java process:

jinfo -flag NativeMemoryTracking 23276 
-XX:NativeMemoryTracking=off

To enable native memory tracking (NMT), you should restart your application with an additional VM argument, -XX:NativeMemoryTracking=summary (or =detail for a more fine-grained report).
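As a minimal sketch, assuming the application comes from the example-nima-reactive.jar file listed earlier by jps, the relaunch could look like this:

java -XX:NativeMemoryTracking=summary -jar example-nima-reactive.jar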

Please consider that having NMT enabled can add application performance overhead.

Once you have enabled native memory tracking, the output of
jcmd 23276 VM.native_memory #print native memory usage

will include a usage summary of JVM native memory categories such as:

  • Class: the memory used by the JVM to store class metadata.
  • Thread: the memory used by application threads.
  • Code: the memory used to store JIT-compiled code.
  • Compiler, GC, and other internal space usage.

One of the easiest ways to identify a memory leak in your JVM is to run jcmd with the VM.native_memory option and first record a baseline:

jcmd 23276 VM.native_memory baseline

This creates a snapshot of the current memory usage that you can compare against later usage with the summary.diff option:

jcmd 23276 VM.native_memory summary.diff

If the diff report shows a significant increase in memory usage in areas like Heap, Thread, Code, or Class, you may be dealing with a memory leak.

Integrating JVM Monitoring Tools

Keeping track of the JVM flags or changes in native memory usage can be daunting, but you can easily integrate jcmd with Java Flight Recorder (JFR). JFR is a profiling and event collection framework built into the JDK, and you can use it to gather low-level details about how the JVM and Java applications behave. You can generate a JFR file using jcmd via:

jcmd 23276 JFR.start name=example_recording delay=10s duration=20s filename=./examplerecording.jfr

The previous command creates a sample JFR recording file named examplerecording.jfr in the working directory of the target JVM (often the directory from which the application JAR was launched). The recording starts 10 seconds after the command is issued, captures events for 20 seconds, and uses the default JFR settings. To stop the recording, simply run:

jcmd 23276 JFR.stop name=example_recording
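If you are unsure whether a recording is still running, you can also ask the JVM to list its flight recordings and their status:

jcmd 23276 JFR.check #show the status of the flight recordings in this JVM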

Moreover, starting with JDK 14, you can use JFR Event Streaming to integrate JFR with different metrics APIs and send JVM monitoring data directly to your monitoring service of choice.
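To illustrate the idea, here is a minimal sketch that uses the jdk.jfr.consumer.RecordingStream API (available since JDK 14) to stream events from inside the running application; the class name, the selected events, and the thresholds are illustrative choices, not requirements of JFR:

import java.time.Duration;

import jdk.jfr.consumer.RecordingStream;

public class JfrStreamingSketch {
    public static void main(String[] args) {
        // Stream selected JFR events emitted by the JVM this code runs in.
        try (RecordingStream rs = new RecordingStream()) {
            // Sample the overall CPU load once per second.
            rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
            // Report garbage collections that take longer than 10 ms.
            rs.enable("jdk.GarbageCollection").withThreshold(Duration.ofMillis(10));

            // In a real setup you would hand these values to a metrics API
            // instead of printing them.
            rs.onEvent("jdk.CPULoad", event ->
                    System.out.println("JVM CPU load: " + event.getFloat("jvmSystem")));
            rs.onEvent("jdk.GarbageCollection", event ->
                    System.out.println("GC duration: " + event.getDuration()));

            // Blocks and keeps streaming until the stream is closed.
            rs.start();
        }
    }
}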

How about visualizing the threads run by Java applications? jconsole displays real-time information about the number of threads running in an application. Just run jconsole in a terminal window, and the desktop tool will pop up:

[Figure: jconsole Threads view]
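If you prefer to attach the GUI to one specific process, jconsole also accepts an lvmid as an argument (the pid below is the NimaMain process from the earlier jps output):

jconsole 23294 #attach the monitoring GUI directly to this process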

If you would like to take a closer look at the stacks of the running threads, the jstack CLI tool can help you with that:

jstack 23276 #print Java stack traces of the threads in this process
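When threads appear stuck, you can additionally include lock information in the dump:

jstack -l 23276 #also print additional information about locks and synchronizers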

How about monitoring garbage collector activity? jcmd has capabilities that include performing GC operations as well as collecting heap dumps:

jcmd 23276 GC.heap_dump myheapdump.hprof #generate a JVM heap dump in the specified file
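jcmd can also trigger other GC-related operations; for example, you can request a full garbage collection or print a class histogram without taking a full heap dump:

jcmd 23276 GC.run #request a full garbage collection
jcmd 23276 GC.class_histogram #print a class histogram of the objects on the heap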

Moreover, you can obtain statistics about garbage collectors by running jstat and printing its output with the -gcutil option:

jstat -gcutil 23276 250 10 #take 10 samples every 250 ms

If you are interested in printing heap summaries or generating a heap dump, you should give jmap a try:

jmap -histo 23276 #print a histogram of the Java object heap for this process
jmap -dump:format=b,file=heap.bin 23276 #generate a heap dump in the heap.bin file

Conclusion

At the time of writing, the latest JDK release is 19, and it ships with more than 20 tools. These tools evolve rapidly along with the Java landscape, and newer ones may supersede some of them. Gathering details using the various JDK CLI tools allows you to obtain point-in-time or time-distributed information about the performance of the JVM and your Java applications. You can correlate those pieces of information with JVM and infrastructure metrics collected by tools like Datadog or Prometheus. These actions help you understand how your application performed with specific JVM and infrastructure configurations, so you know where to apply improvements.

Author: Ana-Maria Mihalceanu

Ana is a Java Champion Alumni, Developer Advocate, guest author of the book “DevOps tools for Java Developers”, and a constant adopter of challenging technical scenarios involving Java-based frameworks and multiple cloud providers. She actively supports technical communities’ growth through knowledge sharing and enjoys curating content for conferences as a program committee member. To learn more about/from her, follow her on Twitter @ammbra1508 or on Mastodon @ammbra1508.mastodon.social.
