Learn how to investigate Clojure performance with low-overhead runtime evidence, using JDK Flight Recorder, JDK Mission Control, VisualVM, and deep profilers such as YourKit at the right stage of the workflow.
Profiling: Collecting runtime evidence about where an application spends time, allocates memory, blocks threads, or retains data so optimization work targets the real bottleneck.
The first rule of Clojure performance work is still the oldest one: measure before you optimize. The difference on the JVM is that "measure" can mean several different things: a microbenchmark of one function, a sampling profile of a running process, a flight recording of the whole JVM, or an analysis of what the heap retains.
Each tool answers a different question. Mixing them up is how teams end up benchmarking the wrong function or attaching a heavyweight profiler when a short flight recording would have been enough.
For most real systems, the first useful diagnostics layer is a sequence: observe the symptom in metrics or logs, capture a short flight recording, and only then reach for a deeper profiler.
That sequence matters because a profiler is most useful after you already know which symptom you are trying to explain: high latency under load, growing memory use, stalled threads, or GC churn.
Without that story, a profiler can produce a lot of detail and very little clarity.
For modern JVM work, JDK Flight Recorder (JFR) and JDK Mission Control (JMC) are often the best first deep step because they are designed for low-overhead runtime diagnostics.
They are especially good for low-overhead, always-on evidence: CPU samples, allocation and GC events, lock contention, and thread activity, recorded continuously with minimal impact on the running application. For example, to capture a 60-second recording at startup:

java \
  -XX:StartFlightRecording=filename=profile.jfr,duration=60s \
  -jar app.jar
Afterward you can inspect the recording in JDK Mission Control or summarize it with the jfr command-line tools. This is often enough to tell you whether the problem is CPU-bound code, allocation and GC pressure, lock contention, or time spent blocked on I/O.
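If the process is already running, the same evidence can be collected without a restart. A minimal sketch using the standard jcmd and jfr tools (the PID is a placeholder, and jdk.ExecutionSample is the JDK's standard method-sampling event):

```shell
# Start a 60-second recording on a running JVM (replace 12345 with the real PID).
jcmd 12345 JFR.start name=profile duration=60s filename=profile.jfr

# Once the file exists, get a quick overview of recorded event counts.
jfr summary profile.jfr

# Drill into one event type, e.g. sampled execution stacks.
jfr print --events jdk.ExecutionSample profile.jfr
```

The summary alone often answers the "which category of problem is this?" question before any GUI tooling is opened.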
VisualVM remains useful when you want quick live access to thread dumps, heap dumps, CPU and memory sampling, and basic JVM telemetry such as heap usage and loaded classes.
It is convenient for local diagnosis and staging environments because it gives you a broad operational snapshot quickly. That makes it a good companion when the question is simply "what is this JVM doing right now?"
It is less about long-running production evidence and more about rapid interactive inspection.
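For a local process VisualVM attaches by PID with no setup; for a staging host, one common approach is exposing a JMX endpoint at startup. A sketch using the standard JMX system properties (the port is arbitrary, and disabling authentication and SSL is only reasonable on a trusted network):

```shell
# Expose a JMX endpoint so VisualVM can connect remotely to this JVM.
java \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar app.jar
```

In VisualVM, add the host under "Remote" and connect to port 9010.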
YourKit is strongest when you need precise allocation and object-retention analysis, CPU tracing with exact call counts, and fine-grained inspection of individual call trees.
That makes it especially useful once you have already narrowed the problem to a small part of the system and need better local visibility than generic tooling gives you.
The key is to use it late enough that the extra detail actually answers a concrete question.
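Attaching a deep profiler usually means loading its native agent at startup. A sketch assuming a YourKit install on Linux (the install path and port are assumptions; check your YourKit version's documentation for the exact agent options):

```shell
# Load the YourKit profiling agent and listen for the profiler UI on port 10001.
java \
  -agentpath:/opt/yourkit/bin/linux-x86-64/libyjpagent.so=port=10001 \
  -jar app.jar
```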
Tools such as criterium are still valuable, but only after you have isolated a local code path. They answer questions like: is this function faster after the change, and by how much, once JIT warmup and GC noise are accounted for?
They do not tell you whether the whole system is slow because of GC pressure, lock contention, queueing, or a slow downstream dependency.
So the right order is usually: observe the symptom, capture a flight recording, profile the narrowed area if needed, and only then microbenchmark the isolated code path.
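Once a hot path is isolated, a criterium session is short. A minimal sketch, assuming criterium is on the classpath and with sum-to standing in for a hypothetical function identified by the profiler:

```clojure
(require '[criterium.core :as crit])

;; Hypothetical hot path isolated by an earlier flight recording.
(defn sum-to [n]
  (reduce + (range n)))

;; quick-bench handles JIT warmup and repeated sampling, then reports
;; mean execution time with statistical bounds; a single timed call cannot.
(crit/quick-bench (sum-to 1000000))
```

quick-bench prints its report to *out*; criterium's full bench runs longer and is the better choice when the numbers will justify a design decision.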
Clojure-specific performance profiles often surface a familiar set of issues: reflective interop calls, boxed numeric math, accidental realization of lazy sequences, and blocking work inside core.async or thread-pool workflows. That means the best profiling outcome is rarely "rewrite everything." It is usually a narrow, defensible change in one hot path.
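Two of these issues are cheap to check directly in code. A minimal sketch (the function names are illustrative): reflection warnings surface untyped interop, and type hints keep numeric loops in primitive math:

```clojure
;; Make the compiler report every reflective interop call it emits.
(set! *warn-on-reflection* true)

;; Without the ^String hint, (.length s) would resolve reflectively at runtime.
(defn title-length [^String s]
  (.length s))

;; Primitive long hints keep this loop in unboxed arithmetic.
(defn sum-squares ^long [^long n]
  (loop [i 0 acc 0]
    (if (< i n)
      (recur (inc i) (+ acc (* i i)))
      acc)))
```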
Start with the least disruptive evidence that can answer the next question.
Benchmark results are helpful only after the real bottleneck is localized.
If the symptom is vague, the results usually stay vague too.
A hot method can still be downstream of a queueing, allocation, or dependency issue.
Start with system symptoms, then collect the lightest runtime evidence that can explain them. Use JFR and JMC as strong defaults for modern JVM diagnosis, VisualVM for fast live inspection, and deeper profilers such as YourKit when you need richer local analysis. Bring in microbenchmarks only after the hot path is already clear. In Clojure, profiling works best when it narrows the problem to a small, measurable design choice.