Code Coverage Analysis with Cloverage

Learn how to use Cloverage and coverage reports honestly in Clojure projects, including what coverage can and cannot tell you and how to use thresholds without turning them into theater.

Code coverage: A measurement of which parts of the code were executed during a test run. It shows reach, not correctness.

Cloverage can be useful in Clojure projects, but only if the team treats coverage as a diagnostic signal rather than a moral score. A high number can still hide weak assertions. A lower number can still be acceptable if the unexecuted code is low-risk or hard to test honestly.
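To see how a high number can hide weak assertions, consider this sketch. The namespace and function are hypothetical; the point is that Cloverage will report every form of `discounted-price` as covered even though the test cannot catch the bug it contains:

```clojure
(ns example.discount-test
  (:require [clojure.test :refer [deftest is]]))

(defn discounted-price
  "Apply a percentage discount. Bug: nothing caps pct at 100."
  [price pct]
  (- price (* price (/ pct 100.0))))

;; Every form in discounted-price executes during this test, so coverage
;; reads 100% here -- yet the assertion is too weak to notice that
;; (discounted-price 100 150) returns -50.0. The test still passes.
(deftest discount-runs
  (is (number? (discounted-price 100 150))))
```

Coverage would flag this namespace as fully exercised; only reading the assertions reveals how little is actually checked.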

What Coverage Can and Cannot Tell You

Coverage is good for questions like:

  • which namespaces are barely exercised?
  • which branches never ran?
  • did the new tests even reach the code they claim to protect?

Coverage is bad for questions like:

  • is the behavior correct?
  • are the assertions meaningful?
  • will the system survive integration failures?

That distinction is the whole game. Treat coverage as evidence about reach, not proof about quality.
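The "which branches never ran" question is where form-level coverage earns its keep. In this hypothetical rate function, a suite that only ever tests light parcels leaves the `:else` branch unexecuted, and Cloverage's report will show exactly which form never ran:

```clojure
(ns example.shipping)

;; Illustrative rate table. A test suite that only exercises weights
;; up to 10 kg never reaches the :else branch, and form coverage
;; will highlight that branch as unexecuted.
(defn shipping-rate [weight-kg]
  (cond
    (<= weight-kg 1)  5.0
    (<= weight-kg 10) 8.0
    :else             15.0))
```

Line coverage alone can miss this, since the `cond` occupies few lines; the form-level view is what makes the unreached branch visible.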

Where Cloverage Fits

Cloverage is useful when you want:

  • a quick map of neglected code
  • visibility into branches that tests never reach
  • a CI guardrail against accidental coverage collapse

In legacy Leiningen-based projects, the plugin path is still straightforward. In newer CLI-first projects, teams often wrap coverage collection in a dedicated script, alias, or build task so it fits their existing workflow. The exact command matters less than keeping the workflow repeatable.
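For a CLI-first project, one common wrapping is a `deps.edn` alias that runs Cloverage's main entry point. This is a sketch; the alias name `:coverage`, the paths, and the pinned version are project-specific choices:

```clojure
;; deps.edn (fragment) -- assumed alias name and paths
{:aliases
 {:coverage
  {:extra-deps {cloverage/cloverage {:mvn/version "1.2.4"}}
   :main-opts  ["-m" "cloverage.coverage"
                "-p" "src"    ; source paths to instrument
                "-s" "test"]}}} ; test paths to run
```

Run it with `clojure -M:coverage`. In a Leiningen project the equivalent is adding `lein-cloverage` to `:plugins` and running `lein cloverage`; either way, the goal is one repeatable command the whole team uses.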

Read the Report Like an Engineer

A useful coverage review asks:

  • is this low-coverage area risky?
  • is it dead code?
  • is it generated or wiring-heavy code that should not dominate attention?
  • are we missing assertions or missing execution?

If a business-critical rules namespace is barely covered, that is important. If coverage is lower in an infrequently used thin adapter, the appropriate response may be different.
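When reviewing, it helps to narrow the report to the area under question rather than scanning the global summary. Cloverage supports namespace filtering via `--ns-regex`; the alias name and regex below are illustrative:

```shell
# Generate an HTML report focused on the (hypothetical) rules namespaces.
# Assumes a :coverage alias that invokes cloverage.coverage.
clojure -M:coverage --ns-regex 'myapp\.rules\..*' --html
```

A focused run like this turns the review into a question about one risk area instead of a debate about one global percentage.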

Thresholds Should Be Guardrails, Not Theater

Coverage thresholds can help, but only if they are used carefully.

Reasonable uses:

  • prevent large accidental regressions
  • keep obviously neglected areas visible
  • enforce a minimum standard for high-risk modules

Bad uses:

  • treating one global percentage as the quality target
  • encouraging pointless tests just to move the number
  • blocking useful refactors over tiny coverage fluctuations

A threshold should guide attention, not replace judgment.
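As a concrete guardrail, Cloverage can fail the run when total coverage falls below a floor via `--fail-threshold`. The number is a team choice, and it works best set low enough to catch collapse rather than to chase a target; the alias is assumed from earlier setup:

```shell
# CI step: fail the build only if coverage drops below the floor.
clojure -M:coverage --fail-threshold 80
```

Because the command exits nonzero below the floor, CI blocks large accidental regressions without turning every small fluctuation into a build failure.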

Coverage Review Should Follow Risk

The best teams review coverage by risk category:

  • core domain logic
  • parsing and validation
  • persistence and integration boundaries
  • low-risk glue code

This keeps energy focused on the code where untested behavior is most dangerous.

Common Mistakes

Coverage work goes wrong when teams:

  • optimize for percentage instead of protection
  • ignore branch behavior and look only at the headline number
  • count generated or low-value code the same as critical rules
  • stop once the threshold passes

Good coverage practice creates better questions. Bad coverage practice creates vanity metrics.

Key Takeaways

Use Cloverage to see where tests reach and where they do not. Then decide what matters by risk, not by raw percentage. Coverage is useful as a guide for investigation, but it is never the same thing as correctness.

Revised on Thursday, April 23, 2026