Learn when Clojure's `memoize` is enough, when it is too blunt, and how to think about key shape, eviction, staleness, and concurrency before turning repeated work into cached state.
Memoization: Caching a function result so repeated calls with the same arguments can reuse prior work instead of recomputing it.
Memoization is attractive because it looks almost free:
```clojure
(def fast-fib (memoize slow-fib))
```
But real caching decisions are never free. Once a cache exists, you now own:

- the shape of its keys,
- its eviction policy,
- the staleness of its entries,
- its behavior under concurrency.
So the right question is not “can I memoize this?” It is “is caching the right model for this work?”
`memoize` Is a Narrow Tool

Clojure’s built-in `memoize` is process-local and unbounded. That makes it excellent for a small set of cases and risky for many others.
It is strongest when:

- the function is pure and deterministic,
- its arguments form small, compact keys,
- the set of distinct keys is naturally bounded,
- the same keys recur often enough to pay for the storage.

It is much weaker when:

- results depend on time, I/O, or mutable external state,
- the key space is unbounded, so memory grows without limit,
- cached values can go stale and staleness matters,
- the same key rarely appears twice.
In other words, built-in memoize is best for deterministic computational reuse, not as a drop-in application cache for everything expensive.
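When reuse is real but the key space is not naturally bounded, a bounded cache is usually the better model. A minimal sketch, assuming the `org.clojure/core.memoize` library is on the classpath (`slow-square` is a hypothetical stand-in for expensive work):

```clojure
;; Assumes org.clojure/core.memoize is available as a dependency.
(require '[clojure.core.memoize :as memo])

(defn slow-square [x]
  (Thread/sleep 50)                        ; stand-in for expensive work
  (* x x))

;; Unbounded: every distinct argument is retained forever.
(def squared (memoize slow-square))

;; Bounded: keeps at most 32 entries, evicting least-recently-used.
(def squared-lru (memo/lru slow-square :lru/threshold 32))

;; Time-bounded: entries expire 60 seconds after being written.
(def squared-ttl (memo/ttl slow-square :ttl/threshold 60000))
```

The call sites look identical; only the retention policy changes, which is exactly the decision built-in `memoize` does not let you make.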
A cache keyed by a giant nested request map is usually worse than one keyed by a stable, compact identifier. Bad key shape is one of the fastest ways to get a cache that barely hits and still consumes memory. A better key extracts only the fields that actually determine the result:
```clojure
(defn report-key [{:keys [tenant-id report-date locale]}]
  [tenant-id report-date locale])
This kind of compact key is easier to reason about than caching directly on a huge request map with irrelevant fields.
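One way to apply this is to cache on the extracted fields rather than the map itself. A hypothetical sketch, where `run-report*` stands in for the real expensive call:

```clojure
;; Hypothetical: run-report* stands in for the real expensive work.
(defn run-report* [tenant-id report-date locale]
  ;; ... expensive query work elided ...
  {:tenant tenant-id :date report-date :locale locale})

;; The cache key is exactly the three fields that determine the result.
(def run-report-cached (memoize run-report*))

;; Callers still pass the whole request map, but irrelevant fields
;; never reach the cache, so equivalent requests actually hit.
(defn run-report [{:keys [tenant-id report-date locale]}]
  (run-report-cached tenant-id report-date locale))
```

Two requests that differ only in fields the report ignores now share one cache entry instead of producing two misses.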
Memoization is easiest when the function is timeless. Many business functions are not. If data changes underneath the function, the cache needs an invalidation policy: expire entries after a time-to-live, invalidate them explicitly on writes, or fold a version stamp into the key.
That is why generic memoization is rarely the whole answer for database-backed or API-backed results.
Once a result depends on mutable outside state, the design question changes from “can I remember it?” to “what makes this remembered value still valid?”
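One lightweight way to encode validity is to make the version part of the key, so a data change routes callers to a fresh entry. A sketch with hypothetical names (`compute*` stands in for the real work):

```clojure
;; Sketch: a version stamp folded into the cache key. When the
;; underlying data changes, bump the version; old entries are not
;; deleted, only orphaned, so this still wants bounded storage.
(def data-version (atom 0))        ; bump on every relevant write

(defn compute* [version k]
  ;; version participates in the key; the body ignores it.
  (str "result-for-" k))

(def compute-cached (memoize compute*))

(defn compute [k]
  (compute-cached @data-version k))

(defn data-changed! []
  (swap! data-version inc))
```

This answers “what makes this remembered value still valid?” structurally: an entry is valid exactly as long as its version matches the current one.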
In a concurrent service, caching raises new questions: who computes a value when many callers miss on the same key at once, whether that work gets duplicated, and how a newly cached value becomes visible to other threads.
Clojure’s concurrency story helps with safe sharing of immutable values, but it does not remove cache coordination concerns.
One of the most common failures is a cache miss stampede, where many callers all compute the same missing value at once. Sharing immutable results is easy; coordinating the miss path is the hard part.
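A common Clojure idiom for coordinating the miss path is to store a `delay` per key in an atom: concurrent misses agree on one delay, and a delay's body runs at most once. A minimal sketch:

```clojure
;; One delay per key: concurrent misses all land on the same delay,
;; so the expensive computation runs at most once per key.
(def cache (atom {}))

(defn compute-once [k f]
  (let [d (-> (swap! cache
                     (fn [m]
                       (if (contains? m k)
                         m
                         (assoc m k (delay (f k))))))
              (get k))]
    ;; Deref OUTSIDE swap!: swap! may retry its function, but a
    ;; delay's body executes at most once, on first deref.
    @d))
```

The expensive work never runs inside `swap!` (which may retry); losers of the race simply block on the winner's delay instead of recomputing.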
Before picking a caching mechanism, decide where the cache actually belongs: scoped to a single call or request, process-local and shared across requests, or external and shared across many instances.
Those scopes solve different problems. Using process-local memoization for data that must stay coherent across many instances is often the wrong model from the beginning.
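Scope can be expressed directly in code. A sketch of request-scoped memoization, where `lookup` is a hypothetical expensive function passed into the handler:

```clojure
;; Sketch: build the memoized fn inside the handler, so the cache
;; lives exactly as long as one request and is garbage-collected
;; with it. Contrast with a top-level (def ... (memoize ...)),
;; which is process-local and lives for the life of the process.
(defn handle-request [req lookup]
  (let [lookup' (memoize lookup)]        ; request-scoped cache
    {:a  (lookup' :a)
     :b  (lookup' :b)
     :a2 (lookup' :a)}))                 ; reuses the cached :a
```

Request scope sidesteps staleness and unbounded growth entirely, at the cost of sharing nothing between requests; that trade is often right for per-request deduplication.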
Caching trades memory for CPU time. That trade is often worth it, but make it explicit: a cache that saves CPU by consuming unbounded memory is not automatically an optimization.

The main failure modes are worth naming:

- If the result depends on time, I/O, or mutable external state, generic memoization can become wrong quickly; the cache becomes a stale data delivery system.
- If the key space is unbounded, every entry is retained forever, turning a performance optimization into a memory-retention bug.
- If the same key rarely appears twice, the cache adds complexity with little benefit.
Use built-in memoize only for pure functions with compact keys and naturally bounded reuse. For real application caches, decide the right scope first, then think carefully about key design, eviction, invalidation, and miss coordination. In Clojure, immutable values make sharing cache entries safer, but they do not make caching policy trivial.