Defer expensive resource creation in Clojure with delay and force without hiding lifecycle or ownership.
Lazy initialization means delaying the creation of a value until it is actually needed. In Clojure, the standard tool for that is delay, with force or deref used to realize the value.
This is a genuinely useful creational pattern in Clojure because many applications have expensive resources, such as rule sets loaded from disk or remote search clients, that should not be created eagerly.
The pattern is simple, but the design question is not just “Can I delay this?” It is also “Will the ownership and lifecycle stay clear once I do?”
(defn load-rules []
  (println "Loading rules...")
  {:version 3 :rules [:a :b :c]})

(def rules
  (delay (load-rules)))

;; Not loaded yet

@rules
;; Loads once, then reuses the realized value
delay evaluates its body at most once. After realization, dereferencing it returns the cached value.
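That at-most-once guarantee holds even under concurrent dereferencing: delay synchronizes realization, so only one thread runs the body. A minimal sketch, where the `realizations` counter is purely illustrative:

```clojure
;; Illustrative counter to observe how many times the body runs.
(def realizations (atom 0))

(def expensive
  (delay
    (swap! realizations inc)  ; side effect happens at most once
    :the-value))

;; Deref from several threads at once; delay serializes realization,
;; so the body still runs exactly once.
(let [futures (doall (repeatedly 8 #(future @expensive)))]
  (run! deref futures))

@realizations ;; => 1
@expensive    ;; => :the-value
```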
That makes it a good fit when:

- the value is expensive to create,
- some runs of the program may never need it, and
- a single cached realization is all you want.
delay Is Better Than Ad-Hoc Checks

People sometimes recreate lazy initialization with an atom and a nil? check. That usually makes the code more error-prone than it needs to be. delay already expresses the intent.
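A sketch of the contrast, where the atom-based version is an illustrative anti-pattern rather than code from any real library:

```clojure
;; Hand-rolled lazy init with an atom and a nil? check (anti-pattern):
;; racy without extra locking, and the intent is implicit.
(def client-state (atom nil))

(defn get-client-adhoc []
  (when (nil? @client-state)                     ; two threads can both see nil
    (reset! client-state {:status :connected}))  ; ...and both "initialize"
  @client-state)

;; The same intent with delay: at-most-once realization, built in.
(def client
  (delay {:status :connected}))

(defn get-client []
  @client)
```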
That is clearer than hand-rolled state coordination.
(defn create-search-client []
  {:endpoint "https://search.example.com"
   :status :connected})

(defonce search-client
  (delay (create-search-client)))

(defn run-search [query]
  (let [client @search-client]
    {:query query
     :client-status (:status client)}))
The search client is created only when the first search runs. The code also makes the shared nature explicit: this is one delayed resource owned by the namespace, not a hidden background side effect.
force vs Deref

In practice, deref with @ is the most common way to realize a delay:
@search-client
force is useful when you want the intent to read explicitly as “realize this delayed value”:
(force search-client)
Both are fine. The real design issue is not syntax but scope. Readers should be able to tell whether the delay is local, shared, or part of a larger system assembly.
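The scope distinction can be sketched directly, with hypothetical names. A namespace-level delay is a shared resource; a delay bound inside a let is scoped to one computation and realized at most once within it:

```clojure
;; Shared: one delayed resource owned by the namespace.
(defonce shared-cache
  (delay {:entries {}}))

;; Local: a delay scoped to a single call. The lookup set is built
;; at most once per call, and only if the predicate actually runs.
(defn process [items]
  (let [lookup (delay (set items))]
    (filterv #(contains? @lookup %) items)))

(process [1 2 3]) ;; => [1 2 3]
```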
Use it when:

- the resource is expensive to create,
- some runs of the program never need it, and
- deferred failure on first use is acceptable.
Do not use it automatically. If a resource should fail fast at startup rather than later on first use, eager initialization may be the better choice.
The first mistake is delaying something that should fail immediately. If your application cannot function without a resource, lazy failure may make operational diagnosis worse.
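One way to keep the lazy structure while still failing fast is to realize required delays explicitly at startup. A sketch with hypothetical names (`connect-db`, `start!`):

```clojure
;; Stand-in for real connection setup that could throw on bad config.
(defn connect-db []
  {:pool :ready})

(def db-conn
  (delay (connect-db)))

;; Realizing required delays at startup surfaces failures immediately,
;; instead of at some later first use.
(defn start! []
  (force db-conn)
  :started)
```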
The second mistake is hiding the delay deep inside business logic. Lazy initialization is best kept near a boundary namespace or system assembly point.
The third mistake is confusing delay with per-call laziness. delay is for one cached realization, not for recomputing a fresh value every time.
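The distinction can be shown in a few lines: a delay hands every caller the same cached value, while a plain function recomputes on each call. Names here are illustrative:

```clojure
;; delay: the body runs once; every deref sees the same cached value.
(def started-at
  (delay (System/currentTimeMillis)))

;; A plain function recomputes on every call -- reach for this (or
;; memoize, for per-argument caching) when callers need fresh values.
(defn current-millis []
  (System/currentTimeMillis))

(= @started-at @started-at) ;; => true, always the same realization
```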
delay often replaces the need for a traditional singleton implementation. If the real problem is “initialize once, when first needed,” delay is usually the cleanest answer. It says exactly what the runtime behavior is without recreating class-based singleton machinery.
Before using delay, ask:

- Should this resource fail fast at startup rather than on first use?
- Is ownership of the realized value clear: local, shared, or part of system assembly?
- Do callers need one cached realization, or a fresh value per call?
Those questions keep lazy initialization useful instead of surprising.