Learn how Clojure lazy sequences defer work, where infinite sequences are useful, and how to avoid head retention, chunking surprises, and accidental over-realization.
Lazy evaluation is one of the reasons Clojure sequence code can feel both elegant and surprising. You can describe a pipeline once, pull only the values you need, and even model streams that are conceptually unbounded. But laziness changes when work happens, not whether work exists.
Lazy sequence: A sequence whose elements are computed only when a consumer demands them, rather than all at once up front.
That distinction matters because many idiomatic Clojure functions such as map, filter, remove, take, and drop produce lazy results. If you understand realization well, you can process large data efficiently. If you do not, you can end up with hidden side effects, retained memory, or pipelines that realize far more work than intended.
Lazy pipelines let you describe transformations without forcing all computation immediately.
```clojure
(->> (range)
     (filter odd?)
     (map #(* % %))
     (take 5))
;; => (1 9 25 49 81)
```
That expression does not build every odd number and then every odd square. It realizes just enough source values to satisfy take 5.
This is useful when you only need a prefix of a large or potentially unbounded source, or when computing every element up front would be wasteful. Laziness is often about avoiding unnecessary work, not just about elegance.
Clojure provides several standard building blocks:
- `range` for numeric progressions
- `iterate` for repeated application of a step function
- `repeatedly` for calling a supplier function over and over
- `cycle` for looping through a finite collection indefinitely
- `map`, `filter`, and `remove`
- `lazy-seq` when you need to define a custom lazy recursive producer

```clojure
(take 6 (range))
;; => (0 1 2 3 4 5)

(take 5 (iterate #(* 2 %) 1))
;; => (1 2 4 8 16)

(take 4 (repeatedly #(rand-int 10)))
;; => (7 1 3 8)

(take 7 (cycle [:a :b :c]))
;; => (:a :b :c :a :b :c :a)
```
When you need custom recursion, lazy-seq makes the delay explicit:
```clojure
(defn fibs
  ([] (fibs 0 1))
  ([a b]
   (lazy-seq
    (cons a (fibs b (+ a b))))))

(take 10 (fibs))
;; => (0 1 1 2 3 5 8 13 21 34)
```
Prefer built-in sequence producers when they already express the idea clearly. lazy-seq is powerful, but it is not the idiomatic starting point for every stream-like problem.
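As an example of that preference, the same Fibonacci stream can be built from `iterate` alone by stepping a `[current next]` pair, with no explicit `lazy-seq`. This is a sketch; `fibs-iterate` is a name introduced here, not from the text above:

```clojure
;; Each state is a [current next] pair; the stream of firsts is the sequence.
;; `iterate` is already lazy, so no hand-written lazy-seq is needed.
(defn fibs-iterate []
  (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))

(take 10 (fibs-iterate))
;; => (0 1 1 2 3 5 8 13 21 34)
```

Defining it as a function (rather than a `def` of the infinite sequence) also avoids holding the head in a long-lived var, a point that comes up again below.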
The main mental model is simple: a lazy pipeline is mostly a promise of future work until something realizes values.
Common realization points include:
- `first`, `next`, `rest`, `nth`
- `take`, `drop`, `take-while`, `some`
- `reduce`, `into`, `count`
- `doall` and `dorun`

Note that `take` and `drop` are themselves lazy; they do not realize values on their own, but they bound how much downstream consumers can realize.

```clojure
(def candidates
  (->> (range)
       (filter odd?)
       (map #(* % %))))

(take 3 candidates)
;; realizes only enough work for 3 values

(into [] (take 1000 candidates))
;; realizes 1000 values immediately
```
If you already know you will realize everything into a final collection, laziness may still be fine, but it is no longer the key benefit. In those cases, a transducer or direct reduction can sometimes be cheaper and clearer.
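For instance, the odd-squares pipeline above can be written as a transducer over the same steps: one eager pass, no intermediate lazy sequences, with `take` terminating the reduction. A sketch:

```clojure
;; The transducer composes the same transformation steps; `take` signals
;; early termination, so the infinite `range` source is still safe.
(def first-thousand-odd-squares
  (into []
        (comp (filter odd?)
              (map #(* % %))
              (take 1000))
        (range)))

(take 5 first-thousand-odd-squares)
;; => (1 9 25 49 81)
```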
Infinite producers are practical only when the consumer knows how to stop.
```clojure
(take 10 (filter even? (range)))
;; safe: `take` bounds the demand

(some #(when (> % 1000) %) (range))
;; safe: `some` stops at the first match

(count (range))
;; never finishes
```
The idiomatic pattern is a producer that may be unbounded paired with a consumer that is always bounded. If both sides are unbounded, the program will keep asking for more values forever.
Some lazy sequence operations realize values in chunks for efficiency. That can surprise you if you expected element-by-element timing, especially when debugging or when a mapped function has visible effects.
```clojure
(take 1 (map #(do (println "processing" %) %) (range 100)))
```
You may see more than one processing line. That is a reminder that lazy sequence pipelines are not a precise control-flow tool for side effects.
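One way to observe this is to count realizations with an atom. This is an illustrative sketch; `realized-count` is a name introduced here:

```clojure
;; `range` produces chunked seqs, so asking for one element may realize
;; a whole chunk (typically 32 elements) at once.
(def realized-count (atom 0))

(first (map (fn [x] (swap! realized-count inc) x) (range 100)))
;; => 0

@realized-count
;; likely 32, not 1, because of chunking
```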
Lazy sequences cache realized elements. If long-lived code keeps a reference to the beginning of a sequence while walking far forward, earlier realized values can remain reachable in memory.
```clojure
(def xs (map expensive-op records))

;; some other part of the system keeps `xs`
;; while the pipeline slowly walks through it
```
That is why laziness is not automatically “low memory”. If you only need one pass, a reduction or transducer pipeline may avoid retaining old realized values.
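As a sketch of the one-pass alternative, a `transduce` call walks the input once without any top-level var holding the sequence head, so earlier realized values can be collected as it goes. Here `expensive-op` is a hypothetical stand-in for the costly transformation mentioned above:

```clojure
;; Hypothetical stand-in for the expensive per-record transformation.
(defn expensive-op [x] (* x x))

(defn total [records]
  ;; transduce consumes `records` in a single eager pass; no intermediate
  ;; lazy sequence is created, and nothing retains the head.
  (transduce (map expensive-op) + 0 records))

(total (range 1000))
;; => 332833500
```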
Lazy transformations and side effects do not mix well. Realization may happen later than expected, in larger chunks than expected, or in places that make debugging confusing.
```clojure
;; Better: pure transformation first, explicit effects later
(def emails
  (->> users
       (map :email)
       (remove nil?)))

(run! send-reminder! emails)
```
run!, doseq, reduce, and other explicit boundary operations make effect timing far clearer than embedding side effects in a lazy map.
Use lazy sequences when:

- you only need a bounded prefix or an early answer from a large or unbounded source
- the pipeline is pure and you want to describe a transformation without forcing it
- a consumer such as `take`, `some`, or `take-while` naturally limits demand

Prefer another approach when:

- you will realize everything into a final collection anyway; a transducer or direct reduction is often cheaper and clearer
- the transformation has side effects whose timing matters
- long-lived references could retain the head of a partially walked sequence
```mermaid
graph TD;
  A["Producer (`range`, `map`, `filter`)"] --> B["Lazy pipeline exists"]
  B --> C{"Consumer asks for values?"}
  C -->|No| D["No work yet"]
  C -->|Yes| E["Realize just enough work"]
  E --> F{"Consumer bounded?"}
  F -->|Yes| G["Stop after needed values"]
  F -->|No| H["Keep realizing more values"]
```
The diagram above captures the real lesson: laziness changes when work happens, and careful consumers determine how much work ultimately gets done.