Learn what purity really means in Clojure, how side effects change reasoning and testing, and how to separate calculation from effect without pretending effects disappear.
Pure function: A function that returns the same result for the same inputs and does not cause observable side effects while doing so.
This idea matters because it changes how code behaves under testing, refactoring, and concurrency. If a function is pure, you can reason about it locally. If it mixes calculation with effects, you also need to reason about timing, environment, and surrounding state.
Clojure does not forbid effects. It encourages you to place them deliberately instead of letting them leak through every layer of the program.
Purity is not mainly about philosophy. It is about control over reasoning.
If a function is pure:
(defn monthly-total [items]
  (reduce + (map :amount items)))

(monthly-total [{:amount 10} {:amount 25} {:amount 5}])
;; => 40
That function is easy to reason about because all of its relevant behavior is contained in its arguments and return value.
An effect is anything that reaches beyond pure value calculation.
Common examples:

- writing to a database or file
- logging or printing
- making an HTTP call
- reading the clock or generating random values
- mutating shared state such as an atom or ref
Some of these are unavoidable in real systems. The goal is not to eliminate them completely. The goal is to keep them at boundaries where the rest of the code stays easy to understand.
One of the best Clojure habits is to keep the core decision logic pure and move effects to the edge.
(require '[clojure.string :as str])

(defn normalize-order [order]
  (-> order
      (update :customer-name str/trim)
      (update :items vec)
      (assoc :status :validated)))

(defn save-order! [db order]
  ;; pretend database write
  {:db db :saved order})
normalize-order is pure. save-order! is effectful. That split matters because the business logic can now be tested and reused without pretending the database exists in every unit test.
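One way that split pays off is a thin orchestration function that is the only place the effect happens. The sketch below repeats the two definitions so it runs on its own; `process-order!` is a hypothetical name, not from the original.

```clojure
(require '[clojure.string :as str])

(defn normalize-order [order]
  (-> order
      (update :customer-name str/trim)
      (update :items vec)
      (assoc :status :validated)))

(defn save-order! [db order]
  ;; pretend database write
  {:db db :saved order})

;; Hypothetical orchestration layer: the only line that touches the effect.
(defn process-order! [db order]
  (save-order! db (normalize-order order)))

(process-order! :fake-db {:customer-name " Ada " :items [:book] :status :new})
;; => {:db :fake-db, :saved {:customer-name "Ada", :items [:book], :status :validated}}
```

All of the interesting logic lives in `normalize-order`, so tests can target it directly and `process-order!` stays too simple to hide bugs.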
The visual below shows the shape of that boundary.
flowchart LR
A["Request / event / input"] --> B["Pure calculation layer"]
B --> C["Validated / transformed value"]
C --> D["Effect boundary: DB, log, HTTP, file, queue"]
Purity is closely tied to referential transparency, which means an expression can be replaced by its value without changing program behavior.
(+ (* 2 3) 4)
If (* 2 3) is pure, you can replace it with 6 and the program still behaves the same.
That seems simple, but it becomes very powerful in larger programs. It means expressions are dependable pieces of reasoning, not hidden triggers for unrelated state changes.
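The substitution claim is easy to check at the REPL:

```clojure
;; Replacing the pure subexpression (* 2 3) with its value 6
;; does not change the result.
(= (+ (* 2 3) 4)
   (+ 6 4))
;; => true
```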
Consider two functions that both seem harmless:
(defn discount-price [price percent]
  (- price (* price percent)))

(defn discount-price-with-log [price percent]
  (println "discounting" price "by" percent)
  (- price (* price percent)))
The second function still returns the same numeric answer, but it is no longer pure because calling it changes the outside world by writing to stdout. That matters in tests, composition, and reuse.
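You can observe the difference at the REPL with `clojure.core/with-out-str`, which captures whatever a call writes to `*out*`. The definitions are repeated so the snippet runs on its own.

```clojure
(defn discount-price [price percent]
  (- price (* price percent)))

(defn discount-price-with-log [price percent]
  (println "discounting" price "by" percent)
  (- price (* price percent)))

;; The pure version writes nothing; the logging version does.
(with-out-str (discount-price 100 0.1))          ;; => ""
(with-out-str (discount-price-with-log 100 0.1)) ;; => "discounting 100 by 0.1\n"
```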
It is easy to overcorrect and act as if every effect is bad. That is not how real software works. Programs must communicate with the world.
The stronger design question is:
Where should the effect live, and how much pure logic can I keep around it?
For example:

- compute a report's rows purely, then write the file in one place at the end
- validate and transform a request purely, then let a thin handler send the HTTP response
- decide what to log as data, then emit it at the boundary
This usually leads to better boundaries than trying to make everything look abstractly functional.
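A minimal sketch of that shape, using hypothetical names `format-report` and `write-report!`: all formatting decisions are pure, and the file write is one thin function at the edge.

```clojure
(require '[clojure.string :as str])

;; Pure: turns data into the exact string that will be written.
(defn format-report [rows]
  (->> rows
       (map (fn [{:keys [name total]}] (str name "," total)))
       (str/join "\n")))

;; Effectful: the only place a file is touched.
(defn write-report! [path rows]
  (spit path (format-report rows)))

(format-report [{:name "a" :total 10} {:name "b" :total 5}])
;; => "a,10\nb,5"
```

Every formatting decision can now be tested with plain equality checks, with no temp files involved.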
A common source of accidental impurity is hidden dependence on time or random values.
(defn expires-at []
  (.plus (java.time.Instant/now) 7 java.time.temporal.ChronoUnit/DAYS))
This depends on the clock, so it is not pure. A cleaner design passes the current time in:
(defn expires-at [now]
  (.plusSeconds now (* 7 24 60 60)))
Now the calculation can be tested deterministically, and the effect of reading the clock happens at a boundary.
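Concretely, the boundary can be a thin wrapper, and the pure core gets deterministic tests. The wrapper name `expires-at-now` is a hypothetical choice for this sketch.

```clojure
(defn expires-at [now]
  (.plusSeconds now (* 7 24 60 60)))

;; Thin effectful wrapper: the clock read lives here, not in the calculation.
(defn expires-at-now []
  (expires-at (java.time.Instant/now)))

;; Deterministic check of the pure core: 7 days after Jan 1 is Jan 8.
(= (java.time.Instant/parse "2024-01-08T00:00:00Z")
   (expires-at (java.time.Instant/parse "2024-01-01T00:00:00Z")))
;; => true
```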
Exceptions are not automatically forbidden, but using them as routine control flow makes reasoning harder because the function no longer behaves like a simple value transformation.
In many Clojure systems, expected failures are easier to compose when represented as data:
(defn parse-port [s]
  (let [n (parse-long s)]
    (cond
      (nil? n)       {:ok? false :error :not-a-number}
      (<= 1 n 65535) {:ok? true :value n}
      :else          {:ok? false :error :out-of-range})))
That approach keeps the function pure and lets the caller decide where and how to surface the problem.
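For instance, one caller might fall back to a default while another throws; the failure data supports both. `port-or-default` below is a hypothetical helper showing the first choice.

```clojure
(defn parse-port [s]
  (let [n (parse-long s)]
    (cond
      (nil? n)       {:ok? false :error :not-a-number}
      (<= 1 n 65535) {:ok? true :value n}
      :else          {:ok? false :error :out-of-range})))

;; The caller decides how to surface the problem.
(defn port-or-default [s default]
  (let [{:keys [ok? value]} (parse-port s)]
    (if ok? value default)))

(port-or-default "8080" 3000) ;; => 8080
(port-or-default "oops" 3000) ;; => 3000
```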
Pure functions are easier to test because the test only needs inputs and expected outputs.
(require '[clojure.test :refer [deftest is]])

(deftest monthly-total-test
  (is (= 40
         (monthly-total [{:amount 10}
                         {:amount 25}
                         {:amount 5}]))))
Once effects enter the picture, tests often need:

- setup and teardown of external state
- fakes, stubs, or in-memory substitutes for real services
- control over time, randomness, and ordering
That does not make effectful code wrong. It just means you should not merge effectful concerns into every function unless you have to.
Ask these questions when shaping a function:

- Does it need to touch the outside world at all, or can the caller pass the needed values in?
- Can the decision logic return data and let a thin outer layer perform the effect?
- If it were called twice with the same arguments, would anything outside the function notice?
Those questions often improve design faster than trying to label everything as “pure” or “impure” in the abstract.
A helper that “just formats data” but also writes logs, touches metrics, or reads the clock is much harder to reuse than it first appears.
Real systems need effects. The goal is not purity theater. The goal is a strong separation between calculation and effect.
If half the functions in a codebase read the current time or generate IDs directly, testing and debugging get harder fast. Prefer passing those values in from the edge.
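The same move that worked for the clock works for IDs. A sketch with a hypothetical `make-event` constructor: the ID and timestamp arrive as arguments, so the function stays pure.

```clojure
;; Pure: given an id and a timestamp, the result is fully determined.
(defn make-event [id now payload]
  {:id id :at now :payload payload})

;; At the boundary, the caller supplies the effectful values once:
(make-event (java.util.UUID/randomUUID) (java.time.Instant/now) {:amount 25})

;; In a test, plain values stand in for them:
(make-event 1 :t0 {:amount 25})
;; => {:id 1, :at :t0, :payload {:amount 25}}
```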
A team has a function that validates an order, computes totals, writes an audit log, reads a discount configuration file, and stores the final record in the database. The function works, but tests are slow and brittle.
What is the stronger refactor?
The stronger refactor is to split the function into a pure calculation core and a thin orchestration layer. Validation, total calculation, and discount application can usually be pure once the inputs are supplied. File I/O, logging, and persistence can remain in the outer layer.
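A sketch of that refactor, with hypothetical names and pretend effects standing in for the real config read, audit log, and database write:

```clojure
;; Pretend effects, each one thin and boring.
(defn read-discounts! [] {:gold 0.1})                 ;; pretend config file read
(defn audit! [order] nil)                             ;; pretend audit log write
(defn save-order! [db order] {:db db :saved order})   ;; pretend database write

;; Pure core: everything it needs arrives as arguments.
(defn price-order [discounts order]
  (let [subtotal (reduce + (map :amount (:items order)))
        rate     (get discounts (:customer-tier order) 0)]
    (assoc order
           :subtotal subtotal
           :total (- subtotal (* subtotal rate)))))

;; Thin orchestration layer: reads config, calls the pure core, then persists.
(defn process-order! [db order]
  (let [discounts (read-discounts!)
        priced    (price-order discounts order)]
    (audit! priced)
    (save-order! db priced)))
```

All the pricing rules now live in `price-order`, which tests can exercise with plain maps, while the slow and brittle parts shrink to three one-line functions at the edge.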