Model state as domain events, build projections with reducers, and understand the versioning, replay, and snapshot trade-offs of event-sourced Clojure systems.
Event sourcing stores the history of state transitions as an append-only stream of domain events. That makes Clojure a natural fit because events, commands, and projections can all be modeled as immutable data and pure transformations. The pattern is powerful, but only when you are willing to accept its real costs: replay logic, schema evolution, projection lag, and operational discipline around ordering and idempotency.
Event sourcing is strongest when the history itself matters.
Typical fits:

- financial ledgers and accounting, where the transaction history is the system of record
- audit- and compliance-heavy domains that must explain how state came to be
- workflows where replaying the stream can derive new read models after the fact

Poor fits:

- simple CRUD applications where only the current state matters
- teams unwilling to pay for replay logic, schema evolution, and the operational discipline around ordering and idempotency
Useful events describe something meaningful that happened in the domain, such as :account-opened or :funds-deposited. They should not look like vague database diffs.
```clojure
(defn event
  [event-type aggregate-id version data]
  {:event-id (str (random-uuid))
   :type event-type
   :aggregate-id aggregate-id
   :version version
   :occurred-at (java.time.Instant/now)
   :data data})

(defn account-opened [account-id owner-id opening-balance]
  (event :account-opened account-id 1
         {:owner-id owner-id
          :opening-balance opening-balance}))

(defn funds-deposited [account-id version amount]
  (event :funds-deposited account-id version
         {:amount amount}))
```
That structure gives you:

- a unique :event-id for deduplication and idempotent handling
- a per-aggregate :version for ordering and optimistic concurrency
- an :occurred-at timestamp for auditing
- a :type plus :data payload that reducers can dispatch on
Commands ask for something to happen. Events record what did happen. In a Clojure system, this often means a pure decision function that validates a command against current state and returns new events, plus a pure apply function that folds events back into state:
```clojure
(defn apply-event [state {:keys [type data]}]
  (case type
    :account-opened
    {:status :open
     :owner-id (:owner-id data)
     :balance (:opening-balance data)}

    :funds-deposited
    (update state :balance + (:amount data))

    :funds-withdrawn
    (update state :balance - (:amount data))

    state))

(defn rebuild-account [events]
  (reduce apply-event nil events))
```
That reducer is the center of the model. If it is deterministic and easy to read, replay, testing, and projection building all get easier.
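As a quick sketch of replay, here is a short, hand-written stream folded through that reducer. The IDs and amounts are illustrative, and irrelevant keys like :event-id are omitted for brevity:

```clojure
;; Rebuild current state by replaying a small stream from scratch.
(def demo-events
  [{:type :account-opened :aggregate-id "acct-1" :version 1
    :data {:owner-id "user-7" :opening-balance 100}}
   {:type :funds-deposited :aggregate-id "acct-1" :version 2
    :data {:amount 50}}
   {:type :funds-withdrawn :aggregate-id "acct-1" :version 3
    :data {:amount 25}}])

(rebuild-account demo-events)
;; => {:status :open, :owner-id "user-7", :balance 125}
```

Because apply-event is pure, this replay is also a complete unit test harness: feed it event vectors and assert on the resulting state.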
The event stream is not usually the fastest read model. Most systems build projections tailored to queries.
```mermaid
flowchart LR
  CMD["Command"] --> DECIDE["Domain decision"]
  DECIDE --> STREAM["Append event stream"]
  STREAM --> AGG["Rebuild aggregate"]
  STREAM --> PROJ1["Balance projection"]
  STREAM --> PROJ2["Audit timeline"]
  STREAM --> PROJ3["Reporting read model"]
```
This design buys flexibility, but it also introduces eventual consistency. Your query model may lag behind the latest appended event. That is often acceptable, but it must be explicit.
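A balance projection, for example, can be just another reduction over the stream, this time keyed by aggregate. This is a minimal in-memory sketch; a real projection would persist its state and track its position in the stream:

```clojure
;; Read model: aggregate-id -> current balance.
(defn project-balance [balances {:keys [type aggregate-id data]}]
  (case type
    :account-opened  (assoc balances aggregate-id (:opening-balance data))
    :funds-deposited (update balances aggregate-id + (:amount data))
    :funds-withdrawn (update balances aggregate-id - (:amount data))
    balances))

(defn balances [events]
  (reduce project-balance {} events))
```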
These are the operational pressure points of event sourcing.
Event schemas change. You need a plan for older events:

- keep reducers tolerant of missing or extra keys
- version event shapes and upcast old ones at read time
- avoid rewriting the stream itself; it is the system of record
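One common plan is upcasting: upgrade old events at read time so reducers only ever see the current shape. A hedged sketch, assuming stored events carry a :schema-version key (not part of the event structure shown earlier) and that a hypothetical v1 :funds-deposited payload used :amount-cents:

```clojure
;; Upcast stored events on read so downstream code sees only the
;; latest shape. Dispatch on [event type, schema version].
(defmulti upcast (fn [e] [(:type e) (:schema-version e 1)]))

;; Illustrative migration: v1 carried :amount-cents, v2 carries :amount.
(defmethod upcast [:funds-deposited 1]
  [event]
  (-> event
      (assoc :schema-version 2)
      (update :data (fn [{:keys [amount-cents]}]
                      {:amount (/ amount-cents 100)}))))

(defmethod upcast :default [event] event)

;; Apply on read, before events reach any reducer or projection.
(defn read-stream [events]
  (map upcast events))
```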
Event sourcing usually depends on a clear order within each aggregate stream. Cross-aggregate global ordering is much harder and often unnecessary.
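Within a single aggregate, ordering is typically enforced at append time with an expected-version check (optimistic concurrency). A minimal in-memory sketch, assuming the stream's length doubles as its current version:

```clojure
;; In-memory store: aggregate-id -> vector of events, held in an atom.
(defn append-event!
  "Append event only if expected-version matches the current stream
   length; otherwise another writer appended first and we fail fast.
   The swap! fn is pure, so retries on contention are safe."
  [store expected-version {:keys [aggregate-id] :as event}]
  (swap! store
         (fn [streams]
           (let [stream (get streams aggregate-id [])]
             (when (not= expected-version (count stream))
               (throw (ex-info "Version conflict on append"
                               {:aggregate-id aggregate-id
                                :expected expected-version
                                :actual (count stream)})))
             (assoc streams aggregate-id (conj stream event)))))
  event)

;; Usage:
;; (def store (atom {}))
;; (append-event! store 0 (account-opened "acct-1" "user-7" 100))
```

A real event store offers the same check as a conditional append; the caller passes the version it last read, and a conflict means someone else got there first.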
Projections and downstream subscribers must often handle duplicate delivery. If replaying the same event corrupts the read model, the projection is fragile.
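A simple defense is to have the projection remember which event-ids it has already applied, so a duplicate delivery becomes a no-op. A sketch that wraps any pure projection step, such as the apply-event reducer above:

```clojure
;; Wrap a pure projection step so duplicate deliveries are no-ops.
;; step-fn is the underlying reducer, e.g. apply-event.
(defn idempotent-step [step-fn]
  (fn [{:keys [seen model] :as proj} {:keys [event-id] :as event}]
    (if (contains? seen event-id)
      proj  ; already applied: ignore the duplicate
      {:seen  (conj seen event-id)
       :model (step-fn model event)})))

;; Usage over a stream that may contain duplicates:
;; (reduce (idempotent-step apply-event) {:seen #{} :model nil} events)
```

Tracking seen ids trades memory for safety; many systems instead track only the last applied :version per aggregate, which is cheaper when delivery preserves order.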
Snapshots reduce replay cost for long streams, but they do not replace the stream.
```clojure
{:aggregate-id "acct-123"
 :version 1287
 :snapshot {:status :open
            :owner-id "user-7"
            :balance 4420}}
```
A snapshot is only useful if you can trust that:

- it was produced by the same apply-event logic you run today, or has been invalidated and rebuilt after the reducer changed
- every event after its :version is still available and replayed in order
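Rebuilding from a snapshot is then ordinary replay with a different starting point. A sketch, assuming the snapshot map shown above and the apply-event reducer:

```clojure
;; Resume the fold from snapshot state instead of nil, replaying
;; only events strictly newer than the snapshot's version.
(defn rebuild-from-snapshot [{:keys [version snapshot]} events]
  (->> events
       (filter #(> (:version %) version))
       (reduce apply-event snapshot)))
```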
The pattern is strongest when history is a first-class business concern, not when teams want an excuse to add Kafka and call it architecture.