Event Sourcing in Clojure

Model state as domain events, build projections with reducers, and understand the versioning, replay, and snapshot trade-offs of event-sourced Clojure systems.

Event sourcing stores the history of state transitions as an append-only stream of domain events. That makes Clojure a natural fit because events, commands, and projections can all be modeled as immutable data and pure transformations. The pattern is powerful, but only when you are willing to accept its real costs: replay logic, schema evolution, projection lag, and operational discipline around ordering and idempotency.

When Event Sourcing Is Worth It

Event sourcing is strongest when the history itself matters.

Typical fits:

  • financial or ledger-style systems
  • workflows where auditability is part of the domain
  • systems that need multiple read models from the same stream
  • domains where “how we got here” matters almost as much as current state

Poor fits:

  • CRUD-heavy systems where only current state matters
  • simple applications that would gain little from replay or audit trails
  • teams not prepared to operate projections, replays, and event versioning

Events Are Domain Facts, Not Change Logs

Useful events describe something meaningful that happened in the domain, such as :account-opened or :funds-deposited. They should not look like vague database diffs.

(defn event
  [event-type aggregate-id version data]
  {:event-id     (str (random-uuid))
   :type         event-type
   :aggregate-id aggregate-id
   :version      version
   :occurred-at  (java.time.Instant/now)
   :data         data})

(defn account-opened [account-id owner-id opening-balance]
  (event :account-opened account-id 1
         {:owner-id owner-id
          :opening-balance opening-balance}))

(defn funds-deposited [account-id version amount]
  (event :funds-deposited account-id version
         {:amount amount}))

That structure gives you:

  • a stable event identity
  • aggregate ownership
  • stream ordering within the aggregate
  • payload data separated from metadata

Separate Commands from Events

Commands ask for something to happen. Events record what did happen. In a Clojure system, this often means:

  1. validate a command
  2. load current aggregate state by replaying its events, or a snapshot plus the events after it
  3. decide which new event(s) should be emitted
  4. append them with optimistic concurrency checks

(defn apply-event [state {:keys [type data]}]
  (case type
    :account-opened
    {:status   :open
     :owner-id (:owner-id data)
     :balance  (:opening-balance data)}

    :funds-deposited
    (update state :balance + (:amount data))

    :funds-withdrawn
    (update state :balance - (:amount data))

    state))

(defn rebuild-account [events]
  (reduce apply-event nil events))

That reducer is the center of the model. If it is deterministic and easy to read, replay, testing, and projection building all get easier.
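The command side of that loop can be sketched around the same reducer. This is a minimal, illustrative shape, not a library API: handle-command and decide are hypothetical names, and the insufficient-funds rule stands in for real domain validation.

```clojure
;; Sketch of steps 1-4: rebuild state, validate the command, emit events.
;; apply-event mirrors the reducer above; decide and handle-command are
;; illustrative names, not from any particular library.
(defn apply-event [state {:keys [type data]}]
  (case type
    :account-opened  {:status   :open
                      :owner-id (:owner-id data)
                      :balance  (:opening-balance data)}
    :funds-deposited (update state :balance + (:amount data))
    :funds-withdrawn (update state :balance - (:amount data))
    state))

(defn decide
  "Validate the command against current state; return the new events."
  [state {:keys [type data] :as command}]
  (case type
    :withdraw-funds
    (if (<= (:amount data) (:balance state))
      [{:type :funds-withdrawn :data {:amount (:amount data)}}]
      (throw (ex-info "insufficient funds" {:command command :state state})))))

(defn handle-command [past-events command]
  (let [state (reduce apply-event nil past-events)]
    (decide state command)))
```

Keeping decide pure (state and command in, events or an error out) is what makes the domain logic trivially testable without any store.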

Projections Are Derived Views

The event stream is not usually the fastest read model. Most systems build projections tailored to queries.

    flowchart LR
        CMD["Command"] --> DECIDE["Domain decision"]
        DECIDE --> STREAM["Append event stream"]
        STREAM --> AGG["Rebuild aggregate"]
        STREAM --> PROJ1["Balance projection"]
        STREAM --> PROJ2["Audit timeline"]
        STREAM --> PROJ3["Reporting read model"]

This design buys flexibility, but it also introduces eventual consistency. Your query model may lag behind the latest appended event. That is often acceptable, but it must be explicit.
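A projection is typically just another reducing function over the same stream, shaped for one query. A minimal balance read model might look like this; the names and sample stream are illustrative:

```clojure
;; A per-account balance projection: fold the whole stream into a map
;; keyed by aggregate id. Shape and names are illustrative.
(defn project-balances [balances {:keys [type aggregate-id data]}]
  (case type
    :account-opened  (assoc balances aggregate-id (:opening-balance data))
    :funds-deposited (update balances aggregate-id + (:amount data))
    :funds-withdrawn (update balances aggregate-id - (:amount data))
    balances))

(def sample-stream
  [{:type :account-opened  :aggregate-id "acct-1" :data {:opening-balance 100}}
   {:type :funds-deposited :aggregate-id "acct-1" :data {:amount 25}}
   {:type :account-opened  :aggregate-id "acct-2" :data {:opening-balance 50}}])

(reduce project-balances {} sample-stream)
;; => {"acct-1" 125, "acct-2" 50}
```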

Versioning, Ordering, and Idempotency

These are the operational pressure points of event sourcing.

Versioning

Event schemas change. You need a plan for older events:

  • upcasters during replay
  • version-specific handlers
  • backward-compatible payload additions where possible
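An upcaster can be a multimethod dispatched on event type and schema version, normalizing old payloads before the reducer ever sees them. This sketch assumes a hypothetical v1 :funds-deposited payload that stored the amount under :value:

```clojure
;; Hypothetical upcaster: rewrite old payload shapes to the current
;; schema during replay. The :value -> :amount rename is an assumed
;; example, not from the text above.
(defmulti upcast (juxt :type :schema-version))

(defmethod upcast [:funds-deposited 1]
  [event]
  (-> event
      (assoc-in [:data :amount] (get-in event [:data :value]))
      (update :data dissoc :value)
      (assoc :schema-version 2)))

(defmethod upcast :default [event] event)

(defn replay-ready [events]
  (map upcast events))
```

Because upcasting happens at read time, the stored stream stays untouched and the reducer only ever handles the current shape.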

Ordering

Event sourcing usually depends on a clear order within each aggregate stream. Cross-aggregate global ordering is much harder and often unnecessary.
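Per-aggregate ordering is usually enforced at append time with an expected-version check, which is also what makes the optimistic concurrency in step 4 concrete. A minimal in-memory sketch, using an atom as a stand-in for a real store (append-events! is an illustrative name):

```clojure
;; Optimistic append: reject the write if the stream has grown since the
;; caller loaded it. An atom holding {aggregate-id [events]} stands in
;; for a real event store.
(defn append-events!
  [store aggregate-id expected-version new-events]
  (swap! store
         (fn [streams]
           (let [stream  (get streams aggregate-id [])
                 current (count stream)]
             (when (not= current expected-version)
               (throw (ex-info "concurrent append"
                               {:expected expected-version :actual current})))
             (assoc streams aggregate-id (into stream new-events))))))
```

Throwing inside swap! leaves the atom unchanged, so a losing writer can reload the stream and retry its decision.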

Idempotency

Projections and downstream subscribers must often handle duplicate delivery. If replaying the same event corrupts the read model, the projection is fragile.
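One common defense is to keep the set of applied event ids next to the view, so a redelivered event becomes a no-op. The shape and names here are illustrative:

```clojure
;; Idempotent projection step: remember applied event ids and skip
;; duplicates. The :deposit-total view is an illustrative read model.
(def empty-projection {:seen #{} :view {:deposit-total 0}})

(defn apply-once [{:keys [seen] :as projection} {:keys [event-id] :as event}]
  (if (contains? seen event-id)
    projection                       ; duplicate delivery: no-op
    (-> projection
        (update :seen conj event-id)
        (update-in [:view :deposit-total] + (get-in event [:data :amount] 0)))))
```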

Snapshots Are an Optimization, Not the Source of Truth

Snapshots reduce replay cost for long streams, but they do not replace the stream.

{:aggregate-id "acct-123"
 :version 1287
 :snapshot {:status :open
            :owner-id "user-7"
            :balance 4420}}

A snapshot is only useful if you can trust that:

  • it corresponds to a known event version
  • later events replay cleanly from that point
  • rebuilding without the snapshot still yields the same state
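Replay from a snapshot is then just a reduce that starts from the saved state and skips events at or below the snapshot version. This sketch takes the reducer as an argument so it works with any aggregate; rebuild-from-snapshot is an illustrative name:

```clojure
;; Start the fold from the snapshot state and apply only the events
;; recorded after the snapshot's version.
(defn rebuild-from-snapshot [apply-event snapshot events]
  (->> events
       (filter #(> (:version %) (:version snapshot)))
       (reduce apply-event (:snapshot snapshot))))
```

The third bullet above is easy to turn into a test: rebuilding from the full stream and rebuilding from snapshot plus tail must produce equal states.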

Common Mistakes

  • using technical update events instead of domain facts
  • letting projection schemas leak back into the event stream
  • skipping version metadata
  • assuming replay and subscriber processing will always happen exactly once
  • adopting event sourcing for simple CRUD because it feels architecturally advanced

The pattern is strongest when history is a first-class business concern, not when teams want an excuse to add Kafka and call it architecture.

Revised on Thursday, April 23, 2026