A practical lesson on event sourcing as a system-of-record pattern, including replay, aggregate reconstruction, permanence, and the operational cost of event history.
Event sourcing stores domain changes as an ordered sequence of events rather than storing only the latest state snapshot. The current state of an entity or aggregate is reconstructed by replaying those events in order. That gives the system a durable history of how state changed, not just what the current value happens to be now.
This is useful in domains where history is part of the business truth. Financial ledgers, inventory movement, workflow progression, entitlement changes, and other time-sensitive domains often care deeply about how something became true. Event sourcing preserves that path naturally. The cost is that the event history becomes part of the permanent contract surface and operating model. The event log is no longer just integration output. It becomes the source of truth.
```mermaid
flowchart LR
C["Command"] --> V["Validate business rule"]
V --> E["Append domain event"]
E --> R["Rehydrate aggregate by replay"]
E --> P["Projection builders"]
R --> S["Current write-side state"]
```
What to notice:

- In a CRUD model, the system usually stores the latest state directly, and audit logs, if kept at all, are a secondary concern.
- In event sourcing, that relationship flips: the stream of domain events is the record of truth.
- Current state is an interpretation of that history.
That has several consequences:

- Event schemas become long-lived contracts; old events must remain interpretable indefinitely.
- Corrections are appended as new events rather than made by rewriting history.
- Read models and reports are projections derived from the log and must be rebuildable from it.
- Replay cost grows with history, which shapes snapshotting and operational planning.
This is why event sourcing should not be adopted casually. It is not only a persistence optimization. It is a different way of defining truth.
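The write path in the diagram above can be sketched as code. This is a minimal, in-memory illustration, not a production design: the event store is a plain array standing in for a durable ordered log, and the event names mirror the account example used later in this lesson.

```typescript
// Minimal sketch of the write path: rehydrate by replay, validate the
// command against that state, then append a new event. The in-memory
// array is an illustrative stand-in for a durable, ordered log.
type DomainEvent = { eventName: string; data: Record<string, unknown> };

const eventStore: DomainEvent[] = []; // append-only log (illustrative)

function rehydrateBalance(events: DomainEvent[]): number {
  // Current state is an interpretation of history: a fold over the log.
  return events.reduce((balance, e) => {
    if (e.eventName === "funds.deposited") return balance + (e.data.amount as number);
    if (e.eventName === "funds.withdrawn") return balance - (e.data.amount as number);
    return balance;
  }, 0);
}

function handleWithdraw(amount: number): void {
  const balance = rehydrateBalance(eventStore); // replay, not a row read
  if (balance < amount) throw new Error("insufficient funds"); // business rule
  eventStore.push({ eventName: "funds.withdrawn", data: { amount } }); // append, never update
}

eventStore.push({ eventName: "funds.deposited", data: { amount: 100 } });
handleWithdraw(30);
// rehydrateBalance(eventStore) is now 70
```

Note that the command handler never mutates a "balance" column; it only reads history and appends to it.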
A common implementation model is to replay all events for one aggregate to reconstruct its current state. With small histories, that is straightforward. With long histories, teams often introduce snapshots: persisted checkpoints that let replay start from a recent point rather than from the beginning every time.
Snapshots improve performance, but they do not replace the event log. They are derived acceleration artifacts, not the primary source of truth.
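Snapshot-accelerated replay can be sketched as follows. The `Snapshot` shape and `loadAccount` helper are illustrative assumptions: a snapshot stores derived state plus the version (event count) it already covers, so replay only folds the tail of the log.

```typescript
// A sketch of snapshot-accelerated replay. The snapshot is a derived
// acceleration artifact; the event log remains the source of truth.
type AccountEvent =
  | { eventName: "funds.deposited"; data: { amount: number } }
  | { eventName: "funds.withdrawn"; data: { amount: number } };

interface Snapshot {
  balance: number; // derived state at the checkpoint
  version: number; // number of events already folded in
}

function apply(balance: number, e: AccountEvent): number {
  return e.eventName === "funds.deposited"
    ? balance + e.data.amount
    : balance - e.data.amount;
}

// Replay only the events after the snapshot's version.
function loadAccount(events: AccountEvent[], snapshot?: Snapshot): number {
  const start = snapshot?.version ?? 0;
  const base = snapshot?.balance ?? 0;
  return events.slice(start).reduce(apply, base);
}

const log: AccountEvent[] = [
  { eventName: "funds.deposited", data: { amount: 100 } },
  { eventName: "funds.withdrawn", data: { amount: 40 } },
  { eventName: "funds.deposited", data: { amount: 10 } },
];
const snap: Snapshot = { balance: 60, version: 2 }; // checkpoint after two events
// loadAccount(log, snap) replays one event and yields 70, same as a full replay
```

Because the snapshot is derived, it can always be discarded and rebuilt by folding the log from the start.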
```typescript
type AccountEvent =
  | { eventName: "account.opened"; data: { accountId: string } }
  | { eventName: "funds.deposited"; data: { amount: number } }
  | { eventName: "funds.withdrawn"; data: { amount: number } };

function applyAccountEvent(balance: number, event: AccountEvent): number {
  switch (event.eventName) {
    case "account.opened":
      return 0;
    case "funds.deposited":
      return balance + event.data.amount;
    case "funds.withdrawn":
      return balance - event.data.amount;
  }
}
```
This example is intentionally small, but it shows the core idea: current state is a fold over event history.
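The fold itself is a single `reduce` over the history. The snippet below repeats the definitions so it runs standalone; the event list is hypothetical.

```typescript
// Repeating the definitions above so this snippet is self-contained.
type AccountEvent =
  | { eventName: "account.opened"; data: { accountId: string } }
  | { eventName: "funds.deposited"; data: { amount: number } }
  | { eventName: "funds.withdrawn"; data: { amount: number } };

function applyAccountEvent(balance: number, event: AccountEvent): number {
  switch (event.eventName) {
    case "account.opened":
      return 0;
    case "funds.deposited":
      return balance + event.data.amount;
    case "funds.withdrawn":
      return balance - event.data.amount;
  }
}

// Current state = fold(applyAccountEvent, initialState, history)
const history: AccountEvent[] = [
  { eventName: "account.opened", data: { accountId: "a-1" } },
  { eventName: "funds.deposited", data: { amount: 50 } },
  { eventName: "funds.withdrawn", data: { amount: 20 } },
];
const balance = history.reduce(applyAccountEvent, 0);
// balance === 30
```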
When a team uses event sourcing, event design mistakes become harder to hide. A poor event name or ambiguous payload is not just inconvenient for one downstream consumer. It can affect:

- every future replay and aggregate rehydration,
- every projection built from the stream, now and later,
- every consumer that must interpret historical events long after the mistake was made.
That permanence is one reason event sourcing demands more modeling discipline than ordinary event publication used only for integration.
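One common mitigation for that permanence is upcasting: translating old event shapes into the current shape at read time instead of rewriting the stored log. The v1/v2 shapes below are illustrative assumptions, not a standard schema.

```typescript
// A sketch of upcasting: old events stay on disk forever; we
// reinterpret them on read, never mutate them. Names are hypothetical.
type DepositV1 = { eventName: "deposit"; data: { amt: number } }; // ambiguous early design
type DepositV2 = { eventName: "funds.deposited"; data: { amount: number } };

function upcast(e: DepositV1 | DepositV2): DepositV2 {
  if (e.eventName === "deposit") {
    // Translate the legacy shape into the current contract.
    return { eventName: "funds.deposited", data: { amount: e.data.amt } };
  }
  return e;
}

const stored: (DepositV1 | DepositV2)[] = [
  { eventName: "deposit", data: { amt: 25 } }, // written years ago
  { eventName: "funds.deposited", data: { amount: 75 } },
];
const total = stored.map(upcast).reduce((sum, e) => sum + e.data.amount, 0);
// total === 100
```

Upcasting keeps replay code working against one current shape, but it is another layer that must be maintained for as long as the old events exist, which is exactly the discipline cost described above.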
Another important shift is that business correction usually happens by appending new events, not by deleting or rewriting the old ones. If an order was charged incorrectly, the system may append a refund or adjustment event. That preserves historical truth while recording correction explicitly.
This is valuable for auditability and time-based reasoning, but it also means teams need correction models that are workable and appropriate for the domain. If the organization really wants frequent silent mutation of past state, event sourcing may be a poor fit.
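The append-only correction model can be sketched with a compensating event. The `charge.refunded` event name is an illustrative assumption:

```typescript
// Correction by appending: the mistake is never rewritten; the
// correction is recorded as its own event.
type OrderEvent =
  | { eventName: "order.charged"; data: { amount: number } }
  | { eventName: "charge.refunded"; data: { amount: number; reason: string } };

function chargedTotal(events: OrderEvent[]): number {
  return events.reduce(
    (total, e) =>
      e.eventName === "order.charged"
        ? total + e.data.amount
        : total - e.data.amount,
    0
  );
}

const orderLog: OrderEvent[] = [
  { eventName: "order.charged", data: { amount: 120 } }, // wrong amount
  { eventName: "charge.refunded", data: { amount: 20, reason: "mispriced item" } },
];
// chargedTotal(orderLog) === 100, and the history of the mistake survives
```

The net state is correct, and an auditor can still see both the original error and the explicit correction.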
A CRUD-heavy internal admin system with modest audit requirements wants to adopt event sourcing because “it gives us flexibility later.” What should you ask first?
Ask what concrete domain need requires the event log itself to become the system of record. If the answer does not involve meaningful history, replay, temporal reasoning, or durable domain-event truth, the team is probably buying significant complexity without proportionate business value.