Learn how Clojure services should own data, publish changes, build projections, and handle consistency without sliding back into a distributed monolith.
Data ownership: The rule that one service is authoritative for a fact even if other services cache, project, or consume it.
Data management is where many microservice architectures either become coherent or quietly collapse back into a distributed monolith. The key issue is not whether each service has a database. The key issue is whether each service truly owns its data and exposes changes through explicit contracts rather than through shared tables or hidden assumptions.
Teams often jump straight to words like “eventual consistency” or “CQRS” before deciding who actually owns what. That order is backward.
First answer: which service is authoritative for each fact, which service is allowed to write it, and through what contract everyone else consumes it.
Only then can you choose a synchronization strategy that makes sense.
When several services write the same tables directly, the architecture may look distributed while the real dependency graph still lives in the database. That creates familiar problems: schema changes require coordinating every writer, invariants have no single enforcer, and services that should deploy independently end up migrating in lockstep.
Database-per-service is not a purity test. It is a way to keep ownership and change boundaries explicit.
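One way to make that ownership boundary explicit in Clojure is to publish changes through a validated event contract rather than a shared table. The sketch below uses clojure.spec; the spec names and the `valid-order-created?` helper are illustrative, not a fixed API.

```clojure
(require '[clojure.spec.alpha :as s])

;; The owning service is the only writer; everyone else consumes changes
;; through an explicit, validated event contract (names here are illustrative).
(s/def :order/id uuid?)
(s/def :customer/id uuid?)
(s/def :order/state #{:placed :paid :shipped :cancelled})
(s/def :event/order-created
  (s/keys :req [:order/id :customer/id :order/state]))

(defn valid-order-created? [event]
  (s/valid? :event/order-created event))
```

Validating events at the boundary turns "hidden assumptions" into a checkable contract: a consumer that depends on a field the owner never promised fails loudly at the seam, not silently in its own database.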
Many services need to read information that another service owns. That does not always mean they should synchronously query the owning service for every request.
Sometimes the stronger design is: subscribe to the owner's change events and maintain a local, read-optimized projection inside the consuming service.
That trades immediate consistency for reduced runtime coupling and often better latency.
```clojure
(defn handle-order-created! [{:keys [projection-store]} event]
  (upsert-order-summary! projection-store
    {:order-id    (:order/id event)
     :customer-id (:customer/id event)
     :state       (:order/state event)}))
```
The consumer is not re-owning the order. It is maintaining a derived local view for its own use.
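To make the handler above concrete, here is a minimal in-memory sketch of the `upsert-order-summary!` helper it assumes, with an atom standing in for the consumer's own projection table (the storage choice is hypothetical; a real service would back this with its database):

```clojure
;; Minimal in-memory projection store: a map keyed by order id.
;; In production this would be the consuming service's own table.
(defn upsert-order-summary! [projection-store summary]
  (swap! projection-store assoc (:order-id summary) summary))

;; Restated from above so this sketch is self-contained.
(defn handle-order-created! [{:keys [projection-store]} event]
  (upsert-order-summary! projection-store
    {:order-id    (:order/id event)
     :customer-id (:customer/id event)
     :state       (:order/state event)}))

(def store (atom {}))
(handle-order-created! {:projection-store store}
                       {:order/id 42 :customer/id 7 :order/state :placed})
```

Note that the projection keeps only the fields this consumer needs; it is a derived view, not a copy of the owner's full record.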
Distributed services make it much harder to rely on a single ACID transaction across all affected systems. That means many cross-service changes must be modeled as workflows instead of pretending one global commit exists.
Useful tools include: sagas built from explicit steps with compensating actions, idempotent event handlers keyed by event id, and transactional outbox publication so state changes and their events commit together.
The goal is not to eliminate inconsistency windows. The goal is to make them explicit and survivable.
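A compensation-based workflow can be sketched as a saga runner: execute steps in order, and if one fails, undo the steps that already completed. This is a hypothetical minimal shape, not a production saga engine (real ones also persist progress so a crash mid-saga can resume):

```clojure
;; Hypothetical saga runner: execute each step's :action in order; on
;; failure, run the :compensate of already-completed steps in reverse.
(defn run-saga [steps]
  (loop [remaining steps
         done      []]
    (if-let [step (first remaining)]
      (if (try ((:action step)) true (catch Exception _ false))
        (recur (rest remaining) (conj done step))
        (do (doseq [s (reverse done)]
              ((:compensate s)))
            :compensated))
      :completed)))
```

A typical instance: reserve inventory, then charge payment; a failed charge triggers releasing the reservation. Each action and compensation should also be idempotent, so retries after a partial failure stay safe.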
One of the most dangerous designs is: commit the state change to the local database, then publish the corresponding event as a separate, non-transactional step.
That creates silent divergence when the database commit succeeds but publication fails. Patterns such as an outbox or log-based publication path exist to close that gap.
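The outbox idea can be sketched in memory, with a single atom standing in for the service's database and one `swap!` playing the role of one ACID transaction (the shape here is illustrative; in practice the outbox is a table written in the same database transaction as the business row):

```clojure
;; In-memory stand-in for one database: an orders "table" plus an outbox.
;; A single swap! models a single transaction, so the order row and its
;; event row commit together or not at all.
(def db (atom {:orders {} :outbox []}))

(defn create-order! [order]
  (swap! db (fn [state]
              (-> state
                  (assoc-in [:orders (:order/id order)] order)
                  (update :outbox conj {:type :order/created :order order})))))

;; A relay process drains the outbox and publishes. If publish! fails,
;; the events remain committed and can be retried; the committed state
;; and the published stream can never silently diverge.
(defn drain-outbox! [publish!]
  (let [[old _] (swap-vals! db assoc :outbox [])]
    (doseq [event (:outbox old)]
      (publish! event))))
```

The key property is that publication becomes an at-least-once delivery problem for the relay, which consumers handle with idempotent handlers, instead of a silent-loss problem for the writer.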
Eventual consistency is not a license to ignore freshness requirements. Some workflows can tolerate lag. Others cannot.
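One way to make a freshness requirement explicit is to record when a projection was last updated and let strict read paths refuse data older than their tolerated lag. The function below is a hypothetical sketch of that guard:

```clojure
;; Hypothetical freshness guard: a projection carries the timestamp of
;; its last applied event, and callers state the lag they can tolerate.
(defn fresh-enough? [projection max-lag-ms now-ms]
  (<= (- now-ms (:updated-at-ms projection)) max-lag-ms))
```

A dashboard might pass a `max-lag-ms` of minutes, while a checkout flow that cannot tolerate lag would bypass the projection and query the owner directly.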
The moment several services can update the same fact independently, conflict and ambiguity follow quickly.
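Even inside the single owning service, concurrent writers can be kept honest with an optimistic version check: each write names the version it read, and a mismatch surfaces the conflict instead of silently overwriting it. A minimal sketch (the record shape and helper name are illustrative):

```clojure
;; Hypothetical optimistic-concurrency guard: apply f only if the caller
;; read the current version; otherwise signal a conflict explicitly.
(defn checked-update [record expected-version f]
  (if (= expected-version (:version record))
    (-> record f (update :version inc))
    (throw (ex-info "version conflict"
                    {:expected expected-version
                     :actual   (:version record)}))))
```

This keeps "last writer wins" from becoming the accidental default; the losing writer is told its read was stale and can re-read and retry.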
Strong coordination across many services is expensive. Use it only where the business truly requires that level of coupling.
Name the owner of each critical fact. Prefer local projections for read-heavy consumer needs. Use event publication patterns that can survive failure. Design workflows with compensations and idempotency where strict cross-service transactions would be too expensive or fragile.