Computing expensive results ahead of time and storing them as reusable read models or materialized outputs.
Precomputation and materialization shift expensive work out of the request path entirely. Instead of waiting until a caller needs the answer, the system computes it ahead of time and stores the result as a reusable artifact. That artifact may be a materialized view, a generated file, a pre-ranked list, or a cached read model.
This pattern is often stronger than ordinary on-demand caching when the computation is expensive and predictable. The cost is that the system now owns a refresh pipeline, not just a cache entry. Freshness becomes a scheduling or eventing question rather than a simple TTL question.
```mermaid
flowchart LR
  A["Source data or events"] --> B["Background computation"]
  B --> C["Materialized output"]
  C --> D["Fast read path"]
```
This matters because some work is simply too expensive to leave on the hot path. Aggregations, ranking models, search indexing, reporting snapshots, and document generation are common examples. If the answer is needed often enough, precomputing it can turn an impossible request-time cost into a cheap read.
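The split between the background computation and the fast read path can be sketched in a few lines of Python. The store, function names, and key are hypothetical stand-ins (a real system might use Redis or a materialized view), but the shape is the point: the aggregation runs off the request path, and the read is a key lookup.

```python
import time

# Hypothetical in-memory store standing in for Redis, a table, etc.
STORE: dict[str, dict] = {}

def expensive_aggregation(orders: list[dict]) -> list[str]:
    """Rank product ids by total quantity sold -- the 'expensive' work."""
    totals: dict[str, int] = {}
    for order in orders:
        totals[order["product_id"]] = totals.get(order["product_id"], 0) + order["qty"]
    return sorted(totals, key=totals.get, reverse=True)

def refresh_bestsellers(orders: list[dict]) -> None:
    """Background job: runs on events or a schedule, never per request."""
    STORE["catalog:bestsellers:global:v2"] = {
        "computed_at": time.time(),
        "ranking": expensive_aggregation(orders),
    }

def read_bestsellers() -> list[str]:
    """Request path: a cheap key lookup, no aggregation at read time."""
    return STORE["catalog:bestsellers:global:v2"]["ranking"]
```

Callers only ever pay the cost of `read_bestsellers`; how often `refresh_bestsellers` runs is a separate operational decision.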
Precomputation is strong when:

- the computation is expensive and predictable;
- many readers share the same answer;
- the inputs change more slowly than the result is read;
- callers can tolerate bounded staleness.
It is weaker when every request is highly customized or when the data must be exact in real time.
This job definition sketches a precomputed bestseller list that is refreshed outside the request path.
```yaml
materialized_outputs:
  bestseller_list:
    sources:
      - orders
      - returns
      - product_status
    refresh_mode: event-plus-scheduled
    max_staleness_seconds: 300
    output_key: catalog:bestsellers:global:v2
```
What to notice:

- the list is built from several sources (`orders`, `returns`, `product_status`), not a single query;
- `refresh_mode: event-plus-scheduled` refreshes on source events, with a schedule as a backstop;
- `max_staleness_seconds: 300` makes the freshness contract explicit instead of implicit;
- the versioned `output_key` (`...:v2`) lets a new computation roll out alongside the old artifact.
The common mistake is to think of precomputed artifacts as “just another cache.” In practice they are closer to a mini read model. They have:

- declared source data or events;
- a refresh policy, whether scheduled, event-driven, or both;
- an explicit staleness bound;
- a named, versioned output that other systems depend on.
That means the operational model is often closer to a small data pipeline than to a simple lookup cache.
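A minimal sketch of that pipeline's refresh decision, assuming an `event-plus-scheduled` policy like the one in the job definition above (the function name and parameters are illustrative, not from any particular framework):

```python
def should_refresh(last_refresh: float, now: float,
                   max_staleness_seconds: float,
                   pending_events: int) -> bool:
    """Decide whether the worker should rebuild the artifact.

    Event-plus-scheduled policy: rebuild as soon as source events
    have arrived, and otherwise rebuild before the staleness bound
    would be exceeded.
    """
    if pending_events > 0:
        return True  # event-driven refresh
    return (now - last_refresh) >= max_staleness_seconds  # scheduled backstop
```

A worker loop would call this on each tick, which is exactly the kind of logic a lookup cache never needs and a small data pipeline always does.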
When is precomputation stronger than on-demand memoization?
The stronger answer: when the work is expensive, predictable, and shared by many readers, moving it out of the request path is worth the cost of a refresh pipeline and bounded staleness.