Invalidating collections, summaries, and parent views when lower-level source objects change.
Parent-child and hierarchical dependencies matter whenever a cache stores more than isolated records. A single product update may affect the product detail cache, the category page cache, the brand listing cache, the recommendation model snapshot, and a homepage summary. In those cases, invalidation is really about propagating trust changes through a dependency graph.
Teams often underestimate this part of caching. They model entity keys well, then discover that most user-visible freshness problems come from derived collections and parent views rather than from the leaf records themselves. A strong invalidation design has to decide which dependencies are explicit, which are approximate, and which are intentionally left to TTL.
flowchart TD
A["Product 42"] --> B["Category laptops page"]
A --> C["Brand overview page"]
A --> D["Search index shard"]
B --> E["Homepage featured laptops panel"]
C --> E
Caching an entity is usually easy. Caching a structure built from many entities is harder because correctness depends on several lower-level facts remaining aligned. The more aggregated the cached result becomes, the more likely it is that one source change affects more than one parent artifact.
This is where invalidation cost rises sharply. The system must either:

- track dependencies explicitly and cascade invalidations upward through the graph,
- purge coarsely, invalidating a whole namespace or page family whenever any contained object changes, or
- accept bounded staleness by leaving derived views to expire on a TTL.

There is no free option. The right answer depends on how visible and costly stale parent views are.
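The coarse-purge option does not require enumerating dependents at all. One common way to sketch it (an illustrative pattern, not from the original text; the names and helper functions here are hypothetical) is namespace versioning: every derived cache key embeds a per-namespace version counter, and bumping the counter makes all old keys unreachable at once.

```typescript
// Sketch: coarse invalidation via namespace versioning.
// Instead of tracking which parent views depend on a product,
// every derived key embeds a version counter for its namespace.
// Bumping the counter abandons all old entries in one step; they
// stop being addressable and can age out of storage later.

const versions = new Map<string, number>();
const store = new Map<string, string>();

function versionOf(ns: string): number {
  return versions.get(ns) ?? 1;
}

function derivedKey(ns: string, rest: string): string {
  return `${ns}:v${versionOf(ns)}:${rest}`;
}

function set(ns: string, rest: string, value: string): void {
  store.set(derivedKey(ns, rest), value);
}

function get(ns: string, rest: string): string | undefined {
  return store.get(derivedKey(ns, rest));
}

// Invalidate every cached view under a namespace in O(1),
// at the cost of discarding entries that were still valid.
function bumpNamespace(ns: string): void {
  versions.set(ns, versionOf(ns) + 1);
}

set("category-page", "laptops", "rendered page");
console.log(get("category-page", "laptops")); // "rendered page"
bumpNamespace("category-page");
console.log(get("category-page", "laptops")); // undefined
```

The trade-off is exactly the one described above: invalidation is cheap and complete, but everything in the namespace is thrown away, including views the change did not affect.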
A dependency-aware cache treats cached artifacts as nodes that may depend on other nodes. When a source object changes, the system decides how far the invalidation should travel upward.
dependencies:
  product:42:
    affects:
      - category-page:laptops
      - brand-page:acme
      - search-query:laptop+ssd
  category-page:laptops:
    affects:
      - homepage-panel:featured-laptops
This model does not need to be perfectly graph-theoretic to be useful. Even a partial dependency registry can be enough to protect the most important parent views.
The following example shows an invalidation worker that cascades from a leaf object to several parent views.
type DependencyStore = {
  parentsOf(nodeId: string): Promise<string[]>;
};

type Cache = {
  del(key: string): Promise<void>;
};

async function invalidateCascade(
  changedNodeId: string,
  dependencies: DependencyStore,
  cache: Cache
) {
  const visited = new Set<string>();
  const queue = [changedNodeId];

  while (queue.length > 0) {
    const node = queue.shift()!;
    if (visited.has(node)) continue;
    visited.add(node);

    await cache.del(node);

    const parents = await dependencies.parentsOf(node);
    for (const parent of parents) queue.push(parent);
  }
}
What to notice:

- The visited set guards against cycles in the dependency registry, so the worker terminates even on a malformed graph.
- The traversal is breadth-first: the changed leaf is purged first, then its direct parents, then higher-level views such as the homepage panel.
- The worker only deletes; it never rebuilds. Repopulation is left to the next read, which keeps the invalidation path itself cheap and fast.
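To make the cascade concrete, here is a usage sketch that wires the worker to an in-memory dependency store mirroring the registry example above. The Map-backed store and the recording cache stub are illustrative stand-ins, and invalidateCascade is repeated verbatim so the sketch is self-contained.

```typescript
type DependencyStore = { parentsOf(nodeId: string): Promise<string[]> };
type Cache = { del(key: string): Promise<void> };

// Repeated from the example above for self-containment.
async function invalidateCascade(
  changedNodeId: string,
  dependencies: DependencyStore,
  cache: Cache
) {
  const visited = new Set<string>();
  const queue = [changedNodeId];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (visited.has(node)) continue;
    visited.add(node);
    await cache.del(node);
    for (const parent of await dependencies.parentsOf(node)) queue.push(parent);
  }
}

// Registry mirroring the YAML example: child -> parents.
const edges = new Map<string, string[]>([
  ["product:42", ["category-page:laptops", "brand-page:acme", "search-query:laptop+ssd"]],
  ["category-page:laptops", ["homepage-panel:featured-laptops"]],
]);

const dependencies: DependencyStore = {
  parentsOf: async (nodeId) => edges.get(nodeId) ?? [],
};

// A cache stub that records deletions so the cascade order is visible.
const deleted: string[] = [];
const cache: Cache = {
  del: async (key) => { deleted.push(key); },
};

await invalidateCascade("product:42", dependencies, cache);
console.log(deleted);
// -> ["product:42", "category-page:laptops", "brand-page:acme",
//     "search-query:laptop+ssd", "homepage-panel:featured-laptops"]
```

The order makes the breadth-first behavior visible: the leaf first, then its three direct parents, then the homepage panel reached through the category page.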
Explicit dependency tracking is powerful, but it can become a system in its own right.
A good system does not try to model every dependency with equal detail. It models the dependencies whose staleness would actually matter to users or operators.
When should a team model parent-child dependencies explicitly instead of accepting coarser invalidation?
The stronger answer is that explicit dependency modeling is worth the cost when stale parent views are materially visible or risky and when the affected parent set can be discovered with reasonable confidence. If parent relationships are too dynamic or broad, a coarser purge or a shorter TTL is often the better engineering choice.
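When the parent set is too dynamic or broad to enumerate, the shorter-TTL choice can be very simple. The following is a minimal sketch (the class and names are illustrative, with an injected clock so the expiry is easy to demonstrate), showing how a TTL bounds staleness rather than eliminating it.

```typescript
// Sketch: TTL-based freshness for parent views whose dependencies
// are too dynamic to track explicitly. Staleness is bounded by the
// TTL instead of being eliminated by cascading invalidation.

type Entry = { value: string; expiresAt: number };

class TtlCache {
  private entries = new Map<string, Entry>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  set(key: string, value: string): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  // A read past the expiry behaves like a miss, forcing a rebuild.
  get(key: string): string | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (this.now() >= e.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    return e.value;
  }
}

// Injected clock: advance time manually to show the expiry.
let t = 0;
const cache = new TtlCache(30_000, () => t);
cache.set("homepage-panel:featured-laptops", "rendered panel");
t = 10_000; // 10s later: still fresh
console.log(cache.get("homepage-panel:featured-laptops")); // "rendered panel"
t = 31_000; // past the 30s TTL: treated as a miss
console.log(cache.get("homepage-panel:featured-laptops")); // undefined
```

The engineering question then reduces to choosing a TTL: the worst-case staleness window users will see against the rebuild load the system can absorb.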