Stateful Assumptions in Stateless Compute

Show the dangers of treating local memory, local disk, or execution-environment reuse as durable state. This anti-pattern is subtle and common.

Stateful assumptions in stateless compute are one of the most subtle and common serverless failure modes. A function may appear to keep data in memory, reuse a temporary file, or rely on a warm runtime instance across several invocations. Sometimes that works in light testing, which makes the anti-pattern even more dangerous. The problem is that none of those behaviors are a safe source of business truth.

Serverless runtimes may reuse execution environments, but they are not a durability guarantee. Memory can disappear, instances can scale out, local disk can be isolated to one runtime, and retries may land on a completely different worker. Treating those optimizations as durable state leads to data loss, duplicate processing, and unpredictable behavior under load.

    flowchart LR
        A["Invocation 1"] --> B["Runtime instance A"]
        C["Invocation 2"] --> D["Runtime instance B"]
        B --> E["Local memory or local disk"]
        D --> F["No access to prior local state"]

What to notice:

  • runtime reuse is possible but not guaranteed
  • later invocations may land on a different instance entirely
  • local state can help optimization, but it should not hold correctness-critical truth

Where This Anti-Pattern Shows Up

Common examples include:

  • using a global map as a durable deduplication store
  • writing workflow checkpoints to local disk
  • assuming a previous invocation’s temporary file will exist later
  • relying on warm in-memory caches for correctness decisions

These often appear because the optimization works during development or at low concurrency.

 // Anti-pattern: this Set lives only in one runtime instance.
 const processedOrders = new Set<string>();

 export async function handler(event: { orderId: string }) {
   // This check only sees orders processed by *this* instance;
   // a retry routed to a different instance passes straight through.
   if (processedOrders.has(event.orderId)) {
     return { skipped: true };
   }

   // billingService is an external billing client; the charge can
   // repeat when the dedup state is lost with the instance.
   await billingService.charge(event.orderId);
   processedOrders.add(event.orderId);
   return { skipped: false };
 }

What this demonstrates:

  • the deduplication rule exists only in one runtime instance
  • retries on a new instance can still reprocess the same order
  • correctness depends on reuse behavior the platform never promised
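The fix is to move the deduplication decision into a durable contract. The sketch below is illustrative: the `IdempotencyStore` interface, `handleOrder`, and `chargeOrder` are hypothetical names, and the in-memory implementation stands in for what would really be a conditional write against a durable store (a database unique constraint or equivalent).

```typescript
// The correctness decision lives behind a store interface, not in the runtime.
interface IdempotencyStore {
  // Returns true only for the first caller to claim this key.
  claim(key: string): Promise<boolean>;
}

// In-memory stand-in so the sketch runs; it does NOT provide durability.
// A real implementation must survive instance loss (e.g. a conditional
// write in a database).
class InMemoryStore implements IdempotencyStore {
  private claimed = new Set<string>();
  async claim(key: string): Promise<boolean> {
    if (this.claimed.has(key)) return false;
    this.claimed.add(key);
    return true;
  }
}

// Hypothetical stand-in for the real billing side effect.
async function chargeOrder(orderId: string): Promise<void> {
  /* no-op in this sketch */
}

export async function handleOrder(
  store: IdempotencyStore,
  event: { orderId: string },
) {
  // The claim happens outside the runtime instance, so a retry on a
  // different instance still sees that the order was already processed.
  if (!(await store.claim(event.orderId))) {
    return { skipped: true };
  }
  await chargeOrder(event.orderId);
  return { skipped: false };
}
```

Because the store is injected, tests can use the in-memory stand-in while production wires in a durable backend; the handler logic does not change.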

Optimization vs Source of Truth

There is nothing wrong with warm caches or temporary local files when used as optimization aids. The problem is using them as the only source of truth. A healthy rule is:

  • local memory may speed things up
  • durable state must live in durable storage

That usually means:

  • checkpoints in a state store
  • idempotency keys in a durable store
  • workflow progress in a database or workflow engine
  • reusable artifacts in object storage
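The split between optimization and truth can be sketched as a read-through pattern. All names here are illustrative: `durableRead` and `durableWrite` stand in for calls to a real state store, and a second `Map` fakes the durable side so the sketch is self-contained.

```typescript
const cache = new Map<string, string>();   // per-instance, best-effort
const durable = new Map<string, string>(); // stand-in for a database / object store

async function durableRead(key: string): Promise<string | undefined> {
  return durable.get(key);
}

async function durableWrite(key: string, value: string): Promise<void> {
  durable.set(key, value);
}

// Read-through: the warm cache only short-circuits work. A cold instance
// with an empty cache still gets the correct answer from durable storage.
async function getCheckpoint(key: string): Promise<string | undefined> {
  const warm = cache.get(key);
  if (warm !== undefined) return warm;
  const value = await durableRead(key);
  if (value !== undefined) cache.set(key, value);
  return value;
}

async function saveCheckpoint(key: string, value: string): Promise<void> {
  await durableWrite(key, value); // durable write first: this is the truth
  cache.set(key, value);          // cache update is an optimization only
}
```

Losing the cache (a cold start, a scale-out event) costs only a durable read; it never costs correctness, which is exactly the property the in-memory deduplication example lacked.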

Common Mistakes

  • confusing warm-runtime reuse with durability
  • storing workflow progress only on local disk
  • letting in-memory caches decide correctness-critical behavior
  • testing only under one runtime instance and concluding the design is safe

Design Review Question

A queue consumer keeps a local in-memory set of processed event IDs to avoid duplicates. It works well in low-volume tests but creates duplicate invoices during retries and scale-out. What is the real design flaw?

The stronger answer is that deduplication state was implemented as a runtime optimization instead of a durable contract. The fix is a durable idempotency or deduplication store, not “hope for warmer instances.”

Revised on Thursday, April 23, 2026