Serverless Patterns and Anti-Patterns

Pattern Selection Matrix

Decision matrix for choosing serverless patterns by workload, latency, and operational risk.

Use this appendix when the workload is clear but the right serverless pattern is still uncertain. The goal is not to force one answer for every problem. It is to help narrow the design choice quickly by asking what kind of work is being done, whether the caller must wait, how stateful the process is, and what failure behavior has to be tolerated.

    flowchart TD
	    A["Start with the problem"] --> B{"Does the caller need an immediate response?"}
	    B -->|Yes| C["API handler or request path function"]
	    B -->|No| D{"Is the work triggered by an event or artifact?"}
	    D -->|Event| E["Event processor or queue consumer"]
	    D -->|Artifact| F["Object-triggered processing"]
	    E --> G{"Many parallel items?"}
	    G -->|Yes| H["Fan-out / fan-in workflow"]
	    G -->|No| I{"Several coordinated steps?"}
	    I -->|Yes| J["Workflow orchestration"]
	    I -->|No| K["Single event consumer"]

What to notice:

  • the first split is usually synchronous versus asynchronous
  • orchestration becomes relevant when coordination and state matter, not just when many functions exist
  • several strong designs combine patterns rather than choosing only one

How to Use the Matrix

Read the problem statement first, not the technology menu. Then look across four questions:

  1. Is the work request-response or asynchronous?
  2. Is the trigger a user action, a schedule, an event, or an artifact landing in storage?
  3. Does the work require one step, many parallel steps, or durable multi-step coordination?
  4. What failure mode matters most: latency, duplicate work, dependency overload, or partial completion?

The matrix below is meant to guide the first architecture decision, not to replace deeper review.
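As a sketch only, the four questions can be folded into a small helper that maps answers to a first-pass pattern suggestion. The labels and branch order below are illustrative, not a formal taxonomy, and question four (failure mode) is deliberately left to human review.

```python
def suggest_pattern(sync: bool, trigger: str, steps: str) -> str:
    """Map the matrix questions to a first-pass pattern suggestion.

    Arguments are illustrative labels, not a standard vocabulary:
      sync    -- does the caller need an immediate response?
      trigger -- "user", "schedule", "event", or "artifact"
      steps   -- "one", "parallel", or "coordinated"
    """
    if sync:
        return "api-handler"
    if trigger == "schedule":
        return "scheduled-automation"
    if trigger == "artifact":
        return "object-trigger-processing"
    if steps == "parallel":
        return "fan-out-fan-in"
    if steps == "coordinated":
        return "workflow-orchestration"
    return "single-event-consumer"
```

The point of the exercise is the ordering: synchronous need dominates, trigger shape comes next, and coordination shape decides the rest.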

Pattern Selection Matrix

| Problem shape | Strong default pattern | Why it fits | Watch out for |
| --- | --- | --- | --- |
| User needs a fast synchronous response | API handler | Keeps the request path direct and observable | Letting slow follow-up work stay on the synchronous path |
| User action creates slow background work | API handler plus queue-backed worker | Accept now, process later, and smooth bursts | Missing idempotency or weak status tracking |
| One business fact should notify many downstream consumers | Event processor via topic or event bus | Supports fan-out and looser coupling | Weak schema governance or unclear ownership |
| A file upload should trigger downstream processing | Object-trigger processing | Object storage fits artifacts and can trigger async work naturally | Putting query-critical business state only in object names |
| A workflow needs waits, branching, or human review | Workflow orchestration | Makes long-running state and control flow explicit | Hiding too much business logic inside opaque workflow definitions |
| Many independent items can be processed in parallel and later combined | Fan-out / fan-in workflow | Improves throughput while keeping aggregation explicit | Unbounded parallelism or unclear completion rules |
| Work should happen on a fixed schedule | Scheduled automation | Strong for housekeeping, polling replacement, or periodic projection updates | Using schedules for work that should be event-driven |
| Several local transactions may succeed before one later step fails | Workflow plus compensation pattern | Models distributed recovery explicitly | Pretending compensation is the same as rollback |
| Repeated bursty jobs need buffering | Queue-triggered event processor | Smooths producer spikes and gives retry control | Overloading downstream dependencies with too much consumer concurrency |
| Many consumers need the same filtered event stream | Event routing plus bounded consumers | Supports decoupled reaction paths | Shared event contracts with no versioning or trace context |

Quick Pattern Cues

Use an API handler when

  • the caller needs an answer now
  • the request path should stay short and explicit
  • the work can complete within one bounded invocation
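A minimal sketch of these cues, assuming an HTTP-proxy-style event shape; the field names and the in-line record are illustrative stand-ins, not a specific framework's API.

```python
import json

def get_status(event: dict) -> dict:
    """API handler sketch: one bounded lookup, an immediate answer.

    Nothing slow hangs off the request path; the handler validates,
    reads, and returns.
    """
    order_id = (event.get("pathParameters") or {}).get("order_id")
    if not order_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    # Stand-in for a single datastore read; a real handler would query here.
    record = {"order_id": order_id, "state": "shipped"}
    return {"statusCode": 200, "body": json.dumps(record)}
```

If the handler starts accumulating slow follow-up calls, that is the signal to split it into the API-handler-plus-queue-worker combination described later.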

Use an event processor when

  • the work is naturally asynchronous
  • one system emits a fact and another reacts later
  • burst smoothing or loose coupling matters more than immediate response
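Because queue consumers must tolerate redelivery, an event-processor sketch usually starts with an idempotency check. The in-memory set below is a loudly artificial stand-in: a real system would keep idempotency keys in a durable store.

```python
processed = set()  # stand-in for a durable idempotency store

def consume(message: dict) -> str:
    """Queue-consumer sketch: react to an emitted fact, tolerate duplicates.

    The message shape (a "message_id" key) is assumed for illustration.
    """
    key = message["message_id"]
    if key in processed:
        return "skipped-duplicate"
    # ... the actual reaction goes here (update a projection, call a service) ...
    processed.add(key)
    return "processed"
```

Marking the message as processed only after the reaction succeeds gives at-least-once semantics with duplicate suppression, which is usually the right trade for queue-driven work.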

Use fan-out / fan-in when

  • many items can be processed independently
  • total completion time matters
  • you can define clear aggregation rules for timeout, quorum, or partial success
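The shape can be sketched locally with a bounded thread pool. The aggregation rule here (collect successes, count failures) is one explicit choice among several; quorum or timeout rules are equally valid, and the function names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def _safe(work, item):
    """Run one unit of work, mapping any exception to a failure marker."""
    try:
        return work(item)
    except Exception:
        return None

def fan_out_fan_in(items, work, max_workers=4):
    """Fan-out/fan-in sketch: process items independently, then aggregate.

    max_workers bounds parallelism so a burst cannot overload dependencies.
    """
    successes, failures = [], 0
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for outcome in pool.map(lambda i: _safe(work, i), items):
            if outcome is None:
                failures += 1
            else:
                successes.append(outcome)
    return {"succeeded": successes, "failed": failures}
```

In a managed workflow service the pool becomes a map state and the aggregation a reducer step, but the two decisions stay the same: how wide, and what counts as done.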

Use orchestration when

  • the workflow has several stateful steps
  • waits, branching, retries, or manual review are part of the process
  • operators need visibility into current step and failure position
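What "visibility into current step and failure position" buys can be shown with a toy runner. A real engine persists the position between invocations; this sketch just carries it in the state dict so the stopping point is observable. All names are illustrative.

```python
def run_workflow(state: dict, steps: list) -> dict:
    """Toy orchestration sketch: named steps, explicit position on failure.

    steps is a list of (name, callable) pairs; each callable takes and
    returns the state dict.
    """
    for name, step in steps:
        state["position"] = name
        try:
            state = step(state)
        except Exception as exc:
            # Record where the run stopped so operators can see it.
            state["failed_at"] = name
            state["error"] = str(exc)
            return state
        state.setdefault("history", []).append(name)
    state["position"] = "done"
    return state
```

The matrix's warning applies here too: the steps should stay thin coordination points, with the business logic living in code that can be tested outside the runner.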

Use object-trigger processing when

  • the main artifact is a file, document, image, export, or binary object
  • direct upload or artifact-driven processing keeps the request path lighter
  • storage events are a natural activation boundary
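A thin trigger handler might look like the sketch below, which assumes an S3-style event record shape. It extracts only bucket and key and hands off; per the matrix warning, business state belongs in a real metadata store, not in object names.

```python
def on_object_created(event: dict) -> list:
    """Object-trigger sketch: turn storage events into downstream jobs.

    The Records/s3/bucket/object nesting mirrors the S3 notification
    shape; other object stores use different but analogous envelopes.
    """
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Keep the trigger handler thin: describe the work, don't do it here.
        jobs.append({"bucket": bucket, "key": key, "action": "process"})
    return jobs
```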

Use scheduled automation when

  • the work is time-based, periodic, or cleanup-oriented
  • there is no natural business event to trigger the task
  • the schedule is the business requirement, not a workaround for missing event design

Common Combination Patterns

Some strong solutions combine patterns deliberately:

  • API handler + queue worker: use when the user needs acknowledgement now but the work is slower than the request budget.

  • Object storage + object-trigger processing + workflow: use for uploads, media pipelines, and document-processing systems that may involve review or multi-step extraction.

  • Event bus + projection consumers + workflow engine: use when facts must update several downstream read models and also start coordinated business processes.

  • Scheduled automation + queue worker: use when a periodic job should discover work but not process everything in one invocation.
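The last combination, discovery without processing, fits in a few lines. Both `find_pending` and `enqueue` are illustrative callables standing in for a datastore query and a queue client; the batch limit is what keeps a single scheduled invocation bounded.

```python
def discover_and_enqueue(find_pending, enqueue, batch_limit=100):
    """Scheduled-automation + queue-worker sketch.

    The periodic job only discovers work and enqueues a bounded batch;
    the processing happens in separate queue-triggered consumers.
    Returns the number of items enqueued.
    """
    pending = find_pending()
    for item in pending[:batch_limit]:  # never drain everything in one run
        enqueue({"task": item})
    return min(len(pending), batch_limit)
```

Anything left over is simply picked up on the next tick, which also makes the job safe to rerun after a partial failure.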

Pattern Misfires to Catch Early

Choose something else, or combine patterns differently, when you see these signs:

  • an API handler is being stretched to include several slow remote calls the user does not need to wait on
  • an event processor is being used for work that actually requires clear multi-step state coordination
  • a workflow engine is introduced for simple CRUD or one-step tasks
  • a schedule is being used to poll for work that already has a real event source
  • object-trigger processing is expected to replace a real metadata or transactional store

A Fast Decision Checklist

Before choosing the pattern, ask:

  • What starts the work?
  • Does the initiator need the final answer immediately?
  • Where should durable progress live?
  • What happens if the same work is delivered twice?
  • Which dependency or capacity limit is most likely to fail first?
  • Can the team observe, test, and operate this pattern confidently?

If those answers are still vague, the pattern choice is probably premature. Tighten the problem statement first, then return to the matrix.

Revised on Thursday, April 23, 2026