Decision matrix for choosing serverless patterns by workload, latency, and operational risk.
Use this appendix when the workload is clear but the right serverless pattern is still uncertain. The goal is not to force one answer for every problem. It is to help narrow the design choice quickly by asking what kind of work is being done, whether the caller must wait, how stateful the process is, and what failure behavior has to be tolerated.
```mermaid
flowchart TD
    A["Start with the problem"] --> B{"Does the caller need an immediate response?"}
    B -->|Yes| C["API handler or request path function"]
    B -->|No| D{"Is the work triggered by an event or artifact?"}
    D -->|Event| E["Event processor or queue consumer"]
    D -->|Artifact| F["Object-triggered processing"]
    E --> G{"Many parallel items?"}
    G -->|Yes| H["Fan-out / fan-in workflow"]
    G -->|No| I{"Several coordinated steps?"}
    I -->|Yes| J["Workflow orchestration"]
    I -->|No| K["Single event consumer"]
```
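The flowchart can also be read as a small decision function. The sketch below is illustrative only: the parameter names and return strings are assumptions made for this appendix, not terms from any platform's API.

```python
def choose_pattern(needs_immediate_response: bool,
                   trigger: str = "event",
                   many_parallel_items: bool = False,
                   coordinated_steps: bool = False) -> str:
    """Walk the decision tree and return a candidate pattern name.

    `trigger` is "event" or "artifact"; flags mirror the flowchart branches.
    """
    if needs_immediate_response:
        return "API handler"
    if trigger == "artifact":
        return "object-triggered processing"
    # Event-triggered work: check shape before defaulting to a single consumer.
    if many_parallel_items:
        return "fan-out / fan-in workflow"
    if coordinated_steps:
        return "workflow orchestration"
    return "single event consumer"
```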
What to notice:

Read the problem statement first, not the technology menu. Then look across four questions:

- What kind of work is being done?
- Does the caller need to wait for a response?
- How stateful is the process?
- What failure behavior has to be tolerated?

The matrix below is meant to guide the first architecture decision, not to replace deeper review.
| Problem shape | Strong default pattern | Why it fits | Watch out for |
|---|---|---|---|
| User needs a fast synchronous response | API handler | Keeps the request path direct and observable | Letting slow follow-up work stay on the synchronous path |
| User action creates slow background work | API handler plus queue-backed worker | Accept now, process later, and smooth bursts | Missing idempotency or weak status tracking |
| One business fact should notify many downstream consumers | Event processor via topic or event bus | Supports fan-out and looser coupling | Weak schema governance or unclear ownership |
| A file upload should trigger downstream processing | Object-triggered processing | Object storage fits artifacts and can trigger async work naturally | Putting query-critical business state only in object names |
| A workflow needs waits, branching, or human review | Workflow orchestration | Makes long-running state and control flow explicit | Hiding too much business logic inside opaque workflow definitions |
| Many independent items can be processed in parallel and later combined | Fan-out / fan-in workflow | Improves throughput while keeping aggregation explicit | Unbounded parallelism or unclear completion rules |
| Work should happen on a fixed schedule | Scheduled automation | Strong for housekeeping, polling replacement, or periodic projection updates | Using schedules for work that should be event-driven |
| Several local transactions may succeed before one later step fails | Workflow plus compensation pattern | Models distributed recovery explicitly | Pretending compensation is the same as rollback |
| Repeated bursty jobs need buffering | Queue-triggered event processor | Smooths producer spikes and gives retry control | Overloading downstream dependencies with too much consumer concurrency |
| Many consumers need the same filtered event stream | Event routing plus bounded consumers | Supports decoupled reaction paths | Shared event contracts with no versioning or trace context |
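Several rows above flag missing idempotency as the main failure mode for queue-backed work. The sketch below shows one minimal shape for an idempotent consumer; the in-memory set is a stand-in for a durable idempotency store (in practice, a database table or conditional write), and the function names are illustrative.

```python
processed_ids: set[str] = set()  # stand-in for a durable idempotency store

def handle_message(message_id: str, payload: dict) -> bool:
    """Process a delivery at most once per message_id.

    Returns True if the work ran, False for a duplicate redelivery.
    """
    if message_id in processed_ids:
        return False                  # duplicate delivery: skip side effects
    do_work(payload)                  # business logic goes here
    processed_ids.add(message_id)     # record success only AFTER the work,
    return True                       # so a crash mid-work leads to a retry

def do_work(payload: dict) -> None:
    payload.setdefault("handled", True)  # placeholder side effect
```

Recording the message id after the work (not before) is what makes redelivery safe: a consumer that crashes mid-way leaves no record, so the retry runs the work again instead of silently dropping it.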
Some strong solutions combine patterns deliberately:
**API handler + queue worker.** Use when the user needs acknowledgement now but the work is slower than the request budget.
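The API-handler-plus-queue-worker split can be sketched in-process, with `queue.Queue` standing in for a managed queue and a dict standing in for a job-status table; all names here are assumptions for illustration.

```python
import queue
import uuid

jobs: dict[str, str] = {}                # job_id -> status (stand-in for a status table)
work_queue: queue.Queue = queue.Queue()  # stand-in for a managed queue service

def api_handler(request_body: dict) -> dict:
    """Accept immediately; defer the slow work to the queue."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = "accepted"
    work_queue.put((job_id, request_body))
    return {"status": 202, "job_id": job_id}  # caller polls job status later

def worker_step() -> None:
    """One invocation of the queue-backed worker."""
    job_id, body = work_queue.get_nowait()
    jobs[job_id] = "done"                     # real processing would happen here
```

Returning 202 with a job id, rather than blocking, is what keeps the request path inside its latency budget while still giving the caller a handle for status tracking.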
**Object storage + object-triggered processing + workflow.** Use for uploads, media pipelines, and document-processing systems that may involve review or multi-step extraction.
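The trigger side of that combination can be sketched as an object-created handler that routes the artifact into a workflow by key suffix. `start_workflow` and the workflow names are hypothetical stand-ins for a real workflow engine's start call.

```python
def on_object_created(bucket: str, key: str) -> str:
    """Route an uploaded artifact to an appropriate workflow by suffix."""
    if key.endswith((".pdf", ".docx")):
        return start_workflow("document-extraction", bucket, key)
    if key.endswith((".png", ".jpg", ".mp4")):
        return start_workflow("media-pipeline", bucket, key)
    return start_workflow("manual-review", bucket, key)  # unknown types go to review

def start_workflow(name: str, bucket: str, key: str) -> str:
    # Stand-in: a real system would call the workflow engine's start API here.
    return name
```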
**Event bus + projection consumers + workflow engine.** Use when facts must update several downstream read models and also start coordinated business processes.
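An in-memory sketch of that fan-out: one published fact both updates a projection and starts a workflow. The dict-of-handlers bus, the event shape, and the handler names are all illustrative assumptions, not a real event-bus API.

```python
from collections import defaultdict

subscribers: dict = defaultdict(list)  # event type -> handlers (toy event bus)

def subscribe(event_type: str, handler) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, event: dict) -> None:
    for handler in subscribers[event_type]:
        handler(event)                 # each consumer reacts independently

orders_by_customer: dict = defaultdict(int)  # a projection (read model)
fulfillment_started: list = []               # records workflow starts

def update_projection(event: dict) -> None:
    orders_by_customer[event["customer"]] += 1

def start_fulfillment(event: dict) -> None:
    fulfillment_started.append(event["order_id"])

subscribe("order_placed", update_projection)
subscribe("order_placed", start_fulfillment)
publish("order_placed", {"customer": "c1", "order_id": "o1"})
```

Note that the producer publishes one fact and never learns who consumed it; that is the loose coupling the matrix row describes, and also why schema governance matters.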
**Scheduled automation + queue worker.** Use when a periodic job should discover work but not process everything in one invocation.
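The discovery half of that pairing can be sketched as a scheduled job that finds due items and enqueues them, capped so one invocation never tries to drain everything. The batch limit, the queue stand-in, and `find_due_items` are assumptions for illustration.

```python
import queue

work_queue: queue.Queue = queue.Queue()  # stand-in for a managed queue

def scheduled_discovery(find_due_items, batch_limit: int = 100) -> int:
    """Enqueue due work; leftovers wait for the next scheduled tick."""
    enqueued = 0
    for item in find_due_items():
        if enqueued >= batch_limit:
            break                        # remaining items are found next tick
        work_queue.put(item)
        enqueued += 1
    return enqueued
```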
Choose something else, or combine patterns differently, when the "Watch out for" column above reads less like an edge case and more like a description of your system.

Before choosing a pattern, return to the four questions from the start of this appendix: what kind of work is being done, whether the caller must wait, how stateful the process is, and what failure behavior has to be tolerated. If those answers are still vague, the pattern choice is probably premature. Tighten the problem statement first, then return to the matrix.