Glossary of Serverless Terms

Key terms for serverless compute, events, workflows, scaling, and operations.

This glossary keeps the guide’s vocabulary stable across architecture, security, delivery, and operations discussions. Use it when a term sounds familiar but overloaded, or when two related ideas such as trigger and event source, or retry and replay, need to be separated cleanly before making a design decision.

    flowchart LR
        A["Client or system event"] --> B["Trigger or event source"]
        B --> C["Function invocation"]
        C --> D["Execution identity"]
        C --> E["State or storage"]
        C --> F["Queue, stream, or workflow"]
        F --> G["Retry, DLQ, replay, or compensation"]

What to notice:

  • serverless terms are connected, not isolated
  • one invocation sits at the center of identity, state, and event flow
  • many confusing terms become clearer when placed in the full execution path

Compute and Invocation Terms

Function
The deployable unit of serverless compute that runs in response to a trigger or event. A function should usually have one bounded responsibility and one clear permission shape.

Handler
The entry point inside the function code that receives the event or request and begins execution. One deployment unit may expose one handler, but the handler is not the whole architecture.
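As an illustration, a minimal handler in Python. The `event`/`context` signature follows the common Lambda-style convention; the field names and `process_order` helper are illustrative, not a specific platform's API:

```python
import json

def handler(event, context=None):
    """Entry point: receives the event, validates it, delegates."""
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    # Keep the handler thin; real work lives in plain functions below,
    # which stay testable without any platform event plumbing.
    return {"statusCode": 200, "body": json.dumps(process_order(order_id))}

def process_order(order_id):
    # Domain logic, independent of the trigger shape.
    return {"order_id": order_id, "status": "accepted"}
```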

Invocation
One execution of a function for one request, batch, or event delivery. Billing, latency, retries, and concurrency are often discussed at the invocation level.

Runtime
The language and execution environment used to run the function. Runtime choice influences cold starts, dependency loading, debugging tools, and sometimes portability.

Cold Start
The startup delay that occurs when the platform must prepare a fresh execution environment before running the handler. Cold starts are affected by runtime, package size, initialization work, and traffic shape.

Warm Reuse
The reuse of an existing execution environment for later invocations. Warm reuse can improve latency, but it is an optimization behavior, not a durability guarantee.
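A common pattern that exploits warm reuse, sketched in Python: expensive initialization sits at module scope so it runs once per cold start, while the handler runs per invocation. The cached object is an optimization only, never durable state:

```python
import time

def _build_client():
    """Stand-in for expensive init: loading config, opening connections."""
    time.sleep(0.01)
    return {"initialized_at": time.time()}

# Runs once per execution environment (the cold-start cost),
# then is reused across warm invocations of the same environment.
CLIENT = _build_client()

def handler(event, context=None):
    # Warm invocations reuse CLIENT, but correctness must not depend on
    # it surviving: the environment can be recycled at any time.
    return {"init_time": CLIENT["initialized_at"]}
```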

Trigger and Event Terms

Trigger
The mechanism that causes a function to run. Common triggers include HTTP requests, queue messages, object-storage events, schedules, and workflow transitions.

Event Source
The system that produces the input which activates the function. The trigger is the activation mechanism; the event source is the origin of the data or signal.

Event Source Mapping
The configuration that connects an event source such as a queue or stream to a function and defines how records are delivered, batched, retried, or acknowledged.

Event
A record that something happened. In serverless and event-driven systems, an event is usually a fact, not a command.

Queue
A work-distribution mechanism where messages are processed by one consumer path at a time. Queues are strong for background jobs and burst smoothing.

Topic or Event Bus
A fact-distribution mechanism where one published event may be routed to several consumers. Topics and event buses are stronger than queues when one event should drive several downstream reactions.

Stream
An ordered sequence of events or records, often partitioned and replayable. Streams are useful when order, replay, or progressive consumption matter.

Scaling and Performance Terms

Concurrency
The number of function invocations running at the same time. Concurrency is both a performance capacity and a risk boundary because it affects downstream pressure.

Reserved Concurrency
A guaranteed or isolated share of execution capacity set aside for a specific function or path. It helps protect critical workloads from noisy neighbors.

Throttling
The refusal or delaying of new work because a configured or platform limit has been reached. Throttling can be a failure symptom or an intentional safety control.
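Throttling as an intentional safety control can be sketched as a token bucket, one common admission-control shape (illustrative, not any platform's actual limiter):

```python
import time

class TokenBucket:
    """Admit work only while tokens remain; refill at a fixed rate."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # throttled: refuse or delay the work
```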

Latency
The time taken to complete a request, event-processing step, or dependency call. In serverless systems, latency often reflects remote-call shape as much as raw compute speed.

Batch Size
The number of records delivered to one invocation from a queue or stream. Larger batches can improve efficiency, but they also change error handling and dependency pressure.
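Batch delivery changes error handling because one bad record should not necessarily fail the whole batch. A hedged sketch of per-record processing that reports failures individually (the "report failed items" shape is common in queue and stream mappings; the field names here are illustrative):

```python
def handle_batch(records, process):
    """Process each record; collect failures instead of failing the batch."""
    failed = []
    for record in records:
        try:
            process(record)
        except Exception:
            # Only the bad records are reported for redelivery or quarantine;
            # the rest of the batch is acknowledged as done.
            failed.append(record["id"])
    return {"failed_ids": failed}
```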

Reliability and Workflow Terms

Retry
A repeated attempt to execute the same logical operation after a failure or timeout. Retries are normal in serverless systems and must be expected in the design.

Backoff
The delay between retries, usually increasing after repeated failure. Backoff reduces retry storms and gives dependencies time to recover.
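Exponential backoff with full jitter can be sketched as below. The formula `min(cap, base * 2**attempt)` with a random fraction is one common choice, not the only one:

```python
import random

def backoff_delay(attempt, base=0.1, cap=30.0):
    """Delay in seconds before retry `attempt` (0-based): exponential
    growth, capped, with full jitter to spread out retry storms."""
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0, exp)
```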

Idempotency
The property that reprocessing the same logical operation does not create a new unintended business effect. Idempotency is one of the core reliability requirements in serverless design.
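One common implementation is an idempotency key checked before applying the business effect. A minimal in-memory sketch; a real system would use a durable store with conditional writes instead of a process-local dict:

```python
_processed = {}  # stand-in for a durable store with conditional writes

def apply_once(key, effect):
    """Run `effect` only the first time `key` is seen; a redelivery
    returns the recorded result instead of repeating the effect."""
    if key in _processed:
        return _processed[key]
    result = effect()
    _processed[key] = result
    return result
```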

DLQ (Dead-Letter Queue)
A quarantine location for work that failed too many times or should stop being retried automatically. A DLQ is useful only when paired with inspection, ownership, and replay policy.
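The retry-then-quarantine flow can be sketched as: attempt up to a limit, then route to a dead-letter store for inspection rather than retrying forever (names and shapes here are illustrative):

```python
def process_with_dlq(message, process, max_attempts=3, dlq=None):
    """Try `process` up to max_attempts; on exhaustion, quarantine."""
    dlq = dlq if dlq is not None else []
    for attempt in range(max_attempts):
        try:
            return process(message)
        except Exception:
            continue  # in practice: backoff between attempts
    # Quarantined work still needs ownership and a replay policy.
    dlq.append(message)
    return None
```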

Replay
The act of processing a past event or quarantined message again. Replay is operationally powerful but can be dangerous without idempotency and clear safety rules.

Workflow Engine
A managed or explicit orchestration layer that coordinates branching, waits, retries, and long-running process state across several steps.

Orchestration
Central coordination of multi-step work through an explicit control flow. In serverless systems, orchestration is often preferable to hidden coordination scattered across handlers.

Saga
A multi-step distributed process where each step commits locally and failures are handled through compensating actions rather than one global transaction.

Compensation
A recovery action that attempts to undo or offset an earlier successful step after a later step fails. Compensation is not the same as a perfect rollback.
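A saga can be sketched as ordered steps, each paired with a compensating action; when a later step fails, compensations run in reverse for the steps that already committed. An illustrative shape, not a workflow engine's API:

```python
def run_saga(steps):
    """steps: list of (do, undo) pairs. On failure, undo completed
    steps in reverse order. Compensation offsets; it does not rewind."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()  # best-effort compensation for committed steps
        return False
    return True
```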

State and Storage Terms

Stateless Compute
The design principle that function correctness should not depend on local memory or local disk surviving across invocations. Local state may help optimization, but durable truth must live elsewhere.

State Externalization
The practice of moving workflow progress, checkpoints, deduplication markers, or other correctness-critical state into durable systems rather than keeping it inside the runtime.

Object Storage
Durable storage for files, artifacts, snapshots, and large payloads addressed by key. It is strong for blob-shaped data, not for rich transactional queries.

Transactional Store
A database optimized for correctness-critical reads and writes with stronger consistency and update rules. This may be relational, key-value, or document-oriented depending on the workload.

Materialized View
A read-optimized representation of data shaped for one query or product view. A materialized view is different from a cache because it is an intentional alternate read model.

Cache
An optimization layer that stores frequently reused data closer to the read path. A cache improves speed, but it should not quietly become the only source of correctness-critical truth.

Identity and Security Terms

Execution Identity
The security principal a function uses when it calls other services. It determines what the function is allowed to read, write, invoke, or publish.

Least Privilege
The principle of granting only the permissions a function or workflow actually needs. In serverless systems, broad permissions create large automated blast radius.

Secret
A sensitive value such as an API key, signing secret, token, or credential that must be protected, rotated, and accessed in a controlled way.

Tenant Context
The identity and scope that connect a request, event, or resource to one tenant in a multi-tenant system. Tenant context must be carried and revalidated across hops.

Operational and Delivery Terms

Structured Telemetry
Logs and signals emitted in a machine-readable form with stable fields such as request ID, correlation ID, tenant, workflow step, and outcome.
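A sketch of one structured log line in Python: stable, machine-readable fields rather than free text. The field names are illustrative, not a standard schema:

```python
import json

def log_event(request_id, correlation_id, tenant, step, outcome, **extra):
    """Emit one structured log record with stable field names."""
    record = {
        "request_id": request_id,
        "correlation_id": correlation_id,
        "tenant": tenant,
        "step": step,
        "outcome": outcome,
        **extra,
    }
    print(json.dumps(record, sort_keys=True))  # one JSON object per line
    return record
```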

Correlation ID
A stable identifier used to connect related work across several functions, services, or asynchronous hops.
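Propagation is the key behavior: reuse the incoming ID when present, mint one only at the edge, and attach it to every outgoing message. A minimal sketch with an illustrative in-memory outbox:

```python
import uuid

def correlation_id_for(event):
    """Reuse the incoming ID; only the system edge mints a new one."""
    return event.get("correlation_id") or str(uuid.uuid4())

def publish(payload, correlation_id, outbox):
    """Attach the ID to every outgoing hop so async work stays linked."""
    outbox.append({**payload, "correlation_id": correlation_id})
```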

Trace
A linked record of one request or event path through several components. Traces are especially useful when the system spans functions, queues, workflows, and dependencies.

Infrastructure as Code (IaC)
The practice of defining deployable infrastructure and platform behavior in versioned configuration rather than manual console changes. In serverless systems, this includes triggers, permissions, timeouts, and service bindings.

Canary Release
A progressive rollout where only a small percentage of traffic or events are exposed to a new version at first. Canarying limits blast radius while health is evaluated.

Feature Flag
A runtime switch that controls whether a feature or behavior is active for some or all users or tenants. Flags help separate deployment from activation, but they need ownership and cleanup.
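A minimal flag check with tenant scoping. A real system would read flag state from a managed store; this sketch hard-codes it to show the shape:

```python
FLAGS = {
    # flag name -> set of tenants it is active for, or "all"
    "new-checkout": {"tenant-a"},
    "dark-mode": "all",
}

def is_enabled(flag, tenant):
    """Deployment shipped the code; activation is a runtime decision."""
    scope = FLAGS.get(flag)
    if scope == "all":
        return True
    return bool(scope) and tenant in scope
```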

Common Distinctions

Function vs Handler
A function is the deployable compute unit. A handler is the code entry point inside it.

Trigger vs Event Source
A trigger activates the function. The event source is the system that produced the event or message being delivered.

Queue vs Workflow Engine
A queue distributes background work. A workflow engine coordinates multi-step process state, waits, branching, and recovery logic.

Retry vs Replay
A retry is an immediate or policy-driven repeated attempt after failure. A replay is a deliberate reprocessing of past or quarantined work, usually for recovery or rebuilding.

Cache vs Materialized View
A cache keeps recent or reusable data near the read path. A materialized view is an intentional alternate read model designed for a specific query shape.

Revised on Thursday, April 23, 2026