Events, Triggers, and Reactive Entry Points

Functions can be triggered in many ways: HTTP requests, queues, topics, streams, schedules, object changes, authentication events, and platform hooks. This section walks through those entry points and shows why serverless is naturally event-driven.

Triggers are the entry points that make serverless systems reactive. A function usually does not sit in a loop waiting for work. Instead, something causes execution: an HTTP request arrives, a queue receives a message, an object is uploaded, a schedule fires, a stream emits records, or an identity or platform event occurs. That makes the trigger model central to serverless design. The trigger decides not only when code runs, but also what kind of latency, retry, ordering, and failure semantics the system inherits.

Teams often think of triggers as wiring. In practice, trigger choice is architecture. An HTTP trigger implies request-response pressure and user-facing latency. A queue trigger implies asynchronous buffering and retry behavior. A stream trigger implies ordering and batch-handling questions. An object-change trigger implies eventual rather than immediate reaction. If you choose the wrong trigger, the handler code has to fight the model.

    flowchart TD
        A["Trigger type"] --> B["HTTP request"]
        A --> C["Queue or topic"]
        A --> D["Stream record"]
        A --> E["Schedule"]
        A --> F["Object or platform event"]
        B --> G["Low-latency request path"]
        C --> H["Buffered async work"]
        D --> I["Ordered or batched consumption"]
        E --> J["Time-based automation"]
        F --> K["Reactive workflow step"]

What to notice:

  • every trigger shape implies a different operational contract
  • the same function code can behave very differently depending on the entry point
  • serverless feels event-driven because so many useful triggers are event-shaped

HTTP and API Triggers

HTTP triggers are the most familiar starting point. A client request arrives through an API gateway or managed HTTP entry point, and the platform routes it to a handler. This is attractive for lightweight APIs because the developer can focus on request validation, authorization, business logic, and response formatting without operating a long-running web server.

But HTTP triggers preserve synchronous pressure. The user or caller is waiting. That means cold starts, slow dependency calls, and large handler responsibilities are immediately visible. Good serverless API design often depends on pushing non-essential work out of the request path into asynchronous triggers.
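The pattern above can be sketched in a few lines. This is an illustrative Python handler, not any platform's real SDK: the `report_jobs` queue stands in for a managed queue client, and the handler name and event shape are assumptions made for the example.

```python
import json
import queue
import uuid

# Stand-in for a managed queue client; a real platform would use the
# provider's SDK here. Everything named in this sketch is illustrative.
report_jobs = queue.Queue()

def handle_create_report(request: dict) -> dict:
    """HTTP-triggered handler: validate, record intent, respond fast.

    Slow or failure-prone work is deferred to a queue so the caller is
    not held on the synchronous request path.
    """
    if "report_type" not in request:
        return {"status": 400, "body": json.dumps({"error": "report_type required"})}

    job_id = str(uuid.uuid4())
    # Enqueue the heavy work instead of doing it inline.
    report_jobs.put({"job_id": job_id, "report_type": request["report_type"]})

    # Acknowledge immediately; the finished report arrives asynchronously.
    return {"status": 202, "body": json.dumps({"job_id": job_id})}
```

Returning `202 Accepted` with a job identifier keeps the user-facing latency tied only to validation and enqueueing, not to report generation.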

Queues, Topics, and Pub/Sub Triggers

Queue and pub/sub triggers are where serverless becomes more naturally event-driven. A queue is useful when one work item should be processed by one worker instance. A topic or broadcast channel is useful when one fact should fan out to several consumers. In both cases, the handler is no longer on a user-facing synchronous path, which gives the system more freedom to retry, smooth bursts, and decouple producers from consumers.

This choice matters because queues and topics solve different problems:

  • queues distribute work
  • pub/sub distributes facts

Calling both “messaging” is technically true but architecturally weak.
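The distinction is easiest to see in code. This minimal Python sketch (class names and methods are invented for illustration) contrasts the two delivery contracts:

```python
from collections import deque

class WorkQueue:
    """Queue semantics: each item is delivered to exactly one consumer."""
    def __init__(self):
        self.items = deque()

    def publish(self, item):
        self.items.append(item)

    def poll(self):
        # Consuming removes the item; no other worker will see it.
        return self.items.popleft() if self.items else None

class Topic:
    """Pub/sub semantics: each published fact reaches every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, fact):
        # The same fact fans out to all registered consumers.
        for handler in self.subscribers:
            handler(fact)
```

A work item polled from the queue is gone; a fact published to the topic is seen by every subscriber. Choosing one where the other's contract is needed is the architectural mistake the text warns about.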

Stream Triggers

Streams are different again. They often deliver ordered records, partitions, or batches that carry sequencing implications. A stream-triggered function may need to think about:

  • checkpoint progression
  • duplicate-safe handling
  • per-partition throughput
  • replay behavior

This is why stream triggers often require more operational discipline than simple queue triggers.
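That discipline can be made concrete. The sketch below, in vendor-neutral Python, shows checkpoint progression and duplicate-safe handling for one partition; the record shape (`offset`, `id`, `payload`) is an assumption for the example, not any stream service's API.

```python
def process_batch(records, checkpoint, seen_ids):
    """Process a batch of stream records with replay-safe semantics.

    `records` are dicts with `offset`, `id`, and `payload`; `checkpoint`
    is the last committed offset for this partition; `seen_ids` tracks
    already-processed record ids so a replay after failure does not
    double-apply side effects.
    """
    results = []
    for record in records:
        if record["offset"] <= checkpoint:
            continue  # already committed; a replay lands here
        if record["id"] in seen_ids:
            continue  # duplicate delivery within the stream
        results.append(record["payload"])  # real side effect goes here
        seen_ids.add(record["id"])
        checkpoint = record["offset"]  # advance only after success
    return results, checkpoint
```

Advancing the checkpoint only after a record succeeds means a crash mid-batch causes a replay from the last committed offset, which the duplicate check then absorbs.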

Scheduled Triggers

Schedules are straightforward but important. They let teams run maintenance, compliance checks, recurring reports, cleanup jobs, and health probes without keeping a long-running cron server alive. The common mistake is to treat scheduled work as harmless just because the trigger is simple. In reality, scheduled jobs still need idempotency, observability, and concurrency awareness, especially if a run overlaps with the next one.

Object Changes, Auth Events, and Platform Hooks

Serverless systems often react to changes in managed services:

  • an object is uploaded
  • an identity event occurs
  • a workflow reaches a callback point
  • a platform lifecycle hook fires

These triggers are powerful because they let systems respond directly to managed-service events. They are also a common source of hidden coupling if teams do not document ownership and failure behavior clearly.
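Because most platforms deliver these events at least once, a handler should tolerate duplicates explicitly. The event shape below (`bucket`, `object_key`, `etag`) is an assumption for illustration; an in-memory set stands in for whatever durable store would track processed events in production.

```python
# Illustrative dedupe store; production code would use durable storage.
processed = set()

def on_object_created(event: dict) -> str:
    """Object-created handler that tolerates duplicate delivery.

    The work is keyed on the object's identity and version, so a second
    delivery of the same event becomes a harmless no-op.
    """
    key = (event["bucket"], event["object_key"], event.get("etag"))
    if key in processed:
        return "duplicate: already indexed"
    processed.add(key)
    # ... index the document, extract metadata, notify downstream ...
    return f"indexed {event['object_key']}"
```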

Trigger Choice Shapes the Handler

A strong handler design begins by respecting trigger semantics. Compare these two simplified examples.

    trigger:
      type: http
      route: POST /reports
    expectation:
      latency: low
      response: immediate

    trigger:
      type: queue
      source: report-jobs
    expectation:
      latency: eventual
      response: ack and process later

Both may generate reports. But the first must optimize for request latency and direct caller experience. The second can optimize for retry safety, throughput smoothing, and asynchronous completion. Treating them as interchangeable would be a design mistake.

A Simple Routing Example

This vendor-neutral pseudo-configuration shows how several triggers can feed different serverless entry points.

    routes:
      - type: http
        path: POST /orders
        function: create-order

      - type: queue
        source: order-jobs
        function: fulfill-order

      - type: schedule
        cron: "0 */6 * * *"
        function: reconcile-orders

      - type: object_created
        bucket: order-documents
        function: index-document

This is a better mental model than “the app has functions.” The system has several reactive entry points, and each has a different contract.
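The same mental model can be expressed as a tiny dispatcher. This Python sketch mirrors the routing configuration above; the route table and function names come from that example, and the `dispatch` function itself is invented for illustration.

```python
# Each (trigger type, selector) pair resolves to a named entry point,
# mirroring the vendor-neutral routing configuration in the text.
ROUTES = {
    ("http", "POST /orders"): "create-order",
    ("queue", "order-jobs"): "fulfill-order",
    ("schedule", "0 */6 * * *"): "reconcile-orders",
    ("object_created", "order-documents"): "index-document",
}

def dispatch(trigger_type: str, selector: str):
    """Resolve an incoming event to its entry-point function, if any."""
    return ROUTES.get((trigger_type, selector))
```

Seen this way, "the app" is a set of contracts keyed by trigger, not a bag of functions.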

Common Mistakes

  • using an HTTP trigger for work that should clearly be asynchronous
  • using a queue where several consumers need the same fact
  • ignoring ordering and replay implications on stream triggers
  • assuming object or platform events are automatically safe and duplicate-free
  • treating trigger selection as implementation detail instead of architectural choice

Design Review Question

A team accepts document uploads over HTTP and performs virus scanning, metadata extraction, thumbnail generation, and notification delivery in the same request because “the platform can scale the function.” What is the first thing to challenge?

Challenge the trigger choice and request boundary. The stronger answer is that the upload request should probably accept and persist the document quickly, then hand longer or failure-prone work to asynchronous triggers. The problem is not simply that the function may get large. The problem is that the HTTP entry point is carrying work with the wrong latency and retry shape.

Revised on Thursday, April 23, 2026