Describe the common role of object storage for uploads, static content, artifacts, snapshots, archival data, and event triggers. Explain where this pattern is especially powerful.
Object storage patterns are foundational in serverless systems because they let the platform handle large, durable blobs while functions stay small, short-lived, and event-driven. Instead of streaming every upload through application memory or treating a database as a file system, a serverless design usually places binary assets, exports, snapshots, and static artifacts into object storage and uses metadata plus events to drive the rest of the workflow.
That pattern is powerful because object storage gives durable capacity, cheap archival behavior, URL-addressable objects, and event triggers without requiring long-running infrastructure. The trade-off is that object storage is not a transactional query engine. It is good at storing and retrieving objects by key, not at enforcing rich relational integrity or ad hoc filtering.
```mermaid
flowchart LR
    A["Client"] --> B["Signed upload request"]
    B --> C["Upload function"]
    C --> D["Object storage"]
    D --> E["Object created event"]
    E --> F["Processing function"]
    F --> G["Metadata store"]
    F --> H["Derived artifact or thumbnail"]
```
What to notice:

- The client uploads directly to object storage; the upload function only issues the signed request.
- The object-created event, not the upload call itself, drives downstream processing.
- Metadata and derived artifacts are produced by the processing function, keeping the upload path thin.
Object storage is usually the right default for:

- user uploads and other binary assets
- static content and build artifacts
- exports and snapshots
- archival data that is read rarely but must stay durable
- files whose arrival should trigger event-driven processing
In all of these cases, the pattern works because the data is naturally blob-shaped. The consumer wants to fetch or process a named object, not join across rows or run complex transactions.
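To make "fetch a named object" concrete, here is a minimal sketch of key-based addressing. All names are illustrative, not a real SDK: the point is that a consumer derives a fully determined key from metadata it already holds, rather than querying or scanning.

```typescript
// Hypothetical key scheme: a fixed prefix plus identifiers the
// consumer already knows. No query layer sits between the consumer
// and the blob.
function buildObjectKey(tenantId: string, uploadId: string): string {
  return `uploads/${tenantId}/${uploadId}.jpg`;
}

// A consumer reconstructs the key deterministically and fetches the
// object directly from storage by that key.
const key = buildObjectKey("tenant-42", "a1b2c3");
console.log(key); // uploads/tenant-42/a1b2c3.jpg
```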
A common serverless mistake is to send every upload through the function itself. That turns the function into an unnecessary proxy, increases latency, and wastes concurrency on byte shuffling. A better design often lets the client upload directly to object storage through a time-limited signed request, while the function handles authorization, metadata creation, and post-upload processing.
This flow can be described declaratively. A configuration sketch (the schema is illustrative, not tied to a specific platform):

```yaml
upload_flow:
  request:
    route: POST /uploads
    function: create-upload-session
  storage:
    bucket: product-images
    object_prefix: uploads/
  events:
    - type: object.created
      target: generate-thumbnail
    - type: object.deleted
      target: cleanup-metadata
```
The session-creation function itself stays small. This sketch assumes `metadataStore` and `objectStore` clients are provided elsewhere:

```typescript
type UploadSession = {
  uploadUrl: string;
  objectKey: string;
};

export async function createUploadSession(userId: string): Promise<UploadSession> {
  const objectKey = `uploads/${userId}/${crypto.randomUUID()}.jpg`;

  // Record the pending upload before handing out the URL, so the
  // object always has owning metadata even if the transfer never completes.
  await metadataStore.insert({
    objectKey,
    ownerId: userId,
    status: "pending-upload",
  });

  // A short-lived signed URL lets the client upload directly to object
  // storage without streaming bytes through this function.
  const uploadUrl = await objectStore.createSignedUploadUrl(objectKey, {
    expiresInSeconds: 900,
    contentType: "image/jpeg",
  });

  return { uploadUrl, objectKey };
}
```
What this demonstrates:

- Metadata is written before the upload, so a `pending-upload` record exists even if the client abandons the transfer.
- The signed URL is time-limited and content-type constrained, bounding what the client can write.
- The function never touches the file bytes; it only authorizes and coordinates.
Object storage rarely stands alone. Most real systems combine it with a metadata store that tracks:

- which tenant or user owns each object
- processing status, such as pending, processed, or failed, along with failure reasons
- timestamps for auditing and retry decisions
- the object key, so records and blobs stay linked
This matters because object keys are not a substitute for a real query model. If the application needs to ask “show me all pending invoices for customer 42” or “which uploads failed malware scanning,” that answer should come from a database or index, not from parsing filenames.
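As a sketch of the kind of question the metadata store answers, here is an in-memory stand-in. All names are illustrative; a production system would use a database or index, not an array scan:

```typescript
type ObjectMeta = {
  objectKey: string;
  customerId: number;
  status: "pending" | "processed" | "failed";
};

// In-memory stand-in for a real metadata store.
const metadata: ObjectMeta[] = [
  { objectKey: "invoices/42/a.pdf", customerId: 42, status: "pending" },
  { objectKey: "invoices/42/b.pdf", customerId: 42, status: "processed" },
  { objectKey: "invoices/7/c.pdf", customerId: 7, status: "pending" },
];

// "Show me all pending invoices for customer 42" is a metadata query,
// not a walk over object keys or filenames.
function pendingInvoicesFor(customerId: number): string[] {
  return metadata
    .filter((m) => m.customerId === customerId && m.status === "pending")
    .map((m) => m.objectKey);
}
```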
Object storage is strongest when:

- the data is blob-shaped and retrieved by a known key
- durability and cheap archival matter more than query flexibility
- object-created events can drive downstream processing
- clients can upload or download directly through signed URLs
It becomes a bad fit when a team tries to use it as:

- a transactional database with relational integrity guarantees
- a query engine, answering questions by listing buckets or parsing object keys
- a store for small, fine-grained state that changes frequently
Storage-triggered architectures also need care around event timing. Object-created events may arrive more than once, later than expected, or alongside partial downstream failure. That means processors should be idempotent and status should be tracked durably.
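One minimal way to make a processor idempotent is to check recorded status before doing work. The names here are hypothetical, and a real system would persist status durably (for example, a database row written with a conditional update) rather than in memory:

```typescript
type ProcessStatus = "processed" | "skipped";

// Stand-in for durable status tracking; in production this would be
// a conditional write to a database, not an in-memory Set.
const processedKeys = new Set<string>();

// Handler for an object-created event. Because the event may be
// delivered more than once, duplicates are detected from recorded
// status and treated as a no-op.
function handleObjectCreated(objectKey: string): ProcessStatus {
  if (processedKeys.has(objectKey)) {
    return "skipped"; // duplicate delivery: safe to ignore
  }
  // ... do the real work here (generate thumbnail, index, etc.) ...
  processedKeys.add(objectKey);
  return "processed";
}
```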
A team stores user-generated reports in object storage and triggers a function to index them. They now want the application to list all failed reports by tenant and retry only the failed ones. What is missing from the design?
The stronger answer is a durable metadata layer, not “use object storage harder.” The blob store is still correct for the files themselves, but retryable workflow state and queryable status need a structured store that tracks tenant, state, timestamps, and failure reason.
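A sketch of that metadata layer for the report scenario, again with an in-memory stand-in and illustrative field names:

```typescript
type ReportMeta = {
  objectKey: string;
  tenantId: string;
  state: "indexed" | "failed";
  failureReason?: string;
  updatedAt: string;
};

// Stand-in for the durable metadata store the design is missing.
const reports: ReportMeta[] = [
  { objectKey: "reports/t1/a.json", tenantId: "t1", state: "failed",
    failureReason: "parse error", updatedAt: "2024-01-01T00:00:00Z" },
  { objectKey: "reports/t1/b.json", tenantId: "t1", state: "indexed",
    updatedAt: "2024-01-01T00:05:00Z" },
];

// "List all failed reports for a tenant" becomes a query over state,
// while the report files themselves stay in object storage. Retries
// are then driven by this queryable state, not by re-listing the
// bucket or parsing object keys.
function failedReportsFor(tenantId: string): ReportMeta[] {
  return reports.filter((r) => r.tenantId === tenantId && r.state === "failed");
}
```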