A practical lesson on how service boundaries change test strategy, why one test layer is never enough, and how to balance unit, component, contract, integration, and end-to-end tests.
A boundary-aware testing strategy treats service boundaries as places where confidence must be assembled from several layers rather than from one giant test suite. In a monolith, many behaviors can be verified through local transactions and in-process tests. In a distributed system, that confidence is spread across local logic, published contracts, selected collaborations, and a few end-to-end journeys. Teams get into trouble when they try to force one layer to do the job of all the others.
That is why testing strategy belongs in architecture review. A boundary that cannot be tested proportionately is usually a boundary that will be changed fearfully or operated reactively.
```mermaid
flowchart TD
    A["Unit tests"] --> E["Confidence stack"]
    B["Component tests"] --> E
    C["Contract tests"] --> E
    D["Selective integration and E2E tests"] --> E
```
What to notice:

- Every layer feeds the same confidence stack; no single layer is the whole strategy.
- The arrows describe how confidence is assembled, not a hierarchy in which one layer replaces another.
A practical distributed testing stack often looks like this:

- Unit tests for local logic, such as pricing rules and their edge cases.
- Component tests for the service's own API behavior, configuration, and wiring.
- Contract tests for the assumptions consumers make about provider responses.
- A small set of integration and end-to-end tests for selected collaborations and critical journeys.
The layers overlap slightly, but they do not have the same purpose. Trying to replace contract confidence with a huge end-to-end suite usually creates slow and brittle feedback. Trying to replace component tests with unit tests usually misses wiring, configuration, and boundary behavior.
Some teams talk about microservices as if local testing matters less once services are extracted. The opposite is usually true. Strong unit and component tests reduce the number of expensive distributed tests needed for safe iteration.
A component test often exercises the service with its local storage and service wiring, but with external collaborators stubbed or simulated:
```typescript
import { createPricingApp } from "./pricing-app";
import request from "supertest";
// fakeInventoryClient and fixedClock are assumed local test helpers;
// the module path is illustrative.
import { fakeInventoryClient, fixedClock } from "./test-helpers";

it("returns a quote with an expiry time", async () => {
  const app = createPricingApp({
    inventoryClient: fakeInventoryClient({ available: true }),
    clock: fixedClock("2026-03-23T10:00:00Z"),
  });

  const response = await request(app)
    .post("/price-quote")
    .send({ sku: "sku-42", quantity: 2 });

  expect(response.status).toBe(200);
  expect(response.body).toMatchObject({
    currency: "USD",
    expiresAt: "2026-03-23T10:15:00Z",
  });
});
```
What this demonstrates:

- The service is exercised through its real HTTP surface, so routing, validation, and serialization are covered, not just the business logic.
- The external inventory collaborator is replaced with a fake, keeping the test fast and independent of other services.
- A fixed clock makes the expiry assertion deterministic instead of flaky.
Integration tests are strongest when they target:

- A small number of genuinely risky collaborations, such as a service together with its real database or message broker.
- Behavior that stubs cannot vouch for, such as queries, migrations, and serialization across a real wire.
- Failure modes at a specific boundary, such as timeouts and retries against a real dependency.
They are weaker when teams try to model the whole production system in every pre-merge run. That creates slow pipelines and fragile environments without necessarily producing better diagnoses.
End-to-end tests are important for:

- A handful of critical user journeys that cross several services, such as the quote-to-order checkout path.
- Catching deployment, configuration, and wiring problems that no narrower layer can see.
- Confirming that independently released services still hang together as a system.
They are not a good replacement for local or contract-oriented confidence. A small number of important end-to-end checks is usually much healthier than a giant suite that fails often and explains little.
One useful way to keep the strategy disciplined is to document which risks belong to which layer:
```yaml
testing_strategy:
  unit:
    protects:
      - pricing_rules
      - discount_edge_cases
  component:
    protects:
      - price_quote_api_behavior
      - persistence_and_validation_wiring
  contract:
    protects:
      - checkout_assumptions_about_pricing_response
  end_to_end:
    protects:
      - checkout_quote_to_order_happy_path
```
What this demonstrates:

- Each named risk has one primary layer responsible for it, so gaps and duplication are visible at a glance.
- The expensive layers stay small because the cheap layers own most of the risks.
- When a test fails, the mapping suggests which kind of change is likely responsible.
A team plans to rely mostly on end-to-end tests because “they exercise the real system.” It has very few component tests and no contract tests between provider and consumer services. What is the main weakness in that plan?
The main weakness is that the team is concentrating confidence in the slowest and coarsest layer. End-to-end tests can confirm that a major path still works, but they are poor at catching boundary assumptions early and poor at explaining which service changed unsafely. A stronger design would add component and contract confidence so the end-to-end suite can stay smaller and more purposeful.