Workbook-style prompts for reviewing service-boundary decisions and decomposition trade-offs.
This appendix is a study and review workbook for the guide. Use it after reading a chapter, before an architecture review, or when preparing to challenge a proposed service split. The point is not to memorize preferred wording. The point is to practice asking sharper questions about boundary quality, coupling, ownership, data, workflow, and operational cost.
One effective study loop is simple:
```mermaid
flowchart LR
    A["Read one chapter"] --> B["Answer review question from memory"]
    B --> C["Work through scenario prompt"]
    C --> D["Check whether your answer used the chapter's core trade-offs"]
    D --> E["Revisit the chapter if the reasoning was weak"]
```
If your answer is mostly opinion and not evidence, return to the chapter and identify which signals or trade-offs you missed.
Review question: What makes a boundary architectural rather than merely organizational or technical?
Scenario prompt: A team says it is moving to microservices mainly to use containers and separate repositories. What boundary-focused challenge would you raise first?
Answer lens: A strong answer should connect boundaries to ownership, data, change cadence, and workflow, not only packaging.
Review question: In what situations is a modular monolith stronger than a service-based architecture?
Scenario prompt: A small team wants five services immediately because it expects growth next year. How would you test whether that forecast justifies distribution today?
Answer lens: A strong answer should compare current ownership and operational readiness against speculative future need.
Review question: Why do business capabilities usually produce stronger service candidates than technical layers?
Scenario prompt: A design review is split between organizing services by backend layer versus by value stream and capability. Which questions help break the tie?
Answer lens: A strong answer should include business meaning, handoff points, and workflow ownership.
Review question: What is the difference between a bounded context and a deployable service?
Scenario prompt: Two teams both use the word “account,” but they mean different things operationally. What kind of architectural problem does that usually foreshadow?
Answer lens: A strong answer should focus on model meaning, translation, and accidental overlap.
Review question: Which heuristics are usually strongest for drawing a service boundary, and which are usually weak?
Scenario prompt: A team wants to split its system by CRUD entity because it feels objective. How would you challenge that proposal using heuristic quality rather than taste?
Answer lens: A strong answer should compare business capability, change cadence, and data ownership against table- or screen-driven reasoning.
Review question: Why is “one service, one source of truth” such a central decomposition principle?
Scenario prompt: A reporting team says direct reads into service databases are harmless because they are read-only. What is the deeper architectural risk?
Answer lens: A strong answer should mention hidden coupling, ownership erosion, and shadow sources of truth.
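The “harmless because read-only” claim can be made concrete with a small sketch. This is an illustrative Python example, not code from the guide; the column names (`amount_minor`, `amount_cents`) and the internal rename are hypothetical:

```python
# Hidden coupling sketch: a reporting job reads a service's internal
# rows directly, so an internal column rename breaks it, while
# consumers of the published API contract are unaffected.
# All names here are hypothetical.

INTERNAL_ROWS = [{"order_id": 1, "amount_minor": 1250}]  # renamed internally


def orders_api(order_id: int) -> dict:
    """Published contract: the service maps internals to a stable shape."""
    row = next(r for r in INTERNAL_ROWS if r["order_id"] == order_id)
    return {"order_id": row["order_id"], "amount_cents": row["amount_minor"]}


def reporting_direct_read(order_id: int) -> int:
    """Bypasses the contract and depends on internal column names."""
    row = next(r for r in INTERNAL_ROWS if r["order_id"] == order_id)
    return row["amount_cents"]  # KeyError after the internal rename


print(orders_api(1))  # the contract consumer still works
try:
    reporting_direct_read(1)
except KeyError:
    print("reporting job broke on an internal rename")
```

The read was never harmless: it silently made the reporting team a consumer of the service's private schema, which is exactly the shadow source of truth the answer lens names.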
Review question: When is synchronous communication healthy, and when does it become a boundary smell?
Scenario prompt: A normal checkout request makes nine synchronous calls. The team wants to fix it only with caching. What is the stronger review concern?
Answer lens: A strong answer should ask whether the problem is really latency alone or whether the workflow and contracts are too fragmented.
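A quick back-of-the-envelope sketch shows why caching alone undersells the concern: sequential synchronous calls compound both failure probability and latency. The per-call figures below (99.9% availability, 40 ms latency) are illustrative assumptions, not numbers from the scenario:

```python
# Illustrative: how nine sequential synchronous calls compound
# failure probability and latency. Figures are assumed, not measured.

def chain_availability(per_call: float, calls: int) -> float:
    """A request succeeds only if every call in the chain succeeds."""
    return per_call ** calls


def chain_latency_ms(per_call_ms: float, calls: int) -> float:
    """Sequential calls add latency linearly (no parallelism assumed)."""
    return per_call_ms * calls


calls = 9
print(f"availability: {chain_availability(0.999, calls):.4f}")  # ~0.9910
print(f"latency: {chain_latency_ms(40.0, calls):.0f} ms")       # 360 ms
```

Caching can shrink the latency term, but the multiplied failure surface and the nine contracts in the request path remain, which is why the stronger review question targets workflow fragmentation rather than speed.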
Review question: What changes about correctness once a workflow crosses several service boundaries?
Scenario prompt: A team wants a fully atomic multi-service order flow but also wants independent services for every major step. What trade-off is it avoiding?
Answer lens: A strong answer should include distributed transaction cost, staged states, compensation, and whether the split is premature.
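The alternative the team is avoiding usually looks like staged states with compensation rather than one atomic multi-service transaction. A minimal sketch of that compensation shape, with hypothetical step names and no real saga framework:

```python
# Minimal compensation sketch: run steps in order; on failure, undo
# the completed steps in reverse. Step names are hypothetical.

def reserve_stock(log):
    log.append("reserve-stock")

def release_stock(log):
    log.append("release-stock")

def charge_payment(log):
    raise RuntimeError("payment declined")

def void_payment(log):
    log.append("void-payment")


def run_with_compensation(steps):
    """Each step is a (do, undo) pair. Returns (ok, action log)."""
    done, log = [], []
    for do, undo in steps:
        try:
            do(log)
            done.append(undo)
        except Exception:
            for undo_fn in reversed(done):
                undo_fn(log)
            return False, log
    return True, log


ok, log = run_with_compensation(
    [(reserve_stock, release_stock), (charge_payment, void_payment)]
)
print(ok, log)  # False ['reserve-stock', 'release-stock']
```

Every (do, undo) pair is design and testing work the team must own per boundary; if it is unwilling to pay that cost, the split may be premature, which is the trade-off the answer lens points at.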
Review question: Why is one-team-one-service a useful bias but weak as a rigid rule?
Scenario prompt: The same team owns three small services that always change together. Another reviewer says the architecture is wrong because the mapping is not one team to one deployable. How do you respond?
Answer lens: A strong answer should focus on clarity of primary ownership rather than arithmetic purity.
Review question: Why is there no universal ideal microservice size?
Scenario prompt: A service with strong cohesion but twelve endpoints is labeled “too big.” Which evidence should matter more than that raw count?
Answer lens: A strong answer should bring in cohesion, coupling, ownership, workflow fit, and operational cost.
Review question: What problem is each major pattern trying to solve: capability-oriented, experience-aligned, workflow-centered, and modular-monolith-first?
Scenario prompt: A team proposes one service per screen while calling it experience-aligned architecture. What is the strongest correction?
Answer lens: A strong answer should distinguish integration layers from true domain ownership.
Review question: How can you tell whether a system is a distributed monolith rather than just a busy distributed system?
Scenario prompt: A team improves tracing and deployment automation but still has lockstep releases and shared schema access. Has it solved the boundary problem?
Answer lens: A strong answer should separate tooling maturity from autonomy maturity.
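One evidence-based way to separate tooling maturity from autonomy maturity is to measure how often two services ship in the same release window. A sketch with made-up deploy data and hypothetical service names:

```python
# Heuristic sketch: lockstep releases show up as service pairs that
# almost always deploy in the same change windows. Data is hypothetical.

from collections import defaultdict

DEPLOYS = [  # (release_window, service)
    ("2024-01", "orders"), ("2024-01", "billing"),
    ("2024-02", "orders"), ("2024-02", "billing"),
    ("2024-03", "orders"), ("2024-03", "billing"), ("2024-03", "search"),
]


def co_deploy_rate(a: str, b: str, deploys) -> float:
    """Of the windows in which service a shipped, how often did b ship too?"""
    windows = defaultdict(set)
    for window, service in deploys:
        windows[window].add(service)
    with_a = [services for services in windows.values() if a in services]
    return sum(b in services for services in with_a) / len(with_a)


print(co_deploy_rate("orders", "billing", DEPLOYS))  # 1.0 -> lockstep signal
print(co_deploy_rate("orders", "search", DEPLOYS))   # occasional overlap
```

Better tracing and deploy automation will not move this number; only real boundary changes do, which is why it is a useful probe for the distributed-monolith question.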
Review question: What makes an extraction candidate strong early in a migration?
Scenario prompt: The team wants to extract the most critical and most transaction-heavy module first because success there would be “most impressive.” What sequencing argument would you make instead?
Answer lens: A strong answer should weigh learning value, rollback risk, observability, and transaction entanglement.
Review question: Why is a boundary operationally incomplete if it has no contract safety, trace continuity, or runbook discipline?
Scenario prompt: A service launches with good unit tests but no contract tests, no trace propagation, and only generic infrastructure alerts. What kind of confidence gap exists?
Answer lens: A strong answer should connect deployment risk, diagnosis risk, and operating readiness.
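The contract-test gap in particular is cheap to illustrate. The following is a minimal consumer-driven contract check in the spirit of tools like Pact, not any real tool's API; the field names are hypothetical:

```python
# Minimal consumer-driven contract sketch: the consumer declares the
# fields and types it relies on, and the provider's current response
# shape is checked against that declaration. Names are hypothetical.

CONSUMER_CONTRACT = {"order_id": int, "status": str}


def provider_response() -> dict:
    # Stand-in for the provider's real endpoint output.
    return {"order_id": 42, "status": "shipped", "internal_flag": True}


def satisfies(contract: dict, response: dict) -> list:
    """Return a list of violations; an empty list means the contract holds."""
    problems = []
    for field, ftype in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems


print(satisfies(CONSUMER_CONTRACT, provider_response()))  # [] -> safe to ship
```

Unit tests prove the service agrees with itself; checks like this, plus trace propagation and runbooks, are what give confidence at the boundary, which is the gap the scenario describes.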
Review question: Why are service identity and least privilege part of boundary design rather than only part of platform security?
Scenario prompt: All internal services share one broad credential because setup is simpler. What problems does this create beyond secret rotation?
Answer lens: A strong answer should include blast radius, auditability, and the loss of meaningful authorization at the edge.
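The authorization loss is easy to show: an edge check only means something when callers have distinct identities. A sketch with hypothetical service names and scope strings:

```python
# Least-privilege sketch: per-service identities carry explicit scopes,
# so the edge can make a real authorization decision. With one shared
# broad credential, every caller would pass every check, and the blast
# radius of a leak is the whole system. All names are hypothetical.

SERVICE_SCOPES = {
    "checkout": {"orders:write", "payments:charge"},
    "reporting": {"orders:read"},
}


def authorize(service: str, required_scope: str) -> bool:
    """Grant only the scopes explicitly assigned to this identity."""
    return required_scope in SERVICE_SCOPES.get(service, set())


print(authorize("reporting", "orders:read"))      # True: within its grant
print(authorize("reporting", "payments:charge"))  # False: blast radius contained
```

Distinct identities also make audit logs attributable: “reporting attempted payments:charge” is a meaningful alert, while the same event under a shared credential says nothing about who acted.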
Review question: What makes a reference architecture useful instead of misleading?
Scenario prompt: A candidate boundary looks good on capability fit and scaling need but weak on data ownership, workflow independence, and operational readiness. Should the team still extract it immediately?
Answer lens: A strong answer should use the multi-factor review framework rather than a single favorite criterion.
Use the following prompts when you want a harder synthesis exercise.
Scenario prompt: A company of eight engineers has twenty services, weak observability, several shared schemas, and coordinated releases for most features. Is the main problem technology, organization, or boundary design?
Answer lens: A strong answer should show how all three interact, while still naming weak decomposition as the architectural center of the problem.
Scenario prompt: A multi-tenant SaaS platform wants to split a highly sensitive billing capability into several services while it still lacks strong service identity, tenant-aware observability, and clear operational ownership. What should happen first?
Answer lens: A strong answer should prioritize stronger control and readiness foundations before more boundary expansion.
Scenario prompt: A platform team says every problem in the architecture can be solved by more tooling. Which kinds of problems should you test to see whether the issue is actually boundary quality instead?
Answer lens: A strong answer should look for workflow coupling, ownership ambiguity, shared truths, and weak contract design rather than only missing infrastructure.
These prompts work best when the team has to answer in evidence-based language: concrete signals about coupling, ownership, data, workflow, and operational cost rather than preference or taste. If a discussion cannot answer these prompts cleanly, the boundary usually needs more work, even if the diagram still looks tidy.