Learn how to turn threat modeling into a practical Clojure security workflow that drives tests, dependency review, abuse-case checks, and design updates over time.
Threat modeling: A structured review of assets, data flows, trust boundaries, attacker goals, and mitigations before the failure shows up in production.
Threat modeling is not a replacement for security testing, and security testing is not a substitute for design review. In a strong engineering process, threat modeling tells you what kinds of failure matter, and testing tells you whether the controls actually hold.
For Clojure teams, this fits naturally with explicit data flow and small composable boundaries. The system is often already expressed as request maps, messages, handlers, and transformation steps. That makes it easier to ask the right security questions:
Do not begin with a list of trendy attacks. Begin with:

- the assets worth protecting
- where data enters and leaves the system
- the trust boundaries between components and tenants
- what an attacker would plausibly want
Then map how data and commands move through the system. A plain request handler, background consumer, or queue-driven worker can all become attack surfaces when the boundary assumptions are vague.
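Those boundary assumptions can be written into the data itself. The sketch below (all names hypothetical) marks which parts of a request map are attacker-controlled and which are derived server-side, so a review can ask whether any trusted decision reads from the wrong side of the boundary:

```clojure
;; Hypothetical request map. Keys under :params, :headers, and :body cross
;; a trust boundary (attacker-controlled); :identity is derived server-side
;; from a verified session and is trusted.
(def example-request
  {:params   {:project-id "42"}                   ; attacker-controlled
   :headers  {"x-forwarded-for" "203.0.113.9"}    ; attacker-controlled
   :identity {:user/id 7                          ; server-derived, trusted
              :tenant/id :tenant-a}})

(defn attacker-controlled?
  "True when a key path into the request map starts in attacker territory."
  [path]
  (contains? #{:params :headers :body} (first path)))
```

A predicate like this is trivial, but it turns a vague boundary question into something a code review or a test can check mechanically.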
The diagram below shows the basic loop. The important thing to notice is that testing and implementation feed back into the model instead of living in a separate security document.
```mermaid
flowchart TD
    Change["New feature or architecture change"] --> Model["Model assets, data flows, and trust boundaries"]
    Model --> Threats["Enumerate threats with STRIDE or a similar lens"]
    Threats --> Controls["Choose mitigations and the tests that should prove them"]
    Controls --> Build["Implement code, configuration, monitoring, and reviews"]
    Build --> Verify["Run scans, targeted tests, and attack-focused review"]
    Verify --> Update["Update the model with new findings and residual risk"]
    Update --> Model
```
STRIDE remains useful because it forces breadth:

- Spoofing of identity
- Tampering with data
- Repudiation of actions
- Information disclosure
- Denial of service
- Elevation of privilege
You do not need a ceremony-heavy template to benefit from this. A review doc or threat-model board that walks a real request or workflow through those questions is often enough.
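A lightweight threat-model entry can be plain data. The sketch below (names and risk ratings hypothetical) walks one request flow through the STRIDE prompts and records a planned control for each, which makes it easy to pull out the entries that should drive tests first:

```clojure
;; Hypothetical threat-model entry for a single request flow.
;; Each STRIDE prompt gets a risk rating and, where it matters, a control.
(def show-project-threats
  {:flow "GET /projects/:id"
   :stride
   {:spoofing               {:risk :low  :control "session auth middleware"}
    :tampering              {:risk :low  :control "ids validated against tenant"}
    :repudiation            {:risk :med  :control "audit log on project reads"}
    :information-disclosure {:risk :high :control "tenant check before fetch"}
    :denial-of-service      {:risk :med  :control "rate limiting at the edge"}
    :elevation-of-privilege {:risk :high :control "role check in handler"}}})

(defn high-risk
  "STRIDE categories in an entry that are rated :high."
  [entry]
  (for [[category {:keys [risk]}] (:stride entry)
        :when (= risk :high)]
    category))
```

Because the model is just a map, it can live next to the code it describes and be diffed in the same pull request that changes the flow.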
Once the model identifies meaningful risks, testing becomes sharper: each threat maps to a targeted check, such as an authorization test for cross-tenant access or an input-handling test at an untrusted boundary. This is much stronger than generic “run the scanner and hope.”
Useful layers include code-level controls, configuration hardening, monitoring, and attack-focused review.
OWASP Threat Dragon is a practical tool for modeling, but the tool is secondary. The main gain comes from disciplined reasoning about how a real attacker would cross the boundaries your system depends on.
Security tests do not always need exotic tooling. A plain clojure.test assertion around an abuse case can catch a real authorization gap:
```clojure
(ns myapp.project-access-test
  (:require [clojure.test :refer [deftest is]]
            ;; show-project is the handler under test
            [myapp.projects :refer [show-project]]))

(deftest tenant-user-cannot-read-foreign-project
  (let [request {:identity {:user/id 7
                            :tenant/id :tenant-a
                            :roles #{:user}}}
        project {:project/id 42
                 :tenant/id :tenant-b
                 :project/member-ids #{9 10}}]
    (is (= {:status 403
            :body {:error :forbidden}}
           (show-project request project)))))
```
This kind of test becomes much more valuable when it is derived from an explicit threat such as cross-tenant data exposure.
Threat models go stale quickly when teams add new endpoints, integrations, background jobs, and dependencies without revisiting the model.
The mistake is treating the threat model as an annual compliance artifact instead of a living design review aid.
If the model does not influence code review, tests, or monitoring, it is probably paperwork.
Scanners help, but they do not understand your tenant boundaries, privilege rules, or business abuse cases.
Threat modeling should focus on misuse, not just on the intended workflow.
The strongest results come when threat review happens during design and evolves with implementation.
Start with assets and boundaries, use a simple prompt set such as STRIDE, and turn each meaningful threat into one or more specific controls and tests. Then update the model when the architecture changes or when an incident teaches you something the design review missed. In Clojure, the win is not a giant security framework. It is explicit reasoning about who controls the data, where trust changes, and how to prove the critical controls still hold.