Blocking Operations in Asynchronous Code

Why blocking I/O inside go blocks and other async paths quietly destroys throughput in Clojure systems.

Blocking operations in asynchronous code are a Clojure anti-pattern because they smuggle thread-holding work into code that is supposed to stay responsive. The most common version is putting file I/O, HTTP calls, database queries, or Thread/sleep inside core.async go blocks and expecting the async design to save you anyway.

It usually does not. The result is reduced throughput, scheduler starvation, and systems that look asynchronous on paper but behave like a pile of stalled worker threads in production.

The First Distinction: Parking Is Not Blocking

In core.async, go blocks are designed for parking operations such as <! and >!, not for arbitrary blocking work.

That distinction is essential:

  • a parked go block yields control efficiently
  • a blocking call ties up the underlying thread

Weak:

(go
  (Thread/sleep 1000)
  (>! out :done))

This looks harmless, but Thread/sleep blocks one of the small, fixed-size pool of threads (eight by default) that core.async uses to run all go blocks. A handful of these and every go block in the system stalls.

Better:

(thread
  (Thread/sleep 1000)
  (>!! out :done))

Or better still, avoid Thread/sleep as a coordination strategy unless the design truly calls for it.
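When a delay genuinely belongs in the design, core.async's timeout function returns a channel that closes after the given interval, so a go block can park on it instead of sleeping. A minimal sketch:

```clojure
(require '[clojure.core.async :refer [go <! >! timeout chan <!!]])

(def out (chan))

;; Parks for ~1 second without occupying a go-pool thread:
(go
  (<! (timeout 1000))   ; parking take; the pool thread is freed while waiting
  (>! out :done))

;; On an ordinary thread, <!! blocks until the value arrives:
(def result (<!! out))  ; => :done
```

The difference is invisible in the source but decisive at runtime: the parked go block consumes no thread while it waits.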

The anti-pattern is forgetting that go is not a general-purpose “make this asynchronous” wrapper.

Anti-Pattern: Blocking I/O Inside go

This is the most common failure mode:

(go
  (let [body (slurp "/tmp/orders.edn")]
    (>! out body)))

or:

(go
  (let [response @(http/get "https://api.example.com/orders")]
    (>! out response)))

These examples look compact, but the actual work is blocking. The go block does not magically turn it into non-blocking I/O.

The right alternatives are usually:

  • use a genuinely non-blocking client
  • run blocking work in thread, future, or a dedicated executor
  • keep go for channel coordination rather than the blocking work itself
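One way to apply these rules: push the blocking call into thread, which runs it on its own (cached) thread pool and returns a channel, and let go do only the parking coordination. A sketch using a temporary file in place of the original path:

```clojure
(require '[clojure.core.async :refer [go thread <! <!!]]
         '[clojure.edn :as edn])

;; Stand-in for /tmp/orders.edn so the example is self-contained:
(def path (doto (java.io.File/createTempFile "orders" ".edn")
            (spit "{:order 42}")))

;; thread runs the blocking I/O off the go pool and returns a channel:
(def result-ch
  (go
    (let [body (<! (thread (slurp path)))]  ; parking take; no pool thread held
      (edn/read-string body))))

(def parsed (<!! result-ch))  ; => {:order 42}
```

The go block still reads top to bottom, but the slurp now occupies a thread that exists for exactly that purpose.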

Anti-Pattern: Mixing Async Coordination with Thread-Holding Work

A healthy async architecture separates:

  • coordination
  • transport or I/O
  • CPU-bound transformation

When those all collapse into the same async wrapper, it becomes hard to reason about which parts are cheap and which parts consume scarce resources.

For example:

(go
  (let [raw   (slurp path)
        data  (parse-edn raw)
        saved (jdbc/execute! db ...)]
    (>! out saved)))

Now the go block hides:

  • file I/O
  • parsing
  • database I/O
  • channel delivery

That is too much responsibility for one async surface. If something stalls, the architecture gives you no clean boundary for reasoning about why.
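One way to restore that boundary is to give each blocking stage its own thread and keep go purely as the coordinator. A sketch with hypothetical read-file! and save! stand-ins for the real file and database steps:

```clojure
(require '[clojure.core.async :refer [go thread <! >! chan <!!]]
         '[clojure.edn :as edn])

;; Stand-in for the real input file:
(def path (doto (java.io.File/createTempFile "orders" ".edn")
            (spit "{:order 7}")))

;; Hypothetical stand-ins for the blocking stages:
(defn read-file! [p]    (slurp p))                 ; file I/O, blocking
(defn save!      [data] (assoc data :saved true))  ; stands in for jdbc/execute!

(def out (chan))

(go
  ;; go only coordinates; each blocking stage runs on its own thread:
  (let [raw   (<! (thread (read-file! path)))  ; blocking read, off the go pool
        data  (edn/read-string raw)            ; cheap CPU work, fine inside go
        saved (<! (thread (save! data)))]      ; blocking "database" write
    (>! out saved)))
```

Now a stall in the file read or the write is visible as its own stage, instead of an opaque pause inside one monolithic go block.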

Anti-Pattern: Assuming “Async” Automatically Means Scalable

A system can use channels, callbacks, or promises and still choke because the expensive part is blocking somewhere behind the scenes.

Warning signs:

  • channel-based code with low throughput under load
  • backlogs that grow even though producers and consumers look balanced
  • timeouts appearing far away from the real I/O bottleneck
  • thread pools saturating while go usage looks “correct”

The anti-pattern is mistaking async structure for actual non-blocking execution.

Anti-Pattern: Backpressure Without Capacity Planning

Blocking work inside async code is especially harmful when queueing already exists. The system now pays twice:

  • queued work waits longer
  • the threads that should drain the queue are themselves stuck on blocking tasks

This is why channel pipelines and worker pools need explicit boundaries:

  • which stage is blocking?
  • which stage is CPU-bound?
  • how much parallelism is safe?
  • where does backpressure appear first?

Without those answers, async code can become a slower, harder-to-debug version of synchronous code.
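core.async ships a primitive for exactly this situation: pipeline-blocking runs a transducer over a channel with an explicit, bounded degree of parallelism on dedicated threads, answering the capacity questions above in code. A sketch, with slow-lookup as a hypothetical stand-in for the real blocking call:

```clojure
(require '[clojure.core.async :as async
           :refer [chan close! >!! <!! pipeline-blocking]])

(defn slow-lookup [id]          ; hypothetical blocking call
  (Thread/sleep 50)
  {:id id :status :ok})

(def in  (chan 8))
(def out (chan 8))

;; 4 is the explicit parallelism bound; the blocking work runs on
;; dedicated threads, never on the go dispatch pool:
(pipeline-blocking 4 out (map slow-lookup) in)

(dotimes [i 8] (>!! in i))
(close! in)

;; pipeline-blocking preserves input order:
(def results (<!! (async/into [] out)))
```

With the bound stated explicitly, backpressure appears at the in channel when the four workers are busy, instead of silently starving unrelated go blocks.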

Anti-Pattern: Using go for Anything “That Feels Concurrent”

Sometimes go is used because it feels like the idiomatic concurrency wrapper, even when the job is just a background blocking task. That leads to code that is superficially idiomatic and operationally wrong.

The better mental model is:

  • use go for channel-oriented workflows built around parking operations
  • use thread, dedicated executors, or other thread-based tools for blocking work
  • use truly non-blocking libraries where the ecosystem supports them

A Better Async Boundary Model

    flowchart TD
	    A["Incoming work"] --> B["Async coordination / channel logic"]
	    B --> C{"Blocking I/O needed?"}
	    C -->|Yes| D["Dedicated thread or non-blocking client"]
	    C -->|No| E["Stay in go-based coordination path"]
	    D --> F["Return result to async pipeline"]
	    E --> F

This makes the resource model explicit instead of pretending every async-looking block has the same execution semantics.

What to Do Instead

  • keep go blocks focused on parking channel operations
  • move blocking calls to thread, future, or explicit executors
  • prefer non-blocking clients when the library support is mature and the cost is justified
  • design queue and channel boundaries with backpressure in mind
  • profile throughput and thread usage instead of assuming the async structure is healthy

Common Mistakes

  • calling blocking I/O directly from a go block
  • wrapping legacy blocking code in async syntax and assuming the problem is solved
  • mixing file, network, database, and coordination logic in one async block
  • using Thread/sleep in coordination paths that should stay responsive
  • ignoring thread-pool pressure because the code “looks asynchronous”

Key Takeaways

  • go is for parking operations, not arbitrary blocking work.
  • Blocking I/O inside async coordination paths quietly destroys throughput.
  • Async code needs explicit execution-boundary design, not just async-looking syntax.
  • Backpressure and thread usage matter as much as correctness.
  • If the work blocks, treat it honestly as blocking and route it accordingly.

Revised on Thursday, April 23, 2026