core.async and Channels: Mastering Asynchronous Communication in Clojure

Learn how channels, go blocks, buffers, alts!, and blocking boundaries really work in core.async, including the practical difference between parking and blocking operations.

core.async: Clojure’s channel-oriented concurrency library. It models communication explicitly, so parts of a system hand values to each other through channels instead of sharing mutable state directly.

core.async draws on CSP (Communicating Sequential Processes), but the part that matters in everyday work is simpler: channels make handoff explicit, and go blocks let you write parking channel code in direct style.

That makes core.async a strong fit for pipelines, fan-in and fan-out flows, request brokers, timeouts, backpressure boundaries, and systems where explicit handoff is clearer than shared mutable state.

Channels, Parking, and Blocking

Channels are queues with coordination semantics. A put or take may proceed immediately, may park, or may block, depending on which API you use and whether a buffer is present.

The most important rule in core.async is this:

  • <! and >! are parking operations, for use inside go blocks
  • <!! and >!! are blocking operations, for use on ordinary threads
(require '[clojure.core.async :as async
           :refer [<! >! <!! >!! go chan]])

(def ch (chan))

(go
  (>! ch {:event :started}))

(println (<!! ch))

Parking and blocking are not the same thing. A go block is for channel parking operations, not arbitrary blocking I/O. Recent core.async releases can use virtual threads internally on Java 21+, but the authoring rule is unchanged: channel ops are fine in go, general blocking work still belongs elsewhere.

Buffers Change System Behavior

An unbuffered channel makes producer and consumer rendezvous directly. A buffered channel allows limited decoupling. Dropping and sliding buffers choose what to sacrifice when the consumer falls behind.

(def jobs      (chan 32))
(def telemetry (chan (async/dropping-buffer 100)))
(def latest    (chan (async/sliding-buffer 1)))

Choosing a buffer is an architectural decision:

  • unbuffered channels maximize coordination
  • bounded buffers absorb short bursts
  • dropping buffers preserve throughput by shedding new work
  • sliding buffers preserve freshness by keeping the newest work

If you do not decide your overload policy explicitly, the system will decide it for you through latency, memory growth, or timeouts.
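As a quick illustration of those tradeoffs, the following sketch shows what dropping and sliding buffers actually keep under overload (channel names are illustrative):

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!!]])

;; Sliding buffer of 1: puts never block; older values are evicted,
;; so the consumer always sees the newest value.
(def newest (chan (async/sliding-buffer 1)))
(dotimes [i 5] (>!! newest i))
(println (<!! newest))                     ; prints 4 — the most recent put

;; Dropping buffer of 2: puts never block; once full, NEW values are dropped.
(def metrics (chan (async/dropping-buffer 2)))
(dotimes [i 5] (>!! metrics i))
(println [(<!! metrics) (<!! metrics)])    ; prints [0 1] — the earliest puts
```

Note the asymmetry: dropping sacrifices the newest work, sliding sacrifices the oldest.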

go, thread, and Newer io-thread Contexts

Use go for channel-oriented orchestration. Use a real thread context for blocking work.

(def requests (chan))
(def results  (chan))

(go
  (loop []
    (when-some [req (<! requests)]
      ;; blocking I/O runs on a real thread, not on the go dispatch pool
      (async/thread
        (>!! results (slurp (:url req))))
      (recur))))

Newer core.async versions also expose io-thread for blocking I/O-oriented work. The design principle is the same either way: do not hide slow blocking work inside go code and expect the scheduler to save you.
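A minimal sketch of that division of labor, assuming a hypothetical fetch-page helper: async/thread runs its body on a real thread and returns a channel that yields the body's result, so go code can park on blocking work without blocking anything itself.

```clojure
(require '[clojure.core.async :as async :refer [go <! <!!]])

(defn fetch-page
  "Blocking fetch wrapped so callers only ever see a channel.
   (Helper name is illustrative, not part of core.async.)"
  [url]
  (async/thread
    (slurp url)))   ; blocking I/O stays off the go dispatch pool

;; A go block can park on the result channel:
(go
  (let [body (<! (fetch-page "https://example.com"))]
    (println "fetched" (count body) "chars")))
```

The same shape works with io-thread on versions that provide it: blocking work produces a channel, and everything downstream composes with parking takes.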

Coordinating Multiple Inputs

alts! is the main tool when a consumer can accept input from several channels or a timeout.

(go
  (let [[value source]
        (async/alts! [requests (async/timeout 250)])]
    (cond
      (= source requests) (println "Received" value)
      :else               (println "Timed out waiting for work"))))

That pattern is the foundation for timeouts, cancellation races, fallback data sources, and priority boundaries.
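For example, a fallback race between two sources can be sketched like this (cache-ch and db-ch are hypothetical names, not part of core.async):

```clojure
(require '[clojure.core.async :as async :refer [chan go alts! >!! <!!]])

(def cache-ch (chan))
(def db-ch    (chan))

;; Ask both sources, keep whichever answers first.
(def answer
  (go
    (let [[value source] (alts! [cache-ch db-ch])]
      {:value value
       :from  (if (= source cache-ch) :cache :db)})))

(>!! cache-ch {:user 42})   ; the cache answers first here
;; (<!! answer) ;=> {:value {:user 42}, :from :cache}
```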

Common Patterns

Three patterns show up constantly in real core.async systems:

Worker Pipelines

Use channels between stages when each stage has clear ownership and throughput can be controlled.

Timeout and Cancellation Boundaries

Use alts! or alts!! against a timeout channel so waiting does not become unbounded.
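One way to package that, with an illustrative helper name (take-or-timeout is not part of core.async):

```clojure
(require '[clojure.core.async :as async :refer [chan >!! alts!! timeout]])

(defn take-or-timeout
  "Wait up to ms for a value from ch; return ::timed-out otherwise."
  [ch ms]
  (let [[v source] (alts!! [ch (timeout ms)])]
    (if (= source ch)
      v
      ::timed-out)))

(def work (chan 1))
(>!! work :job-1)
(take-or-timeout work 100)   ;=> :job-1
(take-or-timeout work 100)   ;=> ::timed-out, after ~100 ms
```

Inside a go block the same shape uses alts! instead of alts!!.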

Backpressure by Design

Use bounded channels and explicit worker counts so overload is visible. Unbounded queues are not backpressure; they are deferred failure.
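A sketch using async/pipeline, which runs a fixed number of workers over bounded channels; when out backs up, the workers stall, in fills, and producers feel the pressure instead of memory absorbing it:

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!!]])

(def in  (chan 16))   ; bounded: producers block when workers fall behind
(def out (chan 16))   ; bounded: workers stall when consumers fall behind

;; 4 parallel workers applying a transducer to each value.
;; (map inc) stands in for real per-item work.
(async/pipeline 4 out (map inc) in)
```

The worker count and buffer sizes are the explicit knobs; overload shows up as blocked puts rather than silent queue growth.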

Common Mistakes

The most expensive core.async mistakes are predictable:

  • blocking I/O inside go
  • unbounded buffering by accident
  • no close protocol for channels
  • no timeout policy around external dependencies
  • using channels where a simple pure function or atom would be clearer

core.async is powerful, but it is not the default answer to all concurrency problems. Use it when explicit asynchronous handoff improves the design.
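The close protocol mentioned above can be as simple as this sketch: the producer calls close! when it is done, and consumers treat a nil take as the shutdown signal.

```clojure
(require '[clojure.core.async :as async :refer [chan go <! >!! <!! close!]])

(def events (chan 8))

;; A take from a closed, drained channel returns nil,
;; so if-some doubles as the shutdown check.
(def done
  (go
    (loop [n 0]
      (if-some [e (<! events)]
        (recur (inc n))       ; process e, keep looping
        n))))                 ; closed and drained: return the count

(>!! events :a)
(>!! events :b)
(close! events)
;; (<!! done) ;=> 2
```

Buffered values put before close! are still delivered; close only stops new puts.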

Practical Heuristics

If you can name the messages, the buffers, the timeout policy, and the channel ownership, core.async is probably a good fit. If everything feels like anonymous values bouncing between random go blocks, the design is still too implicit.

Revised on Thursday, April 23, 2026