
Chapter 6: Concurrency Patterns

Now that you understand goroutines and channels, let’s explore powerful patterns for building concurrent systems.

Concurrency patterns are reusable solutions to common problems in concurrent programming. They provide proven approaches for coordinating goroutines, managing resources, and building robust concurrent systems. These patterns emerged from real-world Go applications and represent best practices developed by the Go community.

This chapter covers essential concurrency patterns: worker pools for controlled parallelism, fan-out/fan-in for distributing and collecting work, pipelines for staged processing, context for cancellation, and semaphores for limiting concurrent access. Each pattern solves specific problems and has clear use cases. Understanding when and how to apply these patterns is key to writing effective concurrent Go code.

A worker pool limits concurrency by maintaining a fixed number of worker goroutines that process jobs from a shared queue. Instead of spawning a goroutine for every task (which could exhaust resources), you create a fixed pool and feed it work.

The pattern has three components:

  1. Jobs channel - tasks to be processed
  2. Workers - fixed number of goroutines consuming from the jobs channel
  3. Results channel (optional) - collected outputs from workers

Worker pools solve the problem of unbounded concurrency. Suppose you need to process 10,000 tasks. Spawning 10,000 goroutines might work, but it wastes resources and could overwhelm your system. A pool of 10 workers, each pulling tasks one at a time from a shared queue, is more efficient and predictable.

Resource Control: Limit the number of concurrent operations to match system capacity. Processing 1,000 database queries? Use a pool sized to your database connection limit.

Predictable Performance: Fixed concurrency means predictable resource usage. No surprises when load spikes - the pool buffers work without creating unbounded goroutines.

Graceful Shutdown: Close the jobs channel, and workers will finish current work and exit. No need to track individual goroutines.

Backpressure: When workers are busy, new jobs wait in the channel. This naturally throttles producers and prevents overwhelming the system.

Distribute work across a fixed number of workers:
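Here is one minimal sketch; the pool size of 3, the integer job type, and squaring as stand-in work are all illustrative choices:

```go
package main

import (
	"fmt"
	"sync"
)

// worker consumes jobs until the jobs channel is closed.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * j // stand-in for real work
	}
}

func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Fixed pool: 3 workers, no matter how many jobs arrive.
	var wg sync.WaitGroup
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	// Feed the pool, then close the channel to signal shutdown.
	for j := 1; j <= 9; j++ {
		jobs <- j
	}
	close(jobs)

	wg.Wait() // workers drain the channel and exit
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```

Closing `jobs` is the graceful-shutdown mechanism described above: each worker's `range` loop ends once the channel is drained, and `wg.Wait` confirms they have all exited.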

Fan-out and fan-in are complementary patterns for distributing and collecting work across multiple concurrent operations.

Fan-Out: Distribute work from a single source to multiple workers. Imagine a single channel feeding multiple goroutines - each goroutine receives different items from that channel. This parallelizes CPU-bound work or multiplies throughput for I/O-bound operations.

Fan-In: Collect results from multiple channels into a single channel. Multiple workers send their outputs to their own channels, and you merge those channels into one. This consolidates parallel work back into a sequential stream.

Together, fan-out/fan-in creates a classic parallel processing pattern: split work across workers (fan-out), process independently, then merge results (fan-in).

CPU-Bound Parallelism: Have an expensive computation? Fan out to multiple workers (typically one per CPU core) to utilize all available processing power.

I/O Throughput Multiplication: Making network requests? Fan out to many workers to make requests concurrently. Waiting time is parallelized.

Flexible Scaling: Easily adjust parallelism by changing the number of fan-out workers without modifying the rest of your code.

Load Balancing: When workers pull from a shared channel (fan-out), fast workers automatically get more work than slow workers. Natural load balancing.

Fan-out: Start multiple goroutines for the same task. Fan-in: Combine results from multiple channels into one.
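A sketch of both halves together, in the style of the Go blog's pipelines examples; `generator`, `square`, and `merge` are illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

// generator emits values on a channel that multiple workers can share.
func generator(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is one fan-out worker; each worker gets its own output channel.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

// merge fans in: it forwards values from many channels onto one,
// closing the output only after every input is drained.
func merge(chans ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, c := range chans {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(c)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := generator(1, 2, 3, 4, 5)

	// Fan out: two workers read from the same channel,
	// so each value is processed by exactly one of them.
	c1 := square(in)
	c2 := square(in)

	// Fan in: merge both result channels into one stream.
	for v := range merge(c1, c2) {
		fmt.Println(v)
	}
}
```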

The pipeline pattern chains multiple processing stages together, where each stage receives data from the previous stage, processes it, and sends it to the next stage. Each stage runs in its own goroutine, so all stages execute concurrently.

Think of an assembly line: each station performs one operation and passes the product to the next station. All stations work simultaneously on different items. In Go, each station is a function that reads from an input channel, processes data, and writes to an output channel.

Pipelines are composed of:

  1. Generator - creates data and sends it to the first stage
  2. Stages - transform data from input channel to output channel
  3. Consumer - receives final output

The power of pipelines is that each stage can work independently. While stage 3 processes item N, stage 2 processes item N+1, and stage 1 processes item N+2. This pipeline parallelism can significantly improve throughput.

Separation of Concerns: Each stage does one thing well. This makes code modular, testable, and easy to understand. Want to add a new transformation? Insert a new stage.

Concurrent Processing: All stages run simultaneously. While one stage is doing expensive work on item A, other stages process items B and C. Throughput is limited by the slowest stage, not by the sum of all stage times.

Composability: Build complex processing by composing simple stages. Each stage is independently useful and can be combined in different ways.

Backpressure: Slow stages naturally throttle fast stages through channel blocking. If stage 3 is slow, its input channel fills up, blocking stage 2, which blocks stage 1. The system self-regulates without explicit throttling.

Chain stages together, each processing data and passing it along:
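A minimal three-stage sketch; the stage functions (`generate`, `square`, `addTen`) are illustrative:

```go
package main

import "fmt"

// generate is the source stage: it emits values and closes its channel.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads from the previous stage and forwards transformed values.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

// addTen is a second transformation stage with the same shape.
func addTen(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n + 10
		}
	}()
	return out
}

func main() {
	// Stages compose like function calls: generate -> square -> addTen.
	for v := range addTen(square(generate(1, 2, 3))) {
		fmt.Println(v) // 11, 14, 19
	}
}
```

Because every stage has the same shape (receive-only channel in, receive-only channel out), a new stage can be inserted anywhere in the chain without touching the others.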

context.Context is Go’s standard way to pass cancellation signals, deadlines, and request-scoped values across API boundaries and between goroutines. It solves a critical problem: how do you tell a tree of goroutines to stop working?

Without context, cancelling concurrent operations is messy. You might use a done channel, but then you need to pass it everywhere. You might check a boolean flag, but that’s not goroutine-safe without locks. Context provides a standard, composable solution.

A Context forms a tree. When you cancel a parent context, all child contexts are automatically cancelled. This cascading cancellation is perfect for request-scoped operations: cancel the request, and all goroutines working on that request stop.

Graceful Cancellation: Stop long-running operations cleanly when they’re no longer needed. User closes their browser? Cancel the request context, and all backend work stops.

Timeout Control: Prevent operations from running forever. Wrap operations in context.WithTimeout to enforce deadlines. If work doesn’t finish in time, it’s automatically cancelled.

Request Propagation: In servers, contexts flow through the call stack carrying the request deadline and cancellation signal. Every function can check if the request was cancelled.

Resource Cleanup: When a context is cancelled, goroutines can clean up resources (close files, release locks, commit transactions) before exiting.

Use context.Context to cancel goroutines gracefully:
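A minimal sketch using `context.WithCancel`; the worker loop and timings are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// work loops until its context is cancelled.
func work(ctx context.Context, id int) {
	for {
		select {
		case <-ctx.Done():
			// ctx.Err() reports why we stopped: context.Canceled here.
			fmt.Printf("worker %d stopping: %v\n", id, ctx.Err())
			return
		case <-time.After(100 * time.Millisecond):
			fmt.Printf("worker %d working\n", id)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	for i := 1; i <= 3; i++ {
		go work(ctx, i)
	}

	time.Sleep(300 * time.Millisecond)
	cancel() // one call stops all three workers

	time.Sleep(100 * time.Millisecond) // let the stop messages print
}
```

Swapping `context.WithCancel` for `context.WithTimeout(context.Background(), time.Second)` gives the same workers an automatic deadline with no other changes.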

Pass request-scoped values through the call chain:
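A sketch of the idiomatic approach; the `requestID` key and value are illustrative. Note the unexported key type, which prevents collisions with values stored by other packages:

```go
package main

import (
	"context"
	"fmt"
)

// ctxKey is unexported so no other package can create colliding keys.
type ctxKey string

const requestIDKey ctxKey = "requestID"

// handle is deep in the call chain; it reads the value without
// it being threaded through every parameter list.
func handle(ctx context.Context) {
	if id, ok := ctx.Value(requestIDKey).(string); ok {
		fmt.Println("handling request", id)
	}
}

func main() {
	ctx := context.WithValue(context.Background(), requestIDKey, "req-42")
	handle(ctx)
}
```

Reserve context values for request-scoped metadata such as request IDs or auth tokens; required inputs should stay as ordinary function parameters.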

Limit concurrent operations using a buffered channel:
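A minimal sketch; the limit of 3 and the sleep as stand-in work are illustrative. The channel's capacity is the concurrency limit:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	sem := make(chan struct{}, 3) // capacity = max concurrent tasks

	var wg sync.WaitGroup
	for i := 1; i <= 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			sem <- struct{}{}        // acquire: blocks once 3 slots are taken
			defer func() { <-sem }() // release the slot on exit

			fmt.Printf("task %d running\n", id)
			time.Sleep(100 * time.Millisecond) // stand-in for real work
		}(i)
	}
	wg.Wait()
}
```

Unlike a worker pool, this still starts one goroutine per task; the semaphore only bounds how many run at the same time.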

To summarize, this chapter introduced six core patterns:

  1. Worker pools - fixed workers processing from a job queue
  2. Fan-out - multiple goroutines reading from one channel
  3. Fan-in - merge multiple channels into one
  4. Pipelines - chain stages of processing
  5. Context - carry cancellation signals and deadlines
  6. Semaphores - limit concurrent access with buffered channels

Exercise: Concurrent Web Scraper (difficulty: hard)

Build a concurrent URL fetcher that processes URLs from a channel, respects a max concurrency limit, and can be cancelled via context.



Next up: Chapter 7: Package Design & Modules