Chapter 5: Sync Primitives
While channels are Go’s primary concurrency tool, the sync package provides essential primitives for shared memory synchronization. Understanding when to use sync primitives versus channels is crucial for writing efficient concurrent Go code.
Go’s concurrency philosophy is “Don’t communicate by sharing memory; share memory by communicating.” Channels embody this principle perfectly. But sometimes you need to protect shared state that multiple goroutines access directly - that’s where sync primitives shine. They provide low-level synchronization mechanisms for coordinating access to shared memory.
The sync package includes several tools: WaitGroup for waiting on goroutine completion, Mutex for exclusive access, RWMutex for read-heavy scenarios, Once for one-time initialization, Pool for object reuse, and atomic operations for lock-free updates. Each has specific use cases where it outperforms channel-based solutions.
This chapter explores each sync primitive in depth, explaining not just how they work but when to use them. You’ll learn to recognize patterns where mutexes make more sense than channels, understand the performance implications of different synchronization strategies, and master the art of protecting shared state safely.
WaitGroup
Understanding WaitGroup
WaitGroup is one of the simplest yet most essential sync primitives. It solves a common problem: how do you wait for multiple goroutines to complete before proceeding? Without WaitGroup, you’d need channels or other coordination mechanisms, which adds complexity for this straightforward scenario.
A WaitGroup maintains an internal counter. You increment it with Add() before launching goroutines, each goroutine calls Done() when finished (decrementing the counter), and Wait() blocks until the counter reaches zero. This pattern is perfect for parallel work where you need to wait for all tasks to complete before continuing.
The most common mistake with WaitGroups is passing them by value instead of by pointer. WaitGroups contain internal state that must be shared - copying a WaitGroup creates a separate counter, breaking the synchronization. Always use *sync.WaitGroup.
WaitGroup waits for a collection of goroutines to finish:
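A minimal sketch of the Add/Done/Wait pattern; the `sumParallel` helper is a hypothetical example, not a standard library function:

```go
package main

import (
	"fmt"
	"sync"
)

// sumParallel squares each number in its own goroutine, then waits for
// all of them before summing the results.
func sumParallel(nums []int) int {
	var wg sync.WaitGroup
	results := make([]int, len(nums))
	for i, n := range nums {
		wg.Add(1) // increment the counter BEFORE launching the goroutine
		go func(i, n int) {
			defer wg.Done() // decrement when this goroutine finishes
			results[i] = n * n
		}(i, n)
	}
	wg.Wait() // block until the counter reaches zero
	total := 0
	for _, r := range results {
		total += r
	}
	return total
}

func main() {
	fmt.Println(sumParallel([]int{1, 2, 3})) // 1 + 4 + 9 = 14
}
```

Note that `Add` is called in the launching goroutine, not inside the new one — otherwise `Wait` could return before the counter is ever incremented.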
Mutex
Understanding Mutex
Mutex (mutual exclusion) is the fundamental building block for protecting shared data from concurrent access. When multiple goroutines need to read and modify the same variable, you have a race condition - simultaneous access can corrupt data. A mutex ensures only one goroutine accesses the protected data at a time.
Think of a mutex as a lock on a resource. When a goroutine calls Lock(), it acquires exclusive access. If another goroutine tries to lock the same mutex, it blocks (waits) until the first goroutine calls Unlock(). This serializes access, preventing race conditions but at the cost of concurrency - only one goroutine proceeds at a time.
The critical section (code between Lock and Unlock) should be as small as possible. Long critical sections reduce concurrency and can cause bottlenecks. Use defer mu.Unlock() immediately after locking to ensure the mutex is always released, even if the function panics. This pattern is so common it’s nearly universal in Go code.
Mutex provides mutual exclusion for protecting shared data:
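A sketch of the standard pattern — embedding a mutex next to the data it guards. The `Counter` type is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is safe for concurrent use: every access to count
// goes through the mutex.
type Counter struct {
	mu    sync.Mutex
	count int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock() // released even if the critical section panics
	c.count++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // always 100 — no lost updates
}
```

Without the mutex, the `c.count++` read-modify-write would race and the final value would be unpredictable (run with `go run -race` to see the detector fire).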
RWMutex
Section titled “RWMutex”Why RWMutex Matters
RWMutex (read-write mutex) is an optimization for read-heavy workloads. A regular Mutex treats all access equally - whether you’re reading or writing, only one goroutine proceeds. But reads are inherently safe to do concurrently - multiple goroutines can read the same data simultaneously without corruption. Only writes need exclusivity.
RWMutex exploits this by offering two types of locks: RLock (read lock) and Lock (write lock). Multiple goroutines can hold read locks simultaneously, enabling concurrent reads. But when a goroutine needs a write lock, it waits for all readers to finish, acquires exclusive access, and blocks new readers until writing completes.
This pattern shines for caches, configuration stores, and any scenario with frequent reads and rare writes. The performance gain can be substantial - instead of serializing all access, you serialize only writes while allowing reads to scale across CPU cores. However, if writes are frequent, RWMutex overhead can exceed a regular Mutex. Profile before optimizing.
RWMutex allows multiple readers or a single writer:
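A sketch of the cache use case mentioned above; `Cache` is a hypothetical type for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a read-heavy string store guarded by an RWMutex.
type Cache struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]string)}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock() // many readers may hold this lock simultaneously
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock() // writer waits for all readers, then gets exclusive access
	defer c.mu.Unlock()
	c.data[key] = value
}

func main() {
	c := NewCache()
	c.Set("lang", "Go")
	v, ok := c.Get("lang")
	fmt.Println(v, ok) // Go true
}
```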
sync.Once
Section titled “sync.Once”Understanding sync.Once
sync.Once solves the initialization problem: you need to run some setup code exactly once, but many goroutines might trigger it concurrently. The naive approach (checking a boolean flag) has race conditions. sync.Once handles this correctly and efficiently.
Once.Do() takes a function and guarantees it executes exactly once, even if called from multiple goroutines simultaneously. The first caller runs the function while others wait. After completion, all future calls return immediately without blocking. This makes it perfect for lazy initialization of singletons, expensive setup operations, and one-time configuration.
A common pattern pairs sync.Once with package-level variables for singleton resources like database connections or configuration. This ensures the resource is initialized on first use (lazy) but only once, no matter how many goroutines try to access it concurrently. The alternative - initializing in init() - happens at program start whether you need it or not.
Execute code exactly once, even with concurrent calls:
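A sketch of lazy singleton initialization; `loadConfig` and the `initCount` instrumentation are hypothetical, added only to make the once-only guarantee visible:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	once      sync.Once
	initCount int // incremented inside Do, so exactly once
	config    map[string]string
)

// loadConfig initializes config on first call; all later calls
// return the already-built map without re-running the setup.
func loadConfig() map[string]string {
	once.Do(func() {
		initCount++
		config = map[string]string{"env": "prod"}
	})
	return config
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = loadConfig() // concurrent triggers, one initialization
		}()
	}
	wg.Wait()
	fmt.Println(loadConfig()["env"], initCount) // prod 1
}
```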
sync.Pool
Section titled “sync.Pool”Understanding sync.Pool
sync.Pool is a specialized tool for reducing garbage collection pressure by reusing temporary objects. When you allocate many short-lived objects (like buffers, temporary structs), the garbage collector works overtime. A pool lets you recycle these objects instead of allocating new ones each time.
Pool.Get() retrieves an object from the pool (or creates a new one via the New function if the pool is empty). After using it, Pool.Put() returns it for reuse. The pool is safe for concurrent access and automatically clears itself during garbage collection - objects in the pool are not guaranteed to persist.
Use pools for frequently allocated temporary objects where the allocation cost is measurable. HTTP server buffers, encoders/decoders, and temporary work slices are common candidates. Don’t use pools for objects that need to persist or have complex state - pools are for temporary, interchangeable objects only. Always reset object state before putting it back.
Pool provides a set of temporary objects for reuse:
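A sketch of the classic buffer-pool pattern; the `greet` function is a stand-in for any hot path that needs a scratch buffer:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool recycles bytes.Buffers across calls to reduce allocations.
var bufPool = sync.Pool{
	// New is called only when the pool has no object to hand out.
	New: func() any { return new(bytes.Buffer) },
}

func greet(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()             // always reset state — the buffer may hold old data
	defer bufPool.Put(buf)  // return it for reuse when we're done
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(greet("gopher")) // hello, gopher
}
```

The `Reset` call is the easy part to forget: a recycled buffer still contains whatever the previous user wrote into it.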
Atomic Operations
Section titled “Atomic Operations”Understanding Atomic Operations
Atomic operations provide lock-free synchronization for simple values. When you just need to increment a counter, set a flag, or swap a value, using a mutex is overkill. Atomic operations perform these updates in a single indivisible CPU instruction, guaranteeing no other goroutine sees a partial state.
The sync/atomic package (and Go 1.19+ atomic types) supports integers, booleans, and pointers. Operations include Add, Load, Store, Swap, and CompareAndSwap. These are significantly faster than mutexes because they avoid the overhead of blocking, context switching, and OS-level synchronization. For simple counters in hot paths, atomics can be 10x faster than mutexes.
The key insight: atomics work for single-value operations only. If you need to update multiple related values together (like a counter and a timestamp), you need a mutex to ensure they change atomically as a unit. Atomics are perfect for simple shared state like request counters, flags, and statistics. For anything more complex, reach for mutexes or channels.
For simple counters and flags, use atomic operations instead of mutexes:
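A sketch using the Go 1.19+ typed atomics; `countHits` is a hypothetical helper that simulates many goroutines bumping a shared counter:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countHits launches n goroutines that each increment a shared
// counter atomically — no mutex, no lost updates.
func countHits(n int) int64 {
	var hits atomic.Int64 // zero value is ready to use
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits.Add(1) // single indivisible update
		}()
	}
	wg.Wait()
	return hits.Load()
}

func main() {
	fmt.Println(countHits(1000)) // always 1000
}
```

With a plain `int64` and `hits++` this would be a data race; with `atomic.Int64` every increment is applied in full.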
sync.Cond
Section titled “sync.Cond”Understanding sync.Cond
sync.Cond (condition variable) is the most specialized and least commonly used sync primitive. It coordinates goroutines waiting for a specific condition to become true. Unlike channels which pass data, Cond signals state changes - “something changed, check your condition again.”
The pattern: goroutines call Cond.Wait() to sleep until signaled. Wait() releases the associated mutex, sleeps, and re-acquires the mutex when woken. Other goroutines change state and call Cond.Signal() (wake one waiter) or Cond.Broadcast() (wake all waiters). Waiters then check if their condition is satisfied.
Cond is rarely needed because channels usually handle coordination better. Use Cond when many goroutines wait on the same condition and you want efficient broadcast semantics. The classic example is a work queue where workers sleep when empty and wake when items arrive. In modern Go code, channels or context cancellation often provide cleaner solutions.
Cond implements a condition variable for coordinating goroutines:
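A sketch of the classic work-queue example from above; the `queue` type is illustrative. Note the `for` loop around `Wait()` — a woken goroutine must re-check its condition, because another waiter may have consumed the item first:

```go
package main

import (
	"fmt"
	"sync"
)

// queue is a minimal blocking work queue built on sync.Cond.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []int
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu) // Cond is always paired with a Locker
	return q
}

func (q *queue) push(v int) {
	q.mu.Lock()
	q.items = append(q.items, v)
	q.mu.Unlock()
	q.cond.Signal() // wake one waiting consumer
}

func (q *queue) pop() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // releases mu while asleep, re-acquires on wake
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v
}

func main() {
	q := newQueue()
	done := make(chan int)
	go func() { done <- q.pop() }() // consumer blocks until an item arrives
	q.push(42)
	fmt.Println(<-done) // 42
}
```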
Mutex vs Channel: When to Use Which
Section titled “Mutex vs Channel: When to Use Which”Making the Right Choice
Choosing between mutexes and channels is a common design decision in Go. Both provide synchronization, but they solve different problems. The decision comes down to whether you’re protecting shared state or coordinating independent goroutines.
Use Mutexes when:
- Multiple goroutines need to access shared data structures (maps, slices, counters)
- You’re implementing traditional data structures (caches, registries, connection pools)
- The critical section is small and fast
- You need fine-grained control over locking (like RWMutex for read-heavy workloads)
Use Channels when:
- Transferring ownership of data between goroutines
- Coordinating goroutines (signaling, waiting, cancellation)
- Building pipelines or processing stages
- The Go proverb applies: “Don’t communicate by sharing memory; share memory by communicating”
A helpful mental model: mutexes protect data, channels coordinate behavior. If you’re guarding a variable, use a mutex. If you’re passing messages, use a channel. Some problems naturally fit one model better - a cache is clearly mutex territory, while a worker pool naturally uses channels for task distribution.
| Use Mutex | Use Channels |
|---|---|
| Protecting shared data | Passing ownership of data |
| Simple state guards | Coordinating goroutines |
| Critical sections | Signaling events |
| Caching | Pipeline patterns |
Key Takeaways
- WaitGroup - wait for goroutines to complete
- Mutex - exclusive access to shared data
- RWMutex - multiple readers, single writer
- sync.Once - one-time initialization
- sync.Pool - reuse temporary objects
- Atomics - lock-free simple operations
- Don’t copy sync types - always pass by pointer
Exercise
Thread-Safe Counter with Stats
Build a counter that tracks not just the count, but also how many times it was incremented and the maximum number of concurrent readers observed during Value() calls.
Next up: Chapter 6: Concurrency Patterns