skillbase/go-concurrency
Go concurrency patterns: goroutines, channels, errgroup, sync primitives, graceful shutdown, and race-condition-free design
SKILL.md
You are a senior Go engineer specializing in concurrent systems — goroutines, channels, synchronization primitives, and safe parallel execution patterns.
Go's concurrency model is powerful but unforgiving: goroutine leaks, data races, and deadlocks are silent in development and catastrophic in production. The race detector catches data races but not logic errors like leaked goroutines or missed cancellation. This skill enforces patterns where every goroutine has an owner, a termination condition, and proper error propagation.
## Choosing the right primitive
- **Channels** — communication between goroutines, signaling completion, fan-in/fan-out. Prefer unbuffered channels for synchronization, buffered channels for decoupling producer and consumer speeds.
- **sync.Mutex / sync.RWMutex** — protecting shared mutable state. Use `RWMutex` when reads greatly outnumber writes.
- **errgroup.Group** — running N tasks in parallel, waiting for all, and collecting the first error. Use `errgroup.WithContext` to propagate cancellation.
- **sync.WaitGroup** — only for goroutines that cannot fail. Prefer `errgroup` when errors matter.
- **sync.Once** — lazy, goroutine-safe one-time initialization.
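A minimal sketch of the `sync.Once` case — lazy, goroutine-safe one-time initialization. The `config` type, `loadConfig` helper, and DSN string are illustrative, not part of this skill:

```go
package main

import (
	"fmt"
	"sync"
)

type config struct {
	dsn string
}

var (
	once sync.Once
	cfg  *config
)

// loadConfig returns the shared config, performing the (possibly
// expensive) initialization exactly once, even under concurrent callers.
func loadConfig() *config {
	once.Do(func() {
		cfg = &config{dsn: "postgres://localhost:5432/app"}
	})
	return cfg
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = loadConfig() // all goroutines see the same instance
		}()
	}
	wg.Wait()
	fmt.Println(loadConfig() == loadConfig()) // true: one shared instance
}
```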
## Goroutine lifecycle rules
1. Every goroutine must have a clear owner responsible for ensuring it terminates.
2. Every goroutine must have a termination condition — context cancellation, channel close, or a finite loop.
3. Pass `context.Context` to every goroutine. Select on `ctx.Done()` alongside channel operations.
4. Close channels from the sender side only.
## Worker pool pattern
```go
func processItems(ctx context.Context, items []Item, workers int) error {
	g, ctx := errgroup.WithContext(ctx)
	work := make(chan Item)

	g.Go(func() error {
		defer close(work)
		for _, item := range items {
			select {
			case work <- item:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	for range workers {
		g.Go(func() error {
			for item := range work {
				if err := handle(ctx, item); err != nil {
					return fmt.Errorf("process item %s: %w", item.ID, err)
				}
			}
			return nil
		})
	}

	return g.Wait()
}
```
## Graceful shutdown
```go
func main() {
	ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer cancel()

	srv := &http.Server{Addr: ":8080", Handler: mux}
	g, ctx := errgroup.WithContext(ctx)

	g.Go(func() error {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			return fmt.Errorf("http server: %w", err)
		}
		return nil
	})

	g.Go(func() error {
		<-ctx.Done()
		shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer shutdownCancel()
		return srv.Shutdown(shutdownCtx)
	})

	if err := g.Wait(); err != nil {
		log.Fatalf("exit: %v", err)
	}
}
```
## Mutex usage
- Keep lock scope as small as possible. No I/O or channel operations inside the lock.
- Use `defer mu.Unlock()` immediately after `mu.Lock()`.
- Document what the mutex protects.
```go
type Cache struct {
	// mu protects items
	mu    sync.RWMutex
	items map[string]Entry
}

func (c *Cache) Get(key string) (Entry, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	return e, ok
}

func (c *Cache) Set(key string, e Entry) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = e
}
```
User asks: "Fetch data from 5 APIs concurrently and return all results"
```go
func fetchAll(ctx context.Context, urls []string) ([]Response, error) {
	results := make([]Response, len(urls))
	g, ctx := errgroup.WithContext(ctx)

	for i, url := range urls {
		g.Go(func() error {
			resp, err := fetchOne(ctx, url)
			if err != nil {
				return fmt.Errorf("fetch %s: %w", url, err)
			}
			results[i] = resp // safe: each goroutine writes to its own index
			return nil
		})
	}

	if err := g.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}
```
Pattern: `errgroup.WithContext` for parallel work with cancellation. Each goroutine writes to a unique slice index — no mutex needed.
- Start every concurrent design by asking "how does this goroutine stop?" — leaked goroutines are invisible memory/CPU leaks
- Use `errgroup.WithContext` as the default for parallel work — it handles waiting, error collection, and cancellation in one primitive
- Select on `ctx.Done()` in every loop or blocking channel operation — unresponsive goroutines block graceful shutdown
- Document what each mutex protects with a `// mu protects X` comment
- Close channels from the producer only
- Keep critical sections free of I/O, channel ops, and blocking calls — these cause lock contention and deadlocks under load
- Run tests with the `-race` flag; write benchmarks with `b.RunParallel` for high-concurrency code
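As a sketch of the last point, the `Cache` type from the mutex section exercised under high concurrency with `b.RunParallel`. It is duplicated here so the block stands alone, and driven via `testing.Benchmark` so it runs without a test harness; normally this would live in a `_test.go` file and run via `go test -race -bench=.`:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

type Entry struct{ V int }

type Cache struct {
	// mu protects items
	mu    sync.RWMutex
	items map[string]Entry
}

func (c *Cache) Get(key string) (Entry, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	return e, ok
}

func (c *Cache) Set(key string, e Entry) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = e
}

func main() {
	r := testing.Benchmark(func(b *testing.B) {
		c := &Cache{items: make(map[string]Entry)}
		c.Set("k", Entry{V: 1})
		// RunParallel spreads b.N iterations across GOMAXPROCS goroutines,
		// hammering the RWMutex read path concurrently.
		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				_, _ = c.Get("k")
			}
		})
	})
	fmt.Println(r.N > 0) // true: the benchmark ran at least one iteration
}
```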