go.unbounded_goroutines
Scalability
High
Causes Production Outages
Detects `go func()` calls that spawn goroutines without any bound on how many run concurrently.
Why It Matters
Unbounded goroutine spawning exhausts resources:
- Memory exhaustion — Each goroutine uses ~2KB+ stack
- CPU thrashing — Too many goroutines competing for CPU
- Downstream collapse — Thousands of requests hit your database simultaneously
- OOM crash — Eventually the process is killed
Input-controlled fan-out is especially dangerous: an attacker who controls the input size can DoS your service.
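As an illustration, consider a hypothetical handler (the names `handleBatch` and `fetchAndCache` are made up for this sketch, and the `net/http` and `encoding/json` imports are assumed) where the client-supplied payload alone decides how many goroutines are spawned:

```go
// Hypothetical handler: len(req.IDs) comes straight from the request body,
// so the client decides how many goroutines this spawns.
func handleBatch(w http.ResponseWriter, r *http.Request) {
    var req struct {
        IDs []string `json:"ids"`
    }
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
    }

    for _, id := range req.IDs {
        go fetchAndCache(id) // one goroutine per client-supplied ID, no cap
    }
    w.WriteHeader(http.StatusAccepted)
}
```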
Example
Section titled “Example”// ❌ Beforefor _, item := range items { go process(item) // Spawns len(items) goroutines}If items has 100,000 elements, you spawn 100,000 goroutines simultaneously.
```go
// ✅ After
sem := make(chan struct{}, 100) // Limit to 100 concurrent goroutines

for _, item := range items {
    sem <- struct{}{} // Acquire
    go func(item Item) {
        defer func() { <-sem }() // Release
        process(item)
    }(item)
}
```

What Unfault Detects
- `go func()` inside loops without a semaphore
- Goroutine spawning driven by external input
- Missing worker pool patterns
Auto-Fix
Unfault can wrap loop-spawned goroutines with semaphore-based limiting when the transformation is straightforward.
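The exact rewrite depends on the surrounding code; the sketch below only illustrates the general shape of semaphore-based limiting, with an assumed limit of 100 and a `sync.WaitGroup` added on top so the caller can wait for completion (the WaitGroup is an addition of this sketch, not necessarily something the auto-fix inserts):

```go
// Sketch only: assumes items []Item, process(Item), and a limit of 100.
var wg sync.WaitGroup
sem := make(chan struct{}, 100)

for _, item := range items {
    sem <- struct{}{} // blocks once 100 goroutines are in flight
    wg.Add(1)
    go func(item Item) {
        defer wg.Done()
        defer func() { <-sem }()
        process(item)
    }(item)
}
wg.Wait() // unlike the bare semaphore version above, this waits for all items
```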
Best Practices
```go
// Worker pool pattern
func processItems(items []Item, workers int) {
    ch := make(chan Item, len(items))
    var wg sync.WaitGroup

    // Fixed number of workers
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for item := range ch {
                process(item)
            }
        }()
    }

    // Feed items to workers
    for _, item := range items {
        ch <- item
    }
    close(ch)
    wg.Wait()
}
```
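A call site then picks a fixed worker count; `runtime.NumCPU()` below is just one reasonable default for CPU-bound work, not something the pattern requires:

```go
// The cap stays constant regardless of len(items).
processItems(items, runtime.NumCPU())
```

For I/O-bound work a larger fixed cap is often fine; what matters is that the cap does not grow with the input.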
```go
// errgroup with limit
import "golang.org/x/sync/errgroup"

g, ctx := errgroup.WithContext(ctx)
g.SetLimit(100)

for _, item := range items {
    item := item
    g.Go(func() error {
        return process(ctx, item)
    })
}
return g.Wait()
```
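For reference, here is a minimal compilable sketch of the same errgroup pattern wrapped in a function (the package name, `Item`, `process`, `processItemsLimited`, and the limit of 100 are illustrative). Once the limit is reached, `g.Go` blocks until a running goroutine finishes, so the loop itself provides backpressure; the `item := item` copy is only needed before Go 1.22, where loop variables were shared across iterations.

```go
package workqueue

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// Item and process stand in for the real types used elsewhere on this page.
type Item struct{ ID string }

func process(ctx context.Context, item Item) error {
    // ... real work goes here ...
    return nil
}

// processItemsLimited runs process for every item with at most 100 in flight.
func processItemsLimited(ctx context.Context, items []Item) error {
    g, ctx := errgroup.WithContext(ctx)
    g.SetLimit(100)

    for _, item := range items {
        item := item // unnecessary in Go 1.22+ (per-iteration loop variables)
        g.Go(func() error { // blocks here once 100 goroutines are active
            return process(ctx, item)
        })
    }
    return g.Wait()
}
```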