
go.unbounded_goroutines

Category: Scalability · Severity: High · Causes Production Outages

Detects go func() calls without bounds on concurrent goroutines.

Unbounded goroutine spawning exhausts resources:

  • Memory exhaustion — Each goroutine uses ~2KB+ stack
  • CPU thrashing — Too many goroutines competing for CPU
  • Downstream collapse — Thousands of requests hit your database simultaneously
  • OOM crash — Eventually the process is killed

Input-controlled fan-out is especially dangerous—attackers can DoS your service.

```go
// ❌ Before
for _, item := range items {
	go process(item) // Spawns len(items) goroutines
}
```

If items has 100,000 elements, you spawn 100,000 goroutines at once — roughly 200 MB of stack memory before any of them do useful work.

```go
// ✅ After
sem := make(chan struct{}, 100) // Limit to 100 concurrent
for _, item := range items {
	sem <- struct{}{} // Acquire
	go func(item Item) {
		defer func() { <-sem }() // Release
		process(item)
	}(item)
}
```
The rule flags:

  • go func() inside loops without a semaphore
  • Goroutine spawning driven by external input
  • Missing worker pool patterns

Unfault can wrap loop-spawned goroutines with semaphore-based limiting when the transformation is straightforward.

```go
// Worker pool pattern
func processItems(items []Item, workers int) {
	ch := make(chan Item, len(items))
	var wg sync.WaitGroup

	// Fixed number of workers
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for item := range ch {
				process(item)
			}
		}()
	}

	// Feed items to workers
	for _, item := range items {
		ch <- item
	}
	close(ch)
	wg.Wait()
}
```
```go
// errgroup with limit
import (
	"context"

	"golang.org/x/sync/errgroup"
)

func processItems(ctx context.Context, items []Item) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(100) // At most 100 goroutines in flight
	for _, item := range items {
		item := item // Capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			return process(ctx, item)
		})
	}
	return g.Wait()
}
```