python.unbounded_concurrency
Scalability
High
Causes Production Outages
Detects asyncio.gather() called over an unbounded task list, and thread or process pools created without max_workers.
Why It Matters
Unbounded concurrency exhausts system resources:
- File descriptor exhaustion — Each connection uses a file descriptor
- Memory exhaustion — Each task/thread consumes memory
- Downstream overload — Thousands of simultaneous requests can DoS your database
- Cascade failures — One component failing under load brings down others
The problem scales with input size. Small tests pass; production traffic crashes.
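The file descriptor ceiling in particular is easy to check and easy to hit. The snippet below is a quick diagnostic sketch using only the standard library on Unix-like systems; it is illustrative and not part of Unfault.

```python
# Inspect the process's open-file limit (Unix-like systems only).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Each concurrent socket consumes one descriptor, so an unbounded gather over
# network calls hits this ceiling long before CPU or memory becomes the issue.
```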
Example
```python
# ❌ Before
async def process_all(items):
    await asyncio.gather(*[process(item) for item in items])
```

If `items` has 10,000 elements, you spawn 10,000 concurrent tasks.
```python
# ✅ After
async def process_all(items):
    semaphore = asyncio.Semaphore(100)

    async def limited_process(item):
        async with semaphore:
            return await process(item)

    await asyncio.gather(*[limited_process(item) for item in items])
```

Now at most 100 tasks run concurrently.
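If several call sites need the same cap, the semaphore wrapping can live in one small helper. The sketch below is illustrative only; `gather_bounded` is a hypothetical name, not an asyncio or Unfault API.

```python
import asyncio
from collections.abc import Awaitable, Iterable
from typing import TypeVar

T = TypeVar("T")

async def gather_bounded(coros: Iterable[Awaitable[T]], limit: int) -> list[T]:
    """Run the given awaitables with at most `limit` executing at once."""
    semaphore = asyncio.Semaphore(limit)

    async def run_one(coro: Awaitable[T]) -> T:
        async with semaphore:
            return await coro

    return await asyncio.gather(*[run_one(c) for c in coros])

# Usage (inside an async function):
#     results = await gather_bounded((process(item) for item in items), limit=100)
```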
What Unfault Detects
- `asyncio.gather(*[...])` patterns
- `asyncio.gather(*tasks)` where `tasks` comes from unbounded iteration
- `ThreadPoolExecutor()` without `max_workers`
- `ProcessPoolExecutor()` without `max_workers`
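For concreteness, here is roughly what those shapes look like in source; this is an illustrative sketch, and the detector's exact matching rules may differ.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

async def flagged_shapes(requests, handle):
    # gather over an unbounded comprehension
    await asyncio.gather(*[handle(r) for r in requests])

    # gather over a task list built by unbounded iteration
    tasks = [handle(r) for r in requests]
    await asyncio.gather(*tasks)

    # executors created without max_workers
    thread_pool = ThreadPoolExecutor()
    process_pool = ProcessPoolExecutor()
```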
Auto-Fix
Unfault can wrap the gather pattern with a semaphore-based limiter when the transformation is unambiguous.
Best Practices
```python
# Use asyncio.Semaphore for async code
import asyncio
semaphore = asyncio.Semaphore(100)

# Use bounded executors for thread/process pools
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=10)

# Or use libraries with built-in limiting
import aiohttp
connector = aiohttp.TCPConnector(limit=100)
```
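As a sketch of the last option, assuming aiohttp is installed: the connector caps open connections, so requests beyond the limit wait for a free slot instead of each opening its own socket. Names like `fetch_all` are illustrative.

```python
import asyncio
import aiohttp

async def fetch_all(urls):
    # TCPConnector(limit=100) caps simultaneous connections; extra requests
    # wait for a free connection rather than all opening sockets at once.
    connector = aiohttp.TCPConnector(limit=100)
    async with aiohttp.ClientSession(connector=connector) as session:

        async def fetch(url):
            async with session.get(url) as resp:
                return await resp.text()

        # Note: one task per URL is still created; for very large inputs,
        # combine this with a semaphore to bound the task count as well.
        return await asyncio.gather(*[fetch(u) for u in urls])
```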