
python.unbounded_memory

Stability: High

Detects operations that can consume unbounded memory, leading to out-of-memory errors and service crashes.

Unbounded memory operations can:

  • Crash your service — OOM killer terminates the process
  • Degrade performance — Memory pressure causes GC thrashing
  • Affect neighbors — Other services on the same node suffer
  • Enable DoS attacks — Malicious input triggers memory exhaustion

# ❌ Before (unbounded memory)
def process_file(file_path):
    with open(file_path) as f:
        lines = f.readlines()  # Loads entire file into memory
        return process_lines(lines)

def fetch_all_users():
    return list(User.query.all())  # Loads all users into memory

# ✅ After (bounded/streaming)
def process_file(file_path):
    with open(file_path) as f:
        for line in f:  # Streams line by line
            yield process_line(line)

def fetch_users_batch(batch_size=1000):
    offset = 0
    while True:
        batch = User.query.offset(offset).limit(batch_size).all()
        if not batch:
            break
        yield from batch
        offset += batch_size
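
Note that the streaming process_file is now a generator, so callers iterate over it instead of receiving a list. A minimal usage sketch, assuming hypothetical handle and send_email helpers:

# Results arrive one at a time, so peak memory stays roughly constant
for result in process_file("big_input.log"):
    handle(result)  # hypothetical per-result consumer

# Only one batch of users is resident in memory at any moment
for user in fetch_users_batch(batch_size=500):
    send_email(user)  # hypothetical per-user action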
Common patterns flagged by this rule:

  • file.readlines() without size limits
  • file.read() without a size parameter (see the capped-read sketch after this list)
  • ORM queries that load all records (query.all(), list(query))
  • Unbounded list comprehensions with external data
  • Growing collections without bounds
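
For the bare file.read() case, one common mitigation is to cap how many bytes a single call may consume and reject oversized input. A minimal sketch; the 10 MB limit and the ValueError are illustrative choices, not part of Unfault's generated patch:

MAX_BYTES = 10 * 1024 * 1024  # illustrative 10 MB cap

def read_capped(path, limit=MAX_BYTES):
    with open(path, 'rb') as f:
        data = f.read(limit + 1)  # read at most limit + 1 bytes
        if len(data) > limit:
            raise ValueError(f"{path} exceeds {limit} byte limit")
        return data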

Unfault generates patches that add streaming and pagination:

# Streaming file reads
def read_large_file(path, chunk_size=8192):
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Paginated database queries
def iter_all_items(session, model, batch_size=1000):
    offset = 0
    while True:
        batch = session.query(model).limit(batch_size).offset(offset).all()
        if not batch:
            return
        yield from batch
        offset += batch_size

# Bounded collections
from collections import deque

# Fixed-size buffer
buffer = deque(maxlen=1000)

# Memory-mapped files for large data
import mmap

with open('large_file.bin', 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Process without loading entire file
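
The helpers above compose with ordinary iteration. A short usage sketch; the SQLAlchemy session, the Item model, and the process function are assumed for illustration:

import hashlib

# Stream a file in fixed-size chunks; memory use is bounded by chunk_size
digest = hashlib.sha256()
for chunk in read_large_file('large_file.bin'):
    digest.update(chunk)

# Walk a large table one page at a time instead of loading it all at once
for item in iter_all_items(session, Item, batch_size=500):
    process(item)  # hypothetical per-row handler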