Adding to Your Project
Run Unfault on your actual codebase.
In this tutorial, we’ll analyze a small service, understand what Unfault reveals, and see how to think about the findings. By the end, you’ll know how to read Unfault’s output and decide what to do with it.
We’ll use a simple weather service as our example. Even if you don’t write Python, the patterns are universal: fetching external data, handling errors, returning responses. The same issues show up in every language.
Imagine a small service that fetches weather data from an external API and returns it to users. Here’s the code:
```python
import requests
from fastapi import FastAPI

app = FastAPI()

@app.get("/weather/{city}")
async def get_weather(city: str):
    response = requests.get(f"https://api.weather.example/v1/{city}")
    data = response.json()
    return {"city": city, "temperature": data["temp"], "conditions": data["weather"]}
```

This code works. It fetches weather data and returns it. If you wrote tests, they’d pass. If you deployed it, it would serve requests.
But there are things hiding in this code that only show up in production.
Create a file
Save the code above as weather.py in a new directory.
Run Unfault
```
unfault review
```

See the output
```
Looks good overall, with a couple spots that deserve a closer look. Two themes
keep showing up: other cleanup and resilience hardening. Starting point:
weather.py (HTTP call to external service in `get_weather` lacks circuit
breake...); then weather.py (FastAPI app 'app' missing correlation ID middleware).

At a glance
 · 2 calls without timeouts — could hang if a service is slow
 · Circuit breakers would help fail fast when dependencies are down
 · Rate limiting would protect against abuse
 · CORS config needed if browsers will call this API

────────────────────────────────────────────────────────────────────────────────
1112ms - python / fastapi - 1 file
Tip: use --output full to drill into hotspots.
```

By default, the output is deliberately short, so your first reaction isn’t “ouch, do I now have to pay attention to all of this?”.
To get more details you can run:
```
unfault review --output=concise
```

This will output:
```
Hotspots
→ weather.py (7 signals)

  other: 3 signals
    [python.http.blocking_in_async] Blocking HTTP call via `requests.get` inside async function `get_weather`
      weather.py:10
    [python.fastapi.missing_rate_limiting] FastAPI application lacks rate limiting protection
      weather.py:1
    [fastapi.missing_cors] FastAPI app `app` has no CORS middleware configured
      weather.py:6

  resilience: 3 signals
    [python.resilience.missing_circuit_breaker] HTTP call to external service in `get_weather` lacks circuit breaker protection
      weather.py:10
    [python.fastapi.missing_request_timeout] FastAPI app `app` has no request timeout middleware
      weather.py:6
    [python.http.missing_retry] HTTP call via `requests.get` has no retry policy
      weather.py:10

  observability: 1 signal
    [python.missing_correlation_id] FastAPI app 'app' missing correlation ID middleware
      weather.py:6

→ Run with --raw-findings for raw output (advanced)

────────────────────────────────────────────────────────────────────────────────
1735ms - python / fastapi - 1 file
Tip: use --output full to drill into hotspots.
```

The concise output groups findings by theme (like resilience or observability). Each line includes a signal ID (like `python.http.blocking_in_async`), a short description, and the file and line it points to.

Let’s walk through the findings shown in the analysis output above.
[python.http.blocking_in_async] Blocking HTTP call via `requests.get` inside async function `get_weather`

What it means: The function is declared async, but requests.get is a blocking call. When this runs, it blocks the entire event loop. Other requests wait. Under load, your service grinds to a halt.
Why it matters: This is a common mistake when mixing async frameworks with synchronous libraries. FastAPI is async, but requests is not. They don’t play well together.
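You can see the cost directly with plain asyncio, no web framework needed. In this sketch, five "handlers" that block run one at a time (about 1.0s total), while five truly async handlers wait concurrently (about 0.2s total):

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)  # blocks the whole event loop, like requests.get does

async def async_handler():
    await asyncio.sleep(0.2)  # yields to the loop while waiting

async def timed(handler, n=5):
    # Run n "requests" concurrently and measure wall-clock time.
    start = time.monotonic()
    await asyncio.gather(*(handler() for _ in range(n)))
    return time.monotonic() - start

async def main():
    print(f"blocking: {await timed(blocking_handler):.2f}s")  # ~1.0s: one at a time
    print(f"async:    {await timed(async_handler):.2f}s")     # ~0.2s: all at once

asyncio.run(main())
```

The same serialization happens to real requests hitting your FastAPI app: one slow upstream call stalls every other request sharing the event loop.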
What you might do: Use an async HTTP client like httpx or aiohttp:
```python
import httpx

async with httpx.AsyncClient() as client:
    response = await client.get(f"https://api.weather.example/v1/{city}")
```

[python.fastapi.missing_rate_limiting] FastAPI application lacks rate limiting protection

What it means: Anyone can hit this endpoint as fast as they want. A single client (or a small botnet) can exhaust CPU, memory, and outbound bandwidth.
Why it matters: Even if your code is correct, unbounded request rates turn minor slowness into outages. Rate limiting also protects your upstream dependencies (the weather API) and your wallet.
What you might do: Add rate limiting at the edge (API gateway / ingress) or in-app middleware. For example, FastAPI commonly uses a library like slowapi (or you can enforce limits at NGINX/Envoy):
```python
# Example shape only; choose a limiter that matches your stack.
from fastapi import Request
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter

@app.get("/weather/{city}")
@limiter.limit("60/minute")
async def get_weather(request: Request, city: str):  # slowapi needs the `request` argument
    ...
```

[fastapi.missing_cors] FastAPI app `app` has no CORS middleware configured

What it means: If a browser-based frontend calls this API from a different origin (domain/port), the browser will block requests unless you explicitly allow them.
Why it matters: Teams often discover this late, when the frontend is added. The “fix” is easy, but doing it hastily can accidentally allow any website to call your API.
What you might do: Configure CORS explicitly and narrowly:
```python
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],
    allow_credentials=True,
    allow_methods=["GET"],
    allow_headers=["*"],
)
```

[python.resilience.missing_circuit_breaker] HTTP call to external service in `get_weather` lacks circuit breaker protection

What it means: When the weather API is unhealthy, your service will keep trying anyway. That amplifies failure (queues build up, threads/event loop time gets consumed) instead of failing fast.
Why it matters: Circuit breakers are a resilience primitive: they stop repeated calls to a failing dependency, give it time to recover, and protect your own service from cascading failure.
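The machinery behind a breaker is small: count consecutive failures, trip open, fail fast for a cooldown period, then let one trial call through (half-open). A hand-rolled sketch, for illustration only (names and thresholds are ours):

```python
import time

class SimpleBreaker:
    """Open after `fail_max` consecutive failures; fail fast while open;
    allow one trial call after `reset_timeout` seconds (half-open)."""

    def __init__(self, fail_max: int = 5, reset_timeout: float = 30.0):
        self.fail_max = fail_max
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.fail_max:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

The failing-fast `RuntimeError` is the point: callers get an immediate, cheap error they can handle (serve stale data, return 503) instead of waiting on a dependency that is already down.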
What you might do: Add a circuit breaker around the outbound call (or use one provided by your platform/service mesh):
```python
# Example shape only; choose a breaker that matches your stack.
import pybreaker

breaker = pybreaker.CircuitBreaker(fail_max=5, reset_timeout=30)

@breaker
def fetch_weather_sync(url: str):
    return requests.get(url)
```

[python.fastapi.missing_request_timeout] FastAPI app `app` has no request timeout middleware

What it means: A request to your API can run indefinitely (for example if an upstream dependency hangs or a handler stalls). You have no server-side deadline.
Why it matters: Server-side time limits are a last line of defense. Without them, slow or stuck requests accumulate until the service stops responding.
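The core idea is a deadline around the await. A minimal sketch with plain asyncio (`asyncio.wait_for`), independent of any framework; the hung upstream and the fallback response are invented for the example:

```python
import asyncio

async def slow_upstream():
    await asyncio.sleep(5)  # simulates a hung dependency
    return {"temp": 21}

async def handler_with_deadline():
    try:
        return await asyncio.wait_for(slow_upstream(), timeout=0.1)
    except asyncio.TimeoutError:
        # Fail fast with a clear error instead of holding the request open.
        return {"error": "upstream timed out"}

print(asyncio.run(handler_with_deadline()))  # {'error': 'upstream timed out'}
```

The same deadline can live in middleware (as below), in the reverse proxy, or in the HTTP client itself; what matters is that *some* layer enforces an upper bound.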
What you might do: Enforce an upper bound on request handling time (middleware, reverse proxy, or server settings). A simple pattern is to wrap the handler with a deadline:
```python
import anyio
from starlette.middleware.base import BaseHTTPMiddleware

class RequestTimeoutMiddleware(BaseHTTPMiddleware):
    def __init__(self, app, timeout_seconds: float = 10.0):
        super().__init__(app)
        self.timeout_seconds = timeout_seconds

    async def dispatch(self, request, call_next):
        with anyio.fail_after(self.timeout_seconds):
            return await call_next(request)

app.add_middleware(RequestTimeoutMiddleware, timeout_seconds=10.0)
```

[python.http.missing_retry] HTTP call via `requests.get` has no retry policy

What it means: If the weather API returns a transient error (a 503, a connection reset), your service fails immediately. No second chances.
Why it matters: External services have hiccups. Networks are unreliable. A simple retry with backoff handles the vast majority of transient failures without user impact.
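The pattern itself is just a loop with growing, randomized delays. A dependency-free sketch (the helper name and defaults are ours, for illustration):

```python
import random
import time

def retry_with_backoff(fn, attempts=3, base=0.5, cap=10.0):
    """Call fn(); on failure, sleep up to base * 2**attempt seconds
    (capped, with full jitter) and try again. Re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The jitter matters: if every client retries on the same schedule, they all hammer the recovering service at the same instant. And only retry operations that are safe to repeat; a GET usually is, a payment POST is not.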
What you might do: Add retry logic with backoff and jitter (and only retry safe/idempotent operations). Libraries like tenacity make this straightforward:
```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=10))
async def fetch_weather(client, city):
    response = await client.get(f"https://api.weather.example/v1/{city}")
    response.raise_for_status()
    return response.json()
```

[python.missing_correlation_id] FastAPI app 'app' missing correlation ID middleware

What it means: Requests and logs can’t be tied together with a shared id (for example X-Request-Id). When a user reports “that request failed”, you have no reliable way to trace the full path through services.
Why it matters: Correlation ids are the glue for debugging distributed systems. They speed up incident response, make logs searchable, and improve traceability.
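On the logging side, a common trick is a contextvar plus a logging filter, so every log line carries the current request's id without each call site passing it around (sketch; the logger name and format are our choice):

```python
import contextvars
import io
import logging

# Middleware would set this per request, from the X-Request-Id header.
correlation_id_var = contextvars.ContextVar("correlation_id", default="-")

class CorrelationIdFilter(logging.Filter):
    def filter(self, record):
        record.correlation_id = correlation_id_var.get()
        return True

logger = logging.getLogger("weather")
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(correlation_id)s] %(message)s"))
handler.addFilter(CorrelationIdFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

correlation_id_var.set("req-42")
logger.info("fetching weather")
print(stream.getvalue().strip())  # [req-42] fetching weather
```

With this in place, grepping logs for one id reconstructs a request's whole story.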
What you might do: Add middleware that creates (or propagates) a request id and injects it into logs and responses:
```python
import uuid
from starlette.middleware.base import BaseHTTPMiddleware

class CorrelationIdMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        correlation_id = request.headers.get("X-Request-Id") or str(uuid.uuid4())
        response = await call_next(request)
        response.headers["X-Request-Id"] = correlation_id
        return response

app.add_middleware(CorrelationIdMiddleware)
```

Notice that Unfault doesn’t fix the code for you. It surfaces patterns and explains why they matter. You decide what to do.
Maybe this is a quick prototype and you’ll add resilience later. Maybe the weather API is on the same machine and timeouts don’t matter. Maybe you have retry logic at a different layer.
Context matters. Unfault gives you the information; you make the call.
To see Unfault’s suggested patches:
```
unfault review --output full
```

This shows diffs for each finding:
```diff
--- weather.py
+++ weather.py
@@ -1,4 +1,5 @@
-import requests
+import httpx
 from fastapi import FastAPI

 app = FastAPI()

 @app.get("/weather/{city}")
 async def get_weather(city: str):
-    response = requests.get(f"https://api.weather.example/v1/{city}")
+    async with httpx.AsyncClient() as client:
+        response = await client.get(
+            f"https://api.weather.example/v1/{city}",
+            timeout=5.0
+        )
     data = response.json()
```

In this tutorial, you ran `unfault review` on a sample project, read its output at increasing levels of detail, and walked through each finding.

The patterns here (blocking calls, missing timeouts, no retries, no error handling) show up in every codebase. They’re not bugs in the traditional sense. They’re gaps between “code that works” and “code that works in production.”