In this tutorial, we’ll add Unfault to an existing project, make sense of the initial findings, and establish a practical workflow. Real codebases have history, constraints, and context that tutorials often ignore. We’ll address that head-on.
You’ll need:

- The Unfault CLI installed
- An existing project to run it against

This works best with a project you know well. The findings will make more sense when you understand the code’s history and constraints.
Navigate to your project

```bash
cd your-project
```

Run a review

```bash
unfault review
```

See what happens
Unfault scans your codebase, detects languages and frameworks, and reports what it finds.
Your output might look something like:
```
Looks good overall, with a couple spots that deserve a closer look. Two themes keep showing up: other cleanup and resilience hardening. Starting point: weather.py (HTTP call to external service in `get_weather` lacks circuit breake...); then weather.py (FastAPI app 'app' missing correlation ID middleware).

At a glance
· 2 calls without timeouts — could hang if a service is slow
· Circuit breakers would help fail fast when dependencies are down
· Rate limiting would protect against abuse
· CORS config needed if browsers will call this API

───────────────────────────────────────────────────────────────────────────────
1112ms - python / fastapi - 1 file
Tip: use --output full to drill into hotspots.
```

Don’t panic. This is normal. Every codebase has findings. The goal isn’t zero findings; it’s understanding what’s there and making informed decisions.
A mature codebase will often have dozens of findings on the first run. This doesn’t mean the code is bad. It means there are patterns that could cause problems, and now you can see them.
Think of it like turning on the lights in a room you’ve been navigating in the dark. The furniture was always there; now you can see it.
Start by filtering to what you care about most:
```bash
# If reliability is your concern
unfault review --dimension stability

# If you're optimizing performance
unfault review --dimension performance

# If you're preparing for a security audit
unfault review --dimension security
```

This narrows the list to findings in your current focus area.
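If you want a quick count for one dimension, the filter should compose with the JSON output used later in this tutorial; that combination is an assumption worth verifying against your CLI version:

```bash
# Count security findings only; assumes --dimension and --output compose.
unfault review --dimension security --output json | jq '.findings | length'
```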
Look at where findings cluster:
```bash
unfault review --output json | jq '.findings | group_by(.file) | map({file: .[0].file, count: length}) | sort_by(.count) | reverse | .[0:10]'
```

Or just scan the output visually. Are findings concentrated in one area? That might be legacy code that needs attention, or a module with different requirements.
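A similar query shows which rules fire most often, which tells you whether one pattern dominates. This sketch assumes each finding carries a rule field; the suppression config later in this tutorial references rule IDs, but inspect your actual JSON first:

```bash
# Top rules by finding count; the .rule field name is an assumption.
unfault review --output json | jq '.findings | group_by(.rule) | map({rule: .[0].rule, count: length}) | sort_by(.count) | reverse'
```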
Not every finding needs action. Here’s how to think about triage:
High-severity findings are patterns that tend to cause production incidents. They deserve attention, but “attention” doesn’t always mean “fix immediately.”

Ask yourself:

- Does this code path actually run in production, or is it a one-off script?
- Is the dependency internal and predictable, or external and out of your control?
Medium findings are real issues that aren’t urgent. They make good candidates for:

- Backlog tickets you work through as part of normal development
- Fixes you bundle into changes you’re already making in that area
Low-severity findings are minor improvements. They’re worth knowing about but rarely worth dedicated effort. Address them opportunistically when you’re already changing that code.
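To pull the medium-severity list into a ticketing workflow, you can filter the JSON output the same way the CI examples below filter on high severity. The .file and .severity fields appear in other examples in this tutorial; .message is an assumption, so check the real shape first:

```bash
# List medium-severity findings for ticketing. The .message field name is
# an assumption; run `unfault review --output json | jq '.findings[0]'`
# to see your actual schema.
unfault review --output json \
  | jq '[.findings[] | select(.severity == "medium") | {file, message}]'
```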
Add Unfault to CI and check for findings:
```bash
unfault review
```

Exit code 5 means findings were detected; exit code 0 means clean. In your CI script, you can parse JSON output to filter by severity (a full gate script follows at the end of this section):
```bash
unfault review --output json | jq '[.findings[] | select(.severity == "high")] | length'
```

Run reviews regularly and track the trend. Are findings increasing or decreasing? Is new code cleaner than old code?
```bash
unfault review --output json > findings-$(date +%Y%m%d).json
```

In CI, analyze only what changed by running from the appropriate directory:
```bash
# Review only a specific subdirectory
cd services/api && unfault review
```

Or use your CI system’s path filtering to trigger Unfault only when relevant files change.
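Putting the pieces together, a minimal gate step might look like the sketch below. It assumes bash and jq are available and blocks only on high-severity findings; since `unfault review` exits 5 whenever any findings exist, the script counts severities from the JSON instead of relying on the exit code alone:

```bash
#!/usr/bin/env bash
# Sketch of a CI gate: fail the build only on high-severity findings.
# Assumes jq is on PATH and the JSON shape shown earlier in this tutorial.
high=$(unfault review --output json \
  | jq '[.findings[] | select(.severity == "high")] | length')

if [ "$high" -gt 0 ]; then
  echo "Blocking: $high high-severity finding(s) detected"
  exit 1
fi

echo "No high-severity findings"
```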
Sometimes a finding is valid but not applicable. You can suppress it:
```python
response = requests.get(internal_url)  # Internal service, sub-ms latency
```

Or suppress by rule and path in your configuration:

```yaml
suppress:
  - rule: python.http.missing_timeout
    paths:
      - "scripts/*" # One-off scripts, not production
```

Here’s a practical approach for addressing findings over time:
Run the review. Read through the findings. Don’t fix anything yet. Just understand what’s there.
Go through high-severity findings. For each one, decide whether to fix it now, create a ticket for later, or suppress it with a documented reason.
Work through the tickets. Start with findings in code you’re already changing. Fix them as part of normal work, not as a separate “cleanup” project.
Run reviews in CI. New code should be cleaner than old code. Over time, the baseline improves.
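One lightweight way to enforce an improving baseline is a ratchet: commit the current finding count, fail CI only if it rises, and lower the number as you fix things. A sketch under those assumptions (the baseline file name is arbitrary):

```bash
#!/usr/bin/env bash
# Ratchet sketch: fail only if the finding count exceeds the committed
# baseline in baseline.txt; lower that number as findings get fixed.
baseline=$(cat baseline.txt)
current=$(unfault review --output json | jq '.findings | length')

if [ "$current" -gt "$baseline" ]; then
  echo "Findings rose to $current (baseline $baseline)"
  exit 1
fi

echo "Findings: $current (baseline $baseline)"
```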
In this tutorial, you:

- Ran a first review on an existing project and read the initial findings
- Filtered and clustered findings to see what matters most
- Triaged by severity instead of trying to fix everything at once
- Wired Unfault into CI and suppressed findings that don’t apply
The first run is always the noisiest. It gets quieter as you address the highest-risk patterns and establish baseline expectations.