Adding to Your Project

In this tutorial, we’ll add Unfault to an existing project, make sense of the initial findings, and establish a practical workflow. Real codebases have history, constraints, and context that tutorials often ignore. We’ll address that head-on.

You’ll need:

  • A project with some code (Python, Go, Rust, or TypeScript)
  • Unfault installed (installation guide)
  • A few minutes to explore

This works best with a project you know well. The findings will make more sense when you understand the code’s history and constraints.

  1. Navigate to your project

    Terminal window
    cd your-project
  2. Run a review

    Terminal window
    unfault review
  3. See what happens

    Unfault scans your codebase, detects languages and frameworks, and reports what it finds.

Your output might look something like:

Looks good overall, with a couple spots that deserve a closer look. Two themes
keep showing up: other cleanup and resilience hardening. Starting point:
weather.py (HTTP call to external service in `get_weather` lacks circuit
breake...); then weather.py (FastAPI app 'app' missing correlation ID
middleware).
At a glance
· 2 calls without timeouts — could hang if a service is slow
· Circuit breakers would help fail fast when dependencies are down
· Rate limiting would protect against abuse
· CORS config needed if browsers will call this API
───────────────────────────────────────────────────────────────────────────────
1112ms - python / fastapi - 1 file
Tip: use --output full to drill into hotspots.
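Taking that tip, the fuller output mode expands each hotspot with more detail:

Terminal window
unfault review --output full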

Don’t panic. This is normal. Every codebase has findings. The goal isn’t zero findings; it’s understanding what’s there and making informed decisions.

A mature codebase will often have dozens of findings on the first run. This doesn’t mean the code is bad. It means there are patterns that could cause problems, and now you can see them.

Think of it like turning on the lights in a room you’ve been navigating in the dark. The furniture was always there; now you can see it.

Start by filtering to what you care about most:

Terminal window
# If reliability is your concern
unfault review --dimension stability
# If you're optimizing performance
unfault review --dimension performance
# If you're preparing for a security audit
unfault review --dimension security

This narrows the list to findings in your current focus area.

Look at where findings cluster:

Terminal window
unfault review --output json | jq '.findings | group_by(.file) | map({file: .[0].file, count: length}) | sort_by(.count) | reverse | .[0:10]'

Or just scan the output visually. Are findings concentrated in one area? That might be legacy code that needs attention, or a module with different requirements.

Not every finding needs action. Here’s how to think about triage:

High-severity findings are patterns that tend to cause production incidents. They deserve attention, but “attention” doesn’t always mean “fix immediately.”

Ask yourself:

  • Is this code in a hot path? A timeout issue in startup code matters less than one in request handling.
  • Is there context that makes this safe? Maybe the “external” service is actually localhost.
  • Is this a known risk we’ve accepted? Sometimes you know about an issue and have decided to live with it.

Medium-severity findings are real issues that aren’t urgent. They make good candidates for:

  • Tech debt sprints
  • “While you’re in there” improvements
  • Onboarding tasks for new team members

Low-severity findings are minor improvements. They’re worth knowing about but rarely worth dedicated effort. Address them opportunistically when you’re already changing that code.
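Before deciding where to start, it helps to see how findings break down by severity. Reusing the JSON fields from the clustering example above (this assumes the same `.findings` array and `.severity` field), a quick distribution looks like:

Terminal window
unfault review --output json | jq '.findings | group_by(.severity) | map({severity: .[0].severity, count: length})'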

Add Unfault to CI and check for findings:

Terminal window
unfault review

Exit code 5 means findings were detected; exit code 0 means clean. In your CI script, you can parse JSON output to filter by severity:

Terminal window
unfault review --output json | jq '[.findings[] | select(.severity == "high")] | length'
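Putting the pieces together, a CI step can gate on severity rather than the raw exit code. A minimal shell sketch (the message text is just an example):

Terminal window
# Count high-severity findings using the same jq filter as above.
# `unfault review` itself exits 5 whenever any findings exist, so we
# gate on the count instead of the exit code.
high=$(unfault review --output json | jq '[.findings[] | select(.severity == "high")] | length')
if [ "$high" -gt 0 ]; then
  echo "Blocking merge: $high high-severity finding(s)"
  exit 1
fi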

Run reviews regularly and track the trend. Are findings increasing or decreasing? Is new code cleaner than old code?

Terminal window
unfault review --output json > findings-$(date +%Y%m%d).json
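To read the trend from those snapshots, one lightweight option is to print the finding count per file; the YYYYMMDD naming keeps the glob in date order (this assumes each snapshot has the same `.findings` array):

Terminal window
# One total per snapshot, oldest first
jq '.findings | length' findings-*.json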

In CI, analyze only what changed by running from the appropriate directory:

Terminal window
# Review only a specific subdirectory
cd services/api && unfault review

Or use your CI system’s path filtering to trigger Unfault only when relevant files change.
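If your CI system’s path filtering is unavailable or awkward, a shell guard can approximate it. A sketch that assumes `origin/main` is your comparison base:

Terminal window
# Run the review only when files under services/api changed on this branch
if git diff --name-only origin/main...HEAD | grep -q '^services/api/'; then
  (cd services/api && unfault review)
fi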

Sometimes a finding is valid but not applicable. You can suppress it:

# ignore[python.http.missing_timeout]
response = requests.get(internal_url)  # Internal service, sub-ms latency

Here’s a practical approach for addressing findings over time:

Run the review. Read through the findings. Don’t fix anything yet. Just understand what’s there.

Go through high-severity findings. For each one:

  • If it’s a real risk: create a ticket
  • If it’s a false positive: suppress with a comment
  • If you’re not sure: leave it for now

Work through the tickets. Start with findings in code you’re already changing. Fix them as part of normal work, not as a separate “cleanup” project.
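As a concrete example, the missing-timeout findings from the sample output are often one-argument fixes once you’re already in the file. A minimal Python sketch (the URL and the 5-second value are illustrative, not a recommendation):

import requests

# Before: no timeout, so a slow upstream can hang this call indefinitely.
# response = requests.get("https://api.example.com/weather")

# After: a bounded wait; tune the value to the service you're calling.
response = requests.get("https://api.example.com/weather", timeout=5)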

Run reviews in CI. New code should be cleaner than old code. Over time, the baseline improves.

In this tutorial, you:

  • Ran Unfault on a real project
  • Made sense of the initial findings
  • Learned how to triage by severity and dimension
  • Saw options for establishing a workflow
  • Understood how to suppress findings when appropriate

The first run is always the noisiest. It gets quieter as you address the highest-risk patterns and establish baseline expectations.

CI/CD Integration

Add Unfault to your pipeline.

Configuration

Customize behavior for your project.

Suppressing Rules

More on managing findings.

VS Code Extension

Get findings in your editor.