
How It Works

When you run unfault review, everything happens locally. There is no API, no cloud analysis, no data leaving your machine. Understanding this helps explain why Unfault behaves the way it does.

  1. Parse locally. Unfault reads your source files using Tree-sitter and extracts a semantic model: functions, calls, imports, routes.
  2. Build a code graph. File-level semantics are merged into a unified graph capturing how things connect across your whole project.
  3. Run analysis in-process. The unfault-analysis engine runs rules against the graph and produces findings.
  4. Show results. Findings appear in your terminal (or CI output), formatted for the mode you chose.
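The four stages above can be sketched as a single in-process pipeline. Everything below is illustrative: the function names and stub bodies are assumptions, not Unfault's actual internals.

```python
# Illustrative sketch of the four-stage pipeline; names are assumptions.

def parse_file(path: str) -> dict:
    # Stage 1: parse one file locally into a semantic model
    # (stubbed here; the real engine uses Tree-sitter).
    return {"file": path, "functions": []}

def build_graph(models: list) -> dict:
    # Stage 2: merge per-file models into a unified code graph.
    return {"files": [m["file"] for m in models]}

def run_rules(graph: dict) -> list:
    # Stage 3: run analysis rules in-process over the graph.
    return [f"analyzed {len(graph['files'])} files"]

def review(paths: list) -> list:
    # Stage 4: produce findings for the chosen output mode.
    return run_rules(build_graph([parse_file(p) for p in paths]))

findings = review(["users.py", "orders.py"])
```

The point of the sketch is that every stage is an ordinary function call in one process; nothing crosses a network boundary.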

Most analysis tools either send your source code to a server or require heavy language servers. Unfault does neither.

Parsing and analysis run in the same process as the CLI; the same engine runs wherever Unfault does. This means:

  • Privacy. Your source code never leaves your machine.
  • Speed. Parsing runs in parallel on your hardware. No round-trips.
  • No account needed. The core CLI is open source and works offline.
  • Consistency. The same analysis runs on your laptop and in CI.

When Unfault parses your code, it builds a graph with nodes and edges:

Nodes represent things in your code:

  • Files
  • Functions and methods
  • Classes
  • Imports (internal and external)
  • Framework constructs (routes, middleware, handlers)

Edges represent relationships:

  • Contains (file contains function)
  • Calls (function A calls function B)
  • Imports (file imports module)
  • Inherits (class extends another)
  • Framework wiring (app registers route, route uses middleware)
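One plausible in-memory shape for such a graph, purely a sketch with assumed names rather than Unfault's actual data model:

```python
from dataclasses import dataclass, field

# Minimal node/edge code graph; all names here are illustrative.

@dataclass(frozen=True)
class Node:
    kind: str  # "file", "function", "class", "import", "route", ...
    name: str

@dataclass(frozen=True)
class Edge:
    kind: str  # "contains", "calls", "imports", "inherits", ...
    src: Node
    dst: Node

@dataclass
class CodeGraph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def add(self, kind: str, src: Node, dst: Node) -> None:
        # Adding an edge implicitly registers both endpoints as nodes.
        self.nodes.update({src, dst})
        self.edges.append(Edge(kind, src, dst))

g = CodeGraph()
users_file = Node("file", "users.py")
fetch_user = Node("function", "fetch_user")
g.add("contains", users_file, fetch_user)
g.add("calls", fetch_user, Node("function", "requests.get"))
```

Note that the nodes carry only kinds and names: the graph can answer "what calls what" without storing any source text.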

This graph captures the structure of your code without the content. The analysis engine can reason about “function fetch_user calls requests.get with no timeout” without touching the actual URL or request body.

Here’s what happens when you run a review:

Unfault scans your project to understand what it’s looking at:

  • Which languages are present (Python, Go, Rust, TypeScript/JavaScript)
  • Which frameworks are in use (FastAPI, Express, Gin, Axum, Next.js, etc.)
  • Project structure and entry points
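As a rough idea of how language detection can work, here is a toy heuristic that maps file extensions to languages. This is an assumption for illustration; Unfault's real detection (including framework detection) is richer.

```python
import os

# Hypothetical detection heuristic: file extensions hint at languages.
EXT_TO_LANG = {
    ".py": "python", ".go": "go", ".rs": "rust",
    ".ts": "typescript", ".js": "javascript",
}

def detect_languages(paths: list) -> set:
    # Collect the language for every recognized extension, ignore the rest.
    return {EXT_TO_LANG[ext]
            for ext in (os.path.splitext(p)[1] for p in paths)
            if ext in EXT_TO_LANG}

langs = detect_languages(["app/main.py", "cmd/server.go", "README.md"])
```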

For each source file, Unfault:

  • Parses the syntax tree with Tree-sitter
  • Extracts semantic information (functions, classes, calls, imports)
  • Detects framework-specific patterns (route decorators, middleware registration)
  • Builds the local portion of the code graph
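Unfault does this with Tree-sitter so the same approach works across languages. As a single-language stand-in, Python's stdlib ast module can extract the same kind of per-file model:

```python
import ast

# Python-only stand-in for the extraction step: pull functions and the
# calls they make out of one file's syntax tree. (Requires Python 3.9+
# for ast.unparse.)
def extract_semantics(source: str) -> dict:
    tree = ast.parse(source)
    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [ast.unparse(c.func)
                     for c in ast.walk(node) if isinstance(c, ast.Call)]
            functions.append({"name": node.name, "calls": calls})
    return {"functions": functions}

model = extract_semantics(
    "def fetch_user(user_id):\n"
    "    response = requests.get(url)\n"
    "    return response\n"
)
```

The output is a structural summary only; the source text itself is discarded once the model is built.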

Individual file semantics get merged into a unified graph. This is where Unfault resolves:

  • Which function calls go where
  • How imports connect files
  • Framework topology (which routes exist, what middleware applies)
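A toy version of that resolution step, with assumed field names: internal calls resolve to the file that defines the target, anything else is treated as external.

```python
# Toy merge/resolution over per-file models; the field names are
# assumptions mirroring this doc's sketches, not Unfault's schema.
def merge(models: list) -> list:
    # Index every function definition by name across all files.
    defined_in = {fn["name"]: m["file"]
                  for m in models for fn in m["functions"]}
    edges = []
    for m in models:
        for fn in m["functions"]:
            for target in fn.get("calls", []):
                edges.append((f'{m["file"]}:{fn["name"]}',
                              defined_in.get(target, "<external>"),
                              target))
    return edges

edges = merge([
    {"file": "api.py", "functions": [
        {"name": "get_user_route", "calls": ["fetch_user"]}]},
    {"file": "users.py", "functions": [
        {"name": "fetch_user", "calls": ["requests.get"]}]},
])
```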

The unfault-analysis engine runs rules against the graph. Rules are organized by framework profile (e.g., python_fastapi_backend, go_gin_service) and dimension (stability, correctness, performance, scalability).

Each rule produces findings: observations about your code. A finding includes what was detected, where it is, why it matters, and a suggested fix when possible.
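A finding could be modeled along these lines; the field names below are illustrative assumptions, not Unfault's output schema.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed shape of a finding: what, where, why, and a fix when possible.
@dataclass
class Finding:
    rule: str          # what was detected
    dimension: str     # stability, correctness, performance, scalability
    file: str          # where it is
    line: int
    rationale: str     # why it matters
    suggestion: Optional[str] = None  # suggested fix, when available

f = Finding(
    rule="http-call-without-timeout",
    dimension="stability",
    file="users.py",
    line=2,
    rationale="requests.get with no timeout can block a worker indefinitely",
    suggestion="pass an explicit timeout= to requests.get",
)
```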

If you have observability integrations configured (GCP Cloud Monitoring, Datadog, Dynatrace), Unfault can fetch SLO data and enrich findings with production context: which routes have SLO coverage and which don’t. This step is skipped with --offline.

Results are formatted for your chosen output mode and printed to stdout.

To be concrete, if you have this code:

```python
import requests

def fetch_user(user_id: str) -> dict:
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()
```

The analysis sees something like:

```json
{
  "functions": [{
    "name": "fetch_user",
    "file": "users.py",
    "calls": [{"target": "requests.get", "has_timeout": false}]
  }]
}
```

No URL. No variable names beyond what’s needed for the graph. No string literals. Just enough structure to identify the pattern.
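A toy rule over exactly that structure shows why the structure alone is enough. The field names mirror the JSON sketch above and are illustrative:

```python
# Toy rule over the structural model: flag calls to requests.get that
# carry no timeout. Field names follow this doc's sketch, not a real schema.
MODEL = {
    "functions": [{
        "name": "fetch_user",
        "file": "users.py",
        "calls": [{"target": "requests.get", "has_timeout": False}],
    }]
}

def missing_timeout(model: dict):
    for fn in model["functions"]:
        for call in fn["calls"]:
            if call["target"] == "requests.get" and not call["has_timeout"]:
                yield (f'{fn["file"]}: {fn["name"]} calls '
                       'requests.get with no timeout')

findings = list(missing_timeout(MODEL))
```

The rule never needs the URL, the request body, or any string literal; the pattern is visible in the call shape alone.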

Dimensions

How findings are categorized. Read more

Explore the Code Graph

Impact analysis, dependency queries, critical files. Read more

CLI Reference

All commands and flags. Read more