Unfault groups facts into dimensions based on what aspect of your system they affect. This isn’t just categorization for its own sake. It helps you focus on what matters for your particular situation.
A high-traffic API gateway cares deeply about performance. A financial service prioritizes correctness. A public-facing app needs to think about security. Dimensions let you filter facts to match your priorities.
How well does your system handle the unexpected?
Stability findings surface patterns that tend to cause cascading failures, hung requests, or unrecoverable states. Things like:
These patterns often work fine in normal conditions. They become problems when something goes wrong elsewhere, and your code’s response makes it worse.
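For a concrete (if simplified) picture, here is a sketch of the kind of pattern this dimension cares about: an outbound HTTP call with no timeout, which works fine until the dependency stalls. This is an illustration, not a specific Unfault rule; the service URL and function names are invented.

```python
import requests


def fetch_profile(user_id: str) -> dict:
    # No timeout: if the profile service hangs, this call hangs with it,
    # pinning a worker and letting the stall cascade to whatever called us.
    resp = requests.get(f"https://profiles.internal/users/{user_id}")
    resp.raise_for_status()
    return resp.json()


def fetch_profile_bounded(user_id: str) -> dict:
    # Bounded version: fail fast so a slow dependency degrades one request
    # instead of exhausting the whole worker pool.
    resp = requests.get(
        f"https://profiles.internal/users/{user_id}",
        timeout=(3.05, 10),  # (connect, read) seconds
    )
    resp.raise_for_status()
    return resp.json()
```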
Where might your system slow down under load?
Performance findings identify code that may become a bottleneck:
Not every performance finding needs immediate action. A slow path that runs once at startup is different from one in your request handler. The findings give you visibility; you decide what matters.
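As an illustration (again, not a literal rule), the sketch below contrasts an N+1 query loop with a single batched query using Python's sqlite3 module; the orders table and column names are hypothetical. The first version is harmless if it runs once at startup and a likely bottleneck inside a request handler.

```python
import sqlite3


def order_totals_n_plus_one(conn: sqlite3.Connection, customer_ids: list[int]) -> dict[int, float]:
    # One round trip per customer: cheap for a handful of IDs,
    # a likely bottleneck on a hot path with thousands.
    totals = {}
    for cid in customer_ids:
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,),
        ).fetchone()
        totals[cid] = row[0]
    return totals


def order_totals_batched(conn: sqlite3.Connection, customer_ids: list[int]) -> dict[int, float]:
    # One round trip for the whole batch.
    placeholders = ",".join("?" for _ in customer_ids)
    rows = conn.execute(
        "SELECT customer_id, COALESCE(SUM(total), 0) FROM orders "
        f"WHERE customer_id IN ({placeholders}) GROUP BY customer_id",
        customer_ids,
    ).fetchall()
    return dict(rows)
```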
Does your code do what it should?
Correctness findings catch logic issues that could produce wrong results:
These are closer to traditional bugs, but Unfault looks for patterns that static analysis and tests often miss, especially concurrency issues and edge cases in error handling.
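A classic example of what that looks like is an unsynchronized read-modify-write: it passes single-threaded tests and silently loses updates under concurrency. A minimal sketch:

```python
import threading


class Counter:
    def __init__(self) -> None:
        self.value = 0
        self._lock = threading.Lock()

    def increment_racy(self) -> None:
        # Read-modify-write without a lock: two threads can read the same
        # value, and one of the increments is silently lost.
        self.value += 1

    def increment_safe(self) -> None:
        # Holding the lock makes the read-modify-write atomic.
        with self._lock:
            self.value += 1
```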
What attack surface does your code expose?
Security findings highlight potential vulnerabilities:
Security findings tend to require careful evaluation. Context matters: an internal tool faces a different threat model than a public API.
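To make the category concrete, here is one of the most common shapes such a finding takes: SQL assembled by string interpolation versus parameter binding. The table and column names are invented for the example.

```python
import sqlite3


def find_user_injectable(conn: sqlite3.Connection, username: str):
    # User input is spliced into the SQL text, so an input like
    # "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        f"SELECT id, email FROM users WHERE username = '{username}'"
    ).fetchall()


def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # Parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?",
        (username,),
    ).fetchall()
```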
Can your system recover gracefully?
Reliability overlaps with stability but focuses on recovery:
A reliable system doesn’t just avoid failures; it handles them well when they occur.
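As a rough sketch of that recovery-focused angle, the example below retries a transient failure a bounded number of times with jittered backoff, then surfaces the error rather than hanging or retrying forever. The fetch callable and the retry parameters are illustrative assumptions.

```python
import random
import time


def fetch_with_retry(fetch, attempts: int = 3, base_delay: float = 0.2):
    # Retry transient failures, but stay bounded: give up after a few
    # attempts and let the caller decide what "degraded" looks like.
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```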
Can you see what’s happening?
Observability findings identify blind spots in your monitoring:
When something goes wrong at 3am, observability determines whether you debug for 10 minutes or 10 hours.
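For example, an error path that quietly returns a failure value is exactly the kind of blind spot this dimension is about. The sketch below is illustrative; the payment gateway object and logger name are made up.

```python
import logging

logger = logging.getLogger("payments")


def charge(order_id: str, amount_cents: int, gateway) -> bool:
    try:
        gateway.charge(order_id, amount_cents)
        return True
    except Exception:
        # A bare `return False` here would leave no trace. Logging the
        # exception plus the identifiers gives the 3am responder
        # something to search for.
        logger.exception(
            "charge failed order_id=%s amount_cents=%d", order_id, amount_cents
        )
        return False
```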
What breaks when traffic grows?
Scalability findings look at patterns that work at low scale but fail at high scale:
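One common shape of this problem is code that materializes an entire result set in memory, which is fine at ten thousand rows and fails at a hundred million. The sketch below contrasts that with streaming in fixed-size batches; the events table and transform step are hypothetical.

```python
import sqlite3


def export_all_at_once(conn: sqlite3.Connection) -> list:
    # Loads every row into memory before processing: memory use grows
    # linearly with the table.
    return [transform(row) for row in conn.execute("SELECT * FROM events").fetchall()]


def export_streaming(conn: sqlite3.Connection, batch_size: int = 1000):
    # Processes rows in fixed-size batches so memory stays flat as data grows.
    cursor = conn.execute("SELECT * FROM events")
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            break
        for row in batch:
            yield transform(row)


def transform(row):
    # Placeholder for per-row processing.
    return row
```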
How hard is this code to change safely?
Maintainability is opt-in and not included in default analysis. When enabled, it identifies:
These aren’t bugs. They’re friction. Code that’s hard to understand is code that’s easy to break.
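As a small illustration of that friction, compare a deeply nested conditional with the same logic written as guard clauses; the function and field names are invented, and the behavior is identical in both versions.

```python
def apply_discount_nested(order, user):
    # Every change here means re-reading every branch to be sure
    # nothing else falls through differently.
    if order is not None:
        if user is not None:
            if user.get("is_member"):
                if order["total"] > 100:
                    return order["total"] * 0.9
                return order["total"]
            return order["total"]
        return None
    return None


def apply_discount_guarded(order, user):
    # Guard clauses make each early exit explicit and keep the
    # happy path unindented.
    if order is None or user is None:
        return None
    if not user.get("is_member") or order["total"] <= 100:
        return order["total"]
    return order["total"] * 0.9
```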
Focus your review on specific concerns:
```bash
# Only stability issues
unfault review --dimension stability

# Stability and performance
unfault review --dimension stability --dimension performance
```

By default, Unfault analyzes for stability, performance, correctness, security, reliability, observability, and scalability.
Maintainability is excluded unless explicitly requested.
Different projects have different priorities:
| Project Type | Focus Dimensions |
|---|---|
| High-traffic API | Stability, Performance, Scalability |
| Financial system | Correctness, Security, Reliability |
| Internal tool | Correctness, Maintainability |
| Public-facing app | Security, Stability, Performance |
There’s no universal answer. The dimensions are a lens, not a prescription.
Dimensions and severity are orthogonal:
A high-severity stability finding means “this will likely cause problems soon.” A low-severity performance finding means “this could matter at scale, but isn’t urgent.”
You might filter by dimension to focus your review, then prioritize by severity within that set.
- Rules Catalog: See all rules organized by dimension. Browse rules
- Configuration: Customize which dimensions to analyze. Read more
- CLI Usage: Filter findings in practice. Read more