
Dimensions

Unfault groups facts into dimensions based on what aspect of your system they affect. This isn’t just categorization for its own sake. It helps you focus on what matters for your particular situation.

A high-traffic API gateway cares deeply about performance. A financial service prioritizes correctness. A public-facing app needs to think about security. Dimensions let you filter facts to match your priorities.

How well does your system handle the unexpected?

Stability findings surface patterns that tend to cause cascading failures, hung requests, or unrecoverable states. Things like:

  • External calls without timeouts
  • Missing circuit breakers on flaky dependencies
  • Unbounded retries that amplify load during outages
  • Resource leaks that accumulate over time

These patterns often work fine in normal conditions. They become problems when something goes wrong elsewhere, and your code’s response makes it worse.
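
As a concrete illustration, here is a minimal Python sketch of the bounded-retry shape that avoids the "unbounded retries amplify load" trap (the function names and backoff values are illustrative, not Unfault's API):

```python
import time

def call_with_bounded_retries(op, max_attempts=3, base_delay=0.01):
    """Retry a flaky operation at most `max_attempts` times with
    exponential backoff, so a downstream outage is not amplified."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up instead of retrying forever
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Compare this with a bare `while True:` retry loop: during an outage, every caller hammers the failing dependency indefinitely, turning one failure into a cascade.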

Where might your system slow down under load?

Performance findings identify code that may become a bottleneck:

  • Blocking I/O in async contexts
  • N+1 query patterns
  • CPU-intensive work on the event loop
  • Regex compilation in hot paths
  • Unbounded caches that grow forever

Not every performance finding needs immediate action. A slow path that runs once at startup is different from one in your request handler. The findings give you visibility; you decide what matters.
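
To make the N+1 pattern concrete, here is a small sketch with an in-memory stand-in for a database; the tables and helper names are hypothetical, but the query-count difference is the point:

```python
# Hypothetical in-memory "tables"; in real code each fetch_* call
# would be a database query.
AUTHORS = {1: "Ada", 2: "Grace"}
POSTS = [
    {"id": 10, "author_id": 1},
    {"id": 11, "author_id": 2},
    {"id": 12, "author_id": 1},
]
QUERIES = {"count": 0}

def fetch_author(author_id):
    QUERIES["count"] += 1  # one query per call
    return AUTHORS[author_id]

def fetch_authors(author_ids):
    QUERIES["count"] += 1  # one query for the whole batch
    return {i: AUTHORS[i] for i in author_ids}

def authors_n_plus_one(posts):
    # N+1 pattern: one query per post.
    return [fetch_author(p["author_id"]) for p in posts]

def authors_batched(posts):
    # Batched pattern: collect the ids, fetch once, join in memory.
    lookup = fetch_authors({p["author_id"] for p in posts})
    return [lookup[p["author_id"]] for p in posts]
```

Three posts cost three queries in the first version and one in the second; at three million rows that gap is the bottleneck.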

Does your code do what it should?

Correctness findings catch logic issues that could produce wrong results:

  • Race conditions in shared state
  • Recursive functions without base cases
  • Unsafe deserialization of untrusted input
  • Integer overflow in arithmetic operations

These are closer to traditional bugs, but Unfault looks for patterns that static analysis and tests often miss, especially concurrency issues and edge cases in error handling.
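
A race condition in shared state is the classic example of a bug that tests rarely catch. A minimal sketch of the fix (the class and its names are illustrative):

```python
import threading

class SafeCounter:
    """Shared counter guarded by a lock. Without the lock, the
    read-modify-write in `increment` can interleave across threads
    and silently lose updates."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value

def hammer(counter, threads=4, per_thread=10_000):
    """Increment from several threads at once and return the total."""
    def work():
        for _ in range(per_thread):
            counter.increment()
    ts = [threading.Thread(target=work) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter.value
```

The unsynchronized version of this counter usually passes a single-threaded unit test and only drops increments under concurrent load, which is exactly why the pattern is worth flagging statically.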

What attack surface does your code expose?

Security findings highlight potential vulnerabilities:

  • SQL injection via string concatenation
  • Command injection through unsanitized input
  • Hardcoded credentials or secrets
  • Missing authentication on sensitive routes
  • Insecure defaults in cryptographic operations

Security findings tend to require careful evaluation. Context matters: an internal tool has different threat models than a public API.
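
The string-concatenation injection is easiest to see side by side. A self-contained sketch using SQLite (the schema and data are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def roles_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL string.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def roles_safe(name):
    # Parameterized: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

Feeding the classic payload `' OR '1'='1` to the first function dumps every row; the second returns nothing, because the payload is just a string that matches no username.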

Can your system recover gracefully?

Reliability overlaps with stability but focuses on recovery:

  • Missing error handlers on critical paths
  • Silent exception swallowing
  • No fallback when dependencies fail
  • Incomplete cleanup in error paths

A reliable system doesn’t just avoid failures; it handles them well when they occur.
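
The "no fallback when dependencies fail" pattern has a small, general fix: catch the failure at the boundary and degrade. A sketch, where `fetch_live` and the cached default stand in for a real external call and a real cache:

```python
def exchange_rate(fetch_live, cached=1.0):
    """Return the live rate, falling back to a cached value when the
    dependency is unreachable."""
    try:
        return fetch_live()
    except (ConnectionError, TimeoutError):
        # Degrade gracefully instead of propagating the outage upward;
        # in production you would also log and emit a metric here.
        return cached
```

The important design choice is catching *specific* exceptions: a bare `except:` here would silently swallow programming errors too, which is the very anti-pattern listed above.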

Can you see what’s happening?

Observability findings identify blind spots in your monitoring:

  • Missing correlation IDs across service boundaries
  • Log statements that omit crucial context
  • Untracked external calls
  • Missing metrics on key operations

When something goes wrong at 3am, observability determines whether you debug for 10 minutes or 10 hours.
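
A correlation ID is cheap to carry and invaluable at 3am. A minimal sketch of a structured log line that propagates one (the field names are illustrative, not a fixed schema):

```python
import json
import uuid

def log_event(message, correlation_id=None, **context):
    """Build a structured log line that carries a correlation ID, so one
    request can be traced across service boundaries."""
    record = {
        "message": message,
        # Reuse the caller's ID when one was propagated; mint one otherwise.
        "correlation_id": correlation_id or str(uuid.uuid4()),
        **context,
    }
    return json.dumps(record)
```

Each service passes the incoming ID along instead of minting a fresh one, so a single `grep` for that ID reconstructs the whole request path.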

What breaks when traffic grows?

Scalability findings look at patterns that work at low scale but fail at high scale:

  • Linear scans where indexes exist
  • Unbounded result sets without pagination
  • Connection pooling issues
  • Memory allocation patterns that fragment under load
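
Unbounded result sets are the most mechanical of these to fix: return a page and a cursor instead of everything. A sketch, where the `rows` list stands in for a real query:

```python
def fetch_page(rows, cursor=0, page_size=100):
    """Return one bounded page and the cursor for the next page,
    instead of the whole result set at once."""
    page = rows[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(rows) else None
    return page, next_cursor
```

The caller loops until the cursor comes back `None`. At low scale both approaches look identical; the difference appears when the table grows past what one response, or one process's memory, can hold.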

How hard is this code to change safely?

Maintainability is opt-in and not included in default analysis. When enabled, it identifies:

  • High cyclomatic complexity
  • Deep nesting
  • Long functions
  • Circular dependencies

These aren’t bugs. They’re friction. Code that’s hard to understand is code that’s easy to break.
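
Deep nesting in particular has a mechanical remedy: guard clauses. A sketch with the same logic written both ways (the discount rule is made up for the example):

```python
def discount_nested(user):
    # Three levels of nesting for one decision.
    if user is not None:
        if user.get("active"):
            if user.get("orders", 0) > 10:
                return 0.10
    return 0.0

def discount_flat(user):
    # Guard clauses: identical logic, one level deep.
    if user is None or not user.get("active"):
        return 0.0
    if user.get("orders", 0) <= 10:
        return 0.0
    return 0.10
```

Both functions return the same answers; the flat one is easier to extend with a fourth condition without introducing a bug, which is the maintainability argument in miniature.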

Focus your review on specific concerns:

```shell
# Only stability issues
unfault review --dimension stability

# Stability and performance
unfault review --dimension stability --dimension performance
```

By default, Unfault analyzes for:

  • Stability
  • Performance
  • Correctness
  • Security
  • Reliability
  • Observability
  • Scalability

Maintainability is excluded unless explicitly requested.

Different projects have different priorities:

| Project Type      | Focus Dimensions                    |
| ----------------- | ----------------------------------- |
| High-traffic API  | Stability, Performance, Scalability |
| Financial system  | Correctness, Security, Reliability  |
| Internal tool     | Correctness, Maintainability        |
| Public-facing app | Security, Stability, Performance    |

There’s no universal answer. The dimensions are a lens, not a prescription.

Dimensions and severity are orthogonal:

  • Dimension: What aspect of the system is affected
  • Severity: How urgent is this finding

A high-severity stability finding means “this will likely cause problems soon.” A low-severity performance finding means “this could matter at scale, but isn’t urgent.”

You might filter by dimension to focus your review, then prioritize by severity within that set.

Rules Catalog

See all rules organized by dimension. Browse rules

Configuration

Customize which dimensions to analyze. Read more

CLI Usage

Filter findings in practice. Read more