# Copy/Paste Oneliners

No explanation needed. Just copy, paste, and run.

```bash
# Summary review: header + findings narrative
unfault review

# All findings grouped by severity and rule (linter view)
unfault lint

# Detailed output with suggested fixes
unfault review --output full

# Machine-readable JSON
unfault review --output json

# Focus on stability only
unfault review --dimension stability
```

Structured output for Claude, Cursor, Copilot, and other AI assistants.

```bash
# JSON output for programmatic parsing
unfault review --output json

# Focus on specific dimensions
unfault review --dimension stability --output json
unfault review --dimension correctness --output json

# Impact analysis before changes
unfault graph impact src/api/routes.py --json
unfault graph critical --json --limit 5
```

Add this to your AGENTS.md or system prompt:

````text
Before committing, run:

```bash
unfault review --output json
```

If high-severity findings are reported, address them before committing.

Before changing a file, check its blast radius:

```bash
unfault graph impact <file>
```
````
```bash
# SARIF for GitHub Code Scanning
unfault review --output sarif > results.sarif
```
In `.github/workflows/unfault.yml`:

```yaml
- name: Run Unfault
  run: unfault review --output sarif > results.sarif

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
```
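The steps above are only a fragment; a sketch of a complete workflow around them might look like the following. This assumes `unfault` is already available on the runner (substitute your own install step), and notes that the SARIF upload action needs `security-events: write` permission.

```yaml
name: Unfault
on: [push, pull_request]

permissions:
  contents: read
  security-events: write  # required by upload-sarif

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install unfault here, per your environment.
      - name: Run Unfault
        run: unfault review --output sarif > results.sarif
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```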
```bash
# Fail if findings are detected (exit code 5)
unfault review && echo "Clean!" || echo "Findings found"

# Explicit exit-code check
unfault review --output json
if [ $? -eq 5 ]; then
  echo "Review the findings above"
  exit 1
fi
```
```bash
# Pipe to jq for filtering
unfault review --output json | jq '.contexts[].findings | map(select(.severity == "High"))'

# Count findings
unfault review --output json | jq '[.contexts[].findings] | flatten | length'
```
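To gate a script on high-severity findings only, the jq filter can be wrapped in a small helper. This is a sketch: `high_count` is our name, not part of the CLI, and it assumes the JSON shape implied above (`{"contexts": [{"findings": [{"severity": ...}]}]}`).

```shell
# Hypothetical helper: count high-severity findings from unfault's JSON on stdin.
# Requires jq.
high_count() {
  jq '[.contexts[].findings[] | select(.severity == "High")] | length'
}

# Usage:
#   unfault review --output json | high_count
```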
```bash
# Full detailed output with suggested fixes
unfault review --output full

# Focus on one dimension
unfault review --dimension stability --output full
unfault review --dimension performance --output full
unfault review --dimension correctness --output full

# Preview fixes without applying them
unfault review --dry-run
```
```bash
# What breaks if I change this file?
unfault graph impact src/core/auth.py

# Find the most connected files
unfault graph critical --limit 10

# What uses this library?
unfault graph library requests
unfault graph library httpx

# External dependencies of a file
unfault graph deps src/api/client.py

# Graph statistics
unfault graph stats
```
```bash
# Configure an LLM for AI-powered review summaries
unfault config llm openai --model gpt-4o
unfault config llm anthropic --model claude-3-5-sonnet-latest
unfault config llm ollama --model llama3.2

# View the current config
unfault config show
unfault config llm show

# Check observability integrations
unfault config integrations show
unfault config integrations verify
```
| Flag | What it does |
| --- | --- |
| `--output full` | Detailed output with fix suggestions |
| `--output json` | Machine-readable JSON |
| `--output sarif` | SARIF 2.1.0 for GitHub/IDE integration |
| `--output concise` | Brief statistics only |
| `--dimension X` | Focus on one dimension |
| `--dry-run` | Preview fixes without applying them |
| `--fix` | Auto-apply suggested fixes |
| `--offline` | Skip SLO and trace fetching |
| `--refresh-cache` | Re-fetch observability data |
| Code | Meaning |
| --- | --- |
| `0` | Success, no findings |
| `5` | Success, findings detected |
| `1` | General error |
| `2` | Config error |
| `4` | Network error |
| `6` | Invalid input |
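The exit codes above can be translated into readable labels in scripts. A minimal sketch; the `describe_exit` helper name is ours, not part of the CLI:

```shell
# Hypothetical helper: map unfault's documented exit codes to labels.
describe_exit() {
  case "$1" in
    0) echo "success, no findings" ;;
    5) echo "success, findings detected" ;;
    1) echo "general error" ;;
    2) echo "config error" ;;
    4) echo "network error" ;;
    6) echo "invalid input" ;;
    *) echo "unknown ($1)" ;;
  esac
}

# Usage:
#   unfault review --output json
#   describe_exit "$?"
```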

Further reading:

- **AI Agents Guide**: full integration patterns.
- **CI/CD Guide**: pipeline integration details.
- **CLI Reference**: complete command docs.