CLI Reference
Full option reference.
The unfault ask command lets you query your project’s health using natural language. It searches through past reviews and findings to answer questions about patterns, issues, and trends.
When you run unfault review, the analysis results are indexed. The ask command uses retrieval-augmented generation (RAG) to search that index and find relevant context for your question.
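A minimal sketch of the typical workflow (the question text here is just an illustration):

```bash
# Run a review to index the latest analysis results,
# then query that index with a natural-language question.
unfault review
unfault ask "Where are we missing retries?"
```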
This means ask can answer questions like:
```bash
unfault ask "What are my main stability concerns?"
```

The command returns relevant findings and context from past reviews:
```text
Found 2 instances of missing rate limiting in src/environment/service.py,
src/integration/service.py. Also found: cpu in event loop (2), missing retry (2).

→ src/plan/providers/github.py:87  missing retry (+1 more)
     87 │ r = await h.put(
→ src/plan/providers/github.py:87  missing circuit breaker (+1 more)
     87 │ r = await h.put(
→ src/plan/crud.py:70  cpu in event loop (+1 more)
     70 │ plan_dict = orjson.loads(new_plan.model_dump_json())
```

By default, ask returns raw context. For a more conversational answer, use the --llm flag:
```bash
unfault ask "How do we handle database timeouts?" --llm
```

This sends the retrieved context to your configured LLM and returns a synthesized response:
Based on the analysis of your codebase:

Your database connections currently don't specify explicit timeouts. The SQLAlchemy engine in src/core/database.py uses default settings, which means connections can hang indefinitely if the database becomes unresponsive.

Three files create database sessions:
- src/core/database.py (engine creation)
- src/api/deps.py (session dependency)
- src/workers/tasks.py (background worker sessions)
Suggested fix: Add pool_timeout and connect_args to your engine:
```python
from sqlalchemy import create_engine

engine = create_engine(
    DATABASE_URL,
    pool_timeout=30,
    connect_args={"connect_timeout": 10},
)
```

Focus on a specific part of the codebase:
```bash
unfault ask "Performance issues" --path src/api
```

If you work with multiple projects, scope to a specific workspace:
```bash
unfault ask "Recent issues" --workspace wks_abc123
```

Control how much context is retrieved:
```bash
# More finding context (up to 50)
unfault ask "Error handling" --max-findings 30

# More session context (up to 20)
unfault ask "Recent trends" --max-sessions 10
```

Increase the similarity threshold for more precise matches:
```bash
# Default is 0.5, higher means stricter
unfault ask "Circuit breaker patterns" --threshold 0.8
```

Lower thresholds return more results but may include less relevant context. Higher thresholds are more precise but may miss some relevant findings.
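If a strict query comes back empty, it can help to lower the threshold and widen the search; a quick sketch (the question and value are illustrative):

```bash
# Cast a wider net when a precise query returns nothing
unfault ask "Circuit breaker patterns" --threshold 0.3
```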
For programmatic use:
```bash
unfault ask "Stability issues" --json
```

This returns the retrieved context as structured JSON.
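For downstream tooling, the output can be piped straight into other commands; a minimal sketch, assuming the JSON is written to stdout (the exact schema is not shown here):

```bash
# Pretty-print the structured output and save it for later processing (requires jq)
unfault ask "Stability issues" --json | jq '.' > ask-results.json
```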
The --llm flag requires an LLM provider. Configure one with:
```bash
unfault config llm openai --model gpt-4o
```

Requires the OPENAI_API_KEY environment variable.
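One common way to provide the key for the current shell session (the key value is a placeholder):

```bash
export OPENAI_API_KEY="sk-..."  # placeholder; use your own key
```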
```bash
unfault config llm anthropic --model claude-sonnet-4-5
```

Requires the ANTHROPIC_API_KEY environment variable.
```bash
unfault config llm ollama --model llama3.2
```

Requires Ollama running locally.
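If Ollama isn't running yet, a minimal local setup sketch using the standard ollama CLI:

```bash
# Download the model, then start the local Ollama server
ollama pull llama3.2
ollama serve
```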
Check your current configuration:
```bash
unfault config llm show
```

Here are the kinds of questions that work well with ask:
- Understanding patterns
- Finding issues
- Tracking progress
- Planning work
The ask command requires:
- Running unfault review first to index your codebase.
- Running unfault login if you haven't already.
- A configured LLM provider for --llm synthesized answers.