
Ask Questions

The unfault ask command lets you query your project’s health using natural language. It searches through past reviews and findings to answer questions about patterns, issues, and trends.

When you run unfault review, the analysis results are indexed. The ask command uses retrieval-augmented generation (RAG) to search that index and find relevant context for your question.
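
As a rough illustration of what happens under the hood, the sketch below scores indexed finding summaries against a question and ranks the closest matches. unfault's actual index, embedding model, and storage are internal; everything here, including the embed helper and the sample findings, is a simplified stand-in.

import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Sample indexed findings, standing in for a real review index.
findings = [
    "missing retry on outbound HTTP call",
    "missing rate limiting on public endpoint",
    "cpu-bound JSON parsing inside the event loop",
]

query = embed("What are my main stability concerns?")
for finding in sorted(findings, key=lambda f: cosine(query, embed(f)), reverse=True):
    print(round(cosine(query, embed(finding)), 2), finding)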

As a result, ask can answer questions like:

  • “What are my main stability concerns?”
  • “How do we handle database timeouts?”
  • “Which files have the most issues?”
  • “What HTTP clients are used?”
unfault ask "What are my main stability concerns?"

The command returns relevant findings and context from past reviews:

Found 2 instances of missing rate limiting in
src/environment/service.py, src/integration/service.py. Also
found: cpu in event loop (2), missing retry (2).
→ src/plan/providers/github.py:87 missing retry (+1 more)
87 │ r = await h.put(
→ src/plan/providers/github.py:87 missing circuit breaker (+1 more)
87 │ r = await h.put(
→ src/plan/crud.py:70 cpu in event loop (+1 more)
70 │ plan_dict = orjson.loads(new_plan.model_dump_json())

By default, ask returns raw context. For a more conversational answer, use the --llm flag:

unfault ask "How do we handle database timeouts?" --llm

This sends the retrieved context to your configured LLM and returns a synthesized response:

Based on the analysis of your codebase:
Your database connections currently don't specify explicit timeouts.
The SQLAlchemy engine in src/core/database.py uses default settings,
which means connections can hang indefinitely if the database becomes
unresponsive.
Three files create database sessions:
- src/core/database.py (engine creation)
- src/api/deps.py (session dependency)
- src/workers/tasks.py (background worker sessions)
Suggested fix: Add pool_timeout and connect_args to your engine:
create_engine(
    DATABASE_URL,
    pool_timeout=30,
    connect_args={"connect_timeout": 10},
)

Focus on a specific part of the codebase:

unfault ask "Performance issues" --path src/api

If you work with multiple projects, scope to a specific workspace:

unfault ask "Recent issues" --workspace wks_abc123

Control how much context is retrieved:

# More finding context (up to 50)
unfault ask "Error handling" --max-findings 30
# More session context (up to 20)
unfault ask "Recent trends" --max-sessions 10

Increase the similarity threshold for more precise matches:

# Default is 0.5, higher means stricter
unfault ask "Circuit breaker patterns" --threshold 0.8

Lower thresholds return more results but may include less relevant context. Higher thresholds are more precise but may miss some relevant findings.
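
To see what the cutoff changes, here is a small sketch with made-up similarity scores; only matches at or above the threshold are kept as context.

# Hypothetical similarity scores, for illustration only.
matches = [
    ("missing retry", 0.82),
    ("missing rate limiting", 0.61),
    ("cpu in event loop", 0.43),
]

for threshold in (0.5, 0.8):
    kept = [name for name, score in matches if score >= threshold]
    print(threshold, kept)
# 0.5 keeps the first two matches; 0.8 keeps only the strongest one.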

For programmatic use:

unfault ask "Stability issues" --json

Returns structured data with:

  • Query metadata
  • Retrieved contexts with similarity scores
  • Source sessions and findings
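
As a sketch of scripted use, the snippet below runs the command and reads the result. The field names (contexts, similarity, text) are assumed for illustration; inspect the actual output to confirm the schema before relying on it.

import json
import subprocess

# Run the command and capture its structured output.
result = subprocess.run(
    ["unfault", "ask", "Stability issues", "--json"],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)

# Assumed shape: a list of retrieved contexts, each with a similarity score.
for ctx in data.get("contexts", []):
    print(ctx.get("similarity"), str(ctx.get("text", ""))[:80])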

The --llm flag requires an LLM provider. Configure one with:

unfault config llm openai --model gpt-4o

This requires the OPENAI_API_KEY environment variable to be set.

Check your current configuration:

unfault config llm show

Here are questions that work well with ask:

Understanding patterns:

  • “What HTTP clients are used in this codebase?”
  • “How is authentication handled?”
  • “What’s the error handling strategy?”

Finding issues:

  • “What are the main stability concerns?”
  • “Which files have the most findings?”
  • “Are there any blocking calls in async code?”

Tracking progress:

  • “What issues were fixed recently?”
  • “Are findings increasing or decreasing?”
  • “What’s the trend for high-severity issues?”

Planning work:

  • “What should I focus on for stability?”
  • “Which areas need the most attention?”
  • “What patterns keep coming up?”

The ask command requires:

  1. At least one prior review. Run unfault review first to index your codebase.
  2. Authentication. Run unfault login if you haven’t already.
  3. LLM configuration (optional). Only needed for --llm synthesized answers.

Related pages:

  • CLI Reference: full option reference.
  • Explore the Code Graph: query dependencies and impact.
  • Use with AI Agents: integrate with coding assistants.