CLI Usage¶
The Redhound CLI launches a Textual-based TUI dashboard for analyzing stocks and making trading decisions using the multi-agent system.
Installation¶
The CLI is installed automatically when you install the Redhound package.
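To confirm the install worked, you can check that the CLI package is importable from Python; `cli` here is the module path used by the `python -m cli.main` invocation shown below:

```python
# Check whether the CLI package is importable after installation.
# "cli" is the module path used by `python -m cli.main`.
import importlib.util

spec = importlib.util.find_spec("cli")
print("cli package found" if spec is not None else "cli package not installed")
```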
Basic Usage¶
Default Command (TUI Dashboard)¶
Running redhound without arguments launches the TUI with a Setup Screen where you configure all parameters interactively:
```bash
# Launch the TUI dashboard (interactive setup)
redhound

# Or with Python module syntax
python -m cli.main
```
Command-Line Options¶
All options are passed directly to the TUI. When enough options are provided (ticker + analysts, or --demo), the setup screen is skipped and the dashboard starts immediately:
```bash
# Show help
redhound --help

# Run in mock mode (no API calls)
redhound --mock

# Skip setup entirely with demo mode
redhound --mock --demo

# Skip setup with specific ticker and analysts
redhound --mock --ticker AAPL --analysts technical,sentiment

# Partial options (setup screen is shown for remaining config)
redhound --mock --ticker SPY
```
CLI Options:
| Option | Short | Description |
|---|---|---|
| `--ticker` | `-t` | Stock ticker to analyze (e.g. AAPL, SPY). |
| `--date` | `-d` | Analysis date in YYYY-MM-DD format. Must not be in the future. |
| `--analysts` | `-a` | Comma-separated: technical, sentiment, news, fundamentals, market_context. |
| `--demo` | | Run with defaults: SPY, today, technical analyst only. |
| `--mock` | | Use mock LLM responses (no API calls, zero cost). |
TUI Screens¶
Setup Screen¶
The setup screen provides interactive configuration with form controls:
- Ticker Symbol: Text input for the stock ticker
- Analysis Date: Date input in YYYY-MM-DD format
- Analysts: Checkboxes for each analyst type
- Research Depth: Radio buttons (Shallow / Medium / Deep)
- LLM Provider: Dropdown (OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama)
- Model Selection: Quick-thinking and deep-thinking model dropdowns (auto-populated per provider)
- Mock Mode: Checkbox toggle
Press the Start Analysis button to begin.
Dashboard Screen¶
The dashboard provides a live view during analysis with three areas:
Left Panel - Agent Status: Team-grouped list of all agents with status icons:
- Pending (dim circle)
- In Progress (yellow dot)
- Completed (green checkmark)
- Error (red X)
Top Bar - Workflow Graph: Unicode pipeline showing progress through phases:
```
Analysts --> Signal --> Research --> Risk --> Decision
```
Center - Tabbed Content (switch with 1, 2, 3 keys):
- Activity - Timestamped, color-coded log stream
- Debates - Chat-style investment and risk debate visualization
- Reports - Live markdown preview of accumulated reports
Bottom Bar - Stats: LLM calls, tool calls, token count, errors, current phase
Report Screen¶
After analysis completes, press r to open the report viewer:
- Left sidebar: Section navigation (Technical, Sentiment, News, Fundamentals, Market Context, Research Decision, Trading Plan, Final Decision)
- Right panel: Full markdown rendering of the selected section
- Press `s` to save the complete report to `reports/<TICKER>/<DATE>/`
Key Bindings¶
| Key | Context | Action |
|---|---|---|
| `q` | Global | Quit the application |
| `d` | Global | Switch to dashboard screen |
| `r` | Global | Switch to reports screen |
| `s` | Global | Start a new session |
| `c` | Dashboard | Cancel running analysis |
| `1` | Dashboard | Show Activity tab |
| `2` | Dashboard | Show Debates tab |
| `3` | Dashboard | Show Reports tab |
| `j` | Report | Next section |
| `k` | Report | Previous section |
| `s` | Report | Save report to disk |
| `Escape` | Any screen | Go back |
Mock Mode¶
Run the CLI without API costs using mock mode:
```bash
# Quick mock demo (skips setup)
redhound --mock --demo

# Mock mode with setup screen
redhound --mock

# Or via environment variable
export REDHOUND_MOCK_MODE=true
redhound
```
Mock mode provides:
- Predefined realistic responses for all agents
- No API calls or costs
- Instant execution (no network latency)
- Deterministic results for testing
Deterministic analysts (Technical, Fundamentals, Market Context) run unchanged in mock mode: they make no LLM calls regardless of the mode.
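As a rough illustration (not Redhound's actual implementation), a deterministic mock LLM can be as simple as a lookup of canned responses keyed by agent role; the role names below match the analyst list above, and the response text is invented:

```python
# Illustrative sketch only: canned, deterministic responses per agent role.
MOCK_RESPONSES = {
    "technical": "RSI near 54; momentum neutral, watching for a MACD crossover.",
    "sentiment": "Social chatter mildly bullish over the trailing week.",
    "news": "No market-moving headlines in the lookback window.",
}

class MockLLM:
    """Stands in for a real LLM client: no network calls, zero cost, repeatable output."""

    def invoke(self, role: str) -> str:
        return MOCK_RESPONSES.get(role, "No signal.")
```

Because the responses are fixed, full runs are reproducible end to end, which is what makes mock mode useful for testing.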
See Mock Mode documentation for complete details.
Configuration¶
Environment Variables¶
Configure behavior using environment variables:
```bash
# Mock Mode
export REDHOUND_MOCK_MODE=true

# Logging
export REDHOUND_LOG_LEVEL=DEBUG

# LLM API Keys
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
export GOOGLE_API_KEY=your_key_here
```
Provider Requirements:
| Provider | API Key Environment Variable | Notes |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | GPT-4, GPT-5, o-series |
| Anthropic | `ANTHROPIC_API_KEY` | Claude models |
| Google | `GOOGLE_API_KEY` | Gemini models |
| xAI | `XAI_API_KEY` | Grok models |
| OpenRouter | `OPENROUTER_API_KEY` | Access to multiple models |
| Ollama | None | Local models; requires a running Ollama server |
Configuration File¶
Default configuration is in backend/config/settings.py:
```python
DEFAULT_CONFIG = {
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",
    "llm_temperature": 0.7,
}
```
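Since `DEFAULT_CONFIG` is a plain dict, per-run overrides can be layered on top of it with a shallow merge. A minimal sketch (the `build_config` helper is illustrative, not part of `settings.py`):

```python
# Illustrative: shallow-merge run-time overrides over the documented defaults.
DEFAULT_CONFIG = {
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",
    "llm_temperature": 0.7,
}

def build_config(**overrides) -> dict:
    """Return a config dict with overrides applied on top of the defaults."""
    return {**DEFAULT_CONFIG, **overrides}

config = build_config(llm_provider="anthropic", max_debate_rounds=2)
```

Keys not overridden (like `llm_temperature` above) keep their default values.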
Troubleshooting¶
API Key Errors¶
Terminal Too Small¶
The TUI requires a minimum terminal size. If widgets appear broken, resize your terminal to at least 120 columns by 30 rows.
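You can check your current terminal size with Python's standard library before launching; the 120x30 threshold below is the minimum stated above:

```python
import shutil

MIN_COLS, MIN_ROWS = 120, 30

size = shutil.get_terminal_size()
if size.columns < MIN_COLS or size.lines < MIN_ROWS:
    print(f"Terminal is {size.columns}x{size.lines}; resize to at least {MIN_COLS}x{MIN_ROWS}.")
else:
    print("Terminal size OK.")
```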
Debug Mode¶
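Set `REDHOUND_LOG_LEVEL=DEBUG` (see Environment Variables above) to get verbose logging. As a sketch of how a CLI might resolve that variable into a logging level (illustrative, not Redhound's actual code):

```python
import logging
import os

def resolve_log_level(default: str = "INFO") -> int:
    """Map REDHOUND_LOG_LEVEL (e.g. DEBUG, INFO, WARNING) to a logging constant."""
    name = os.environ.get("REDHOUND_LOG_LEVEL", default).upper()
    return getattr(logging, name, logging.INFO)

logging.basicConfig(level=resolve_log_level())
```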
Next Steps¶
- Read Configuration Reference for detailed configuration options
- Read Mock Mode for cost-free development
- Read Architecture to understand the system
- Read Developer Onboarding for development setup