CLI Usage Guide¶
The Maricusco CLI provides an interactive terminal interface for analyzing stocks and making trading decisions using the multi-agent system.
Installation¶
The CLI is installed automatically when you install the Maricusco package:
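For example, with pip (assuming the distribution is published under the package name `maricusco`; adjust for your package manager):

```shell
pip install maricusco
```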
Basic Usage¶
Interactive Mode¶
Run the CLI without arguments to enter interactive mode:
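For example, from a shell where the package is installed:

```shell
maricusco
```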
The interactive mode will prompt you for:
- Ticker Symbol: Stock ticker to analyze (e.g., AAPL, MSFT, TSLA)
- Analysis Date: Date for analysis (YYYY-MM-DD format)
- Analyst Selection: Which analysts to run (Technical, Fundamentals, Sentiment, News)
- LLM Provider: Which LLM to use (OpenAI, Anthropic, Google, Ollama, OpenRouter)
- LLM Model: Specific model to use
- Research Depth: Number of debate rounds
Command-Line Arguments¶
```shell
# Show help
maricusco --help

# Show version
maricusco --version

# Run with specific options (future enhancement)
maricusco --ticker AAPL --date 2024-12-01 --analysts technical,fundamentals
```
Interactive Prompts¶
1. Ticker Selection¶
- Enter a valid stock ticker symbol
- Case-insensitive (AAPL = aapl)
- Must be available in configured data vendors
2. Date Selection¶
- Format: YYYY-MM-DD (e.g., 2024-12-01)
- Press Enter to use current date
- Date must be valid and not in the future
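The rules above can be sketched as a small validator. This is illustrative only; `validate_analysis_date` is not the CLI's actual input handler:

```python
from datetime import date, datetime

def validate_analysis_date(text: str) -> date:
    """Apply the documented rules: YYYY-MM-DD format, valid, not in the future."""
    parsed = datetime.strptime(text, "%Y-%m-%d").date()  # raises ValueError on bad input
    if parsed > date.today():
        raise ValueError("analysis date cannot be in the future")
    return parsed
```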
3. Analyst Selection¶
Select analysts to run:

```
[x] Technical Analyst
[x] Fundamentals Analyst
[x] Sentiment Analyst
[x] News Analyst
```
- Use arrow keys to navigate
- Press Space to toggle selection
- Press Enter to confirm
- At least one analyst must be selected
Analyst Descriptions:
- Technical Analyst: Analyzes price patterns, trends, and technical indicators (MACD, RSI, moving averages)
- Fundamentals Analyst: Evaluates company financials and performance metrics (P/E ratio, EPS, revenue)
- Sentiment Analyst: Analyzes social media and public sentiment from Reddit, Twitter, news sources
- News Analyst: Monitors global news and macroeconomic indicators
4. LLM Provider Selection¶
- Use arrow keys to navigate
- Press Enter to select
- Requires corresponding API key in environment
Provider Requirements:
| Provider | API Key Environment Variable | Notes |
|---|---|---|
| OpenAI | OPENAI_API_KEY | GPT-4, GPT-3.5-turbo |
| Anthropic | ANTHROPIC_API_KEY | Claude models |
| Google | GOOGLE_API_KEY | Gemini models |
| Ollama | None | Local models, requires Ollama server |
| OpenRouter | OPENROUTER_API_KEY | Access to multiple models |
5. Model Selection¶
- Available models depend on selected provider
- Use arrow keys to navigate
- Press Enter to select
Recommended Models:
- GPT-4: Best quality, higher cost
- GPT-3.5-turbo: Good balance of quality and cost
- Claude-3-opus: Excellent reasoning, comparable to GPT-4
- Claude-3-sonnet: Good balance, faster than opus
- Gemini-pro: Google's flagship model
6. Research Depth¶
- Quick (1 round): Fast analysis, less thorough
- Standard (2 rounds): Balanced analysis (recommended)
- Deep (3 rounds): Comprehensive analysis, slower
More rounds = more thorough analysis but higher cost and longer execution time.
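As an illustration, the depth choice maps naturally onto the max_debate_rounds config key. The mapping below is a hypothetical sketch (`apply_research_depth` is not part of the package), not the CLI's internal code:

```python
# Quick = 1 round, Standard = 2 rounds, Deep = 3 rounds, per the list above.
DEPTH_TO_ROUNDS = {"quick": 1, "standard": 2, "deep": 3}

def apply_research_depth(config: dict, depth: str) -> dict:
    cfg = dict(config)  # copy so the caller's config is not mutated
    cfg["max_debate_rounds"] = DEPTH_TO_ROUNDS[depth]
    return cfg
```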
Output¶
Progress Display¶
The CLI displays real-time progress during execution:
```
🚀 Starting Maricusco Trading Analysis

📊 Configuration:
  Ticker: AAPL
  Date: 2024-12-01
  Analysts: Technical, Fundamentals, Sentiment, News
  LLM: OpenAI (gpt-4)
  Debate Rounds: 2
  Execution: Parallel

⏳ Running analysts...
  ✓ Technical Analyst (2.3s)
  ✓ Fundamentals Analyst (1.8s)
  ✓ Sentiment Analyst (1.5s)
  ✓ News Analyst (2.1s)

⏳ Running research debate...
  ✓ Bull Researcher (3.2s)
  ✓ Bear Researcher (3.4s)
  ✓ Research Manager (2.8s)

⏳ Running trader analysis...
  ✓ Trader Agent (4.1s)

⏳ Running risk assessment...
  ✓ Risk Debate (5.3s)
  ✓ Risk Manager (3.7s)

✅ Analysis complete! (25.2s total)
```
Report Generation¶
Reports are saved to data/results/<TICKER>/<DATE>/reports/:
```
data/results/AAPL/2024-12-01/reports/
├── technical_report.md
├── fundamentals_report.md
├── sentiment_report.md
├── news_report.md
├── investment_plan.md
├── trader_investment_plan.md
└── final_trade_decision.md
```
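When post-processing results, the documented layout can be reproduced programmatically. A sketch assuming the default data/results root (`reports_dir` is an illustrative helper, not part of the package API):

```python
from pathlib import Path

def reports_dir(ticker: str, analysis_date: str,
                results_root: str = "data/results") -> Path:
    # Mirrors the documented layout: data/results/<TICKER>/<DATE>/reports/
    return Path(results_root) / ticker.upper() / analysis_date / "reports"
```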
Final Decision Display¶
The CLI displays the final trading decision:
```
╔══════════════════════════════════════════════════════════════╗
║                      TRADING DECISION                        ║
╚══════════════════════════════════════════════════════════════╝

Ticker: AAPL
Date: 2024-12-01
Decision: BUY

Recommendation:
Strong buy signal based on technical breakout, solid fundamentals,
and positive sentiment. Entry at $175.50 with target $185.00 and
stop-loss at $170.00.

Risk Assessment: MODERATE
Position Size: 2% of portfolio
Confidence: HIGH

Reports saved to: data/results/AAPL/2024-12-01/reports/
```
Mock Mode¶
Run the CLI without API costs using mock mode:
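Using the MARICUSCO_MOCK_MODE environment variable (documented under Configuration below):

```shell
export MARICUSCO_MOCK_MODE=true
maricusco
```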
Mock mode provides:

- Predefined realistic responses for all agents
- No API calls or costs
- Instant execution (no network latency)
- Deterministic results for testing
See Mock Mode documentation for complete details.
Configuration¶
Environment Variables¶
Configure the CLI behavior using environment variables:
```shell
# Mock Mode
export MARICUSCO_MOCK_MODE=true
export MARICUSCO_MOCK_RESPONSES=/path/to/custom-responses.json
export MARICUSCO_MOCK_AGENT_DELAYS_MS="technical:800,sentiment:400"

# Logging
export MARICUSCO_LOG_LEVEL=DEBUG
export MARICUSCO_LOG_FORMAT=human-readable
export MARICUSCO_LOG_CONSOLE=true
export MARICUSCO_LOG_FILE_ENABLED=false

# Data Directories
export MARICUSCO_RESULTS_DIR=data/results
export MARICUSCO_DATA_DIR=data

# LLM Configuration
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
export GOOGLE_API_KEY=your_key_here

# Data Vendors
export ALPHA_VANTAGE_API_KEY=your_key_here
```
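The MARICUSCO_MOCK_AGENT_DELAYS_MS value is an agent:milliseconds list. A sketch of parsing that documented format (`parse_agent_delays` is illustrative, not a package function):

```python
def parse_agent_delays(spec: str) -> dict:
    """Parse e.g. "technical:800,sentiment:400" into {"technical": 800, "sentiment": 400}."""
    delays = {}
    for item in spec.split(","):
        agent, _, ms = item.partition(":")
        delays[agent.strip()] = int(ms)  # delay in milliseconds
    return delays
```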
Configuration File¶
Default configuration is in maricusco/config/settings.py:
```python
DEFAULT_CONFIG = {
    "parallel_execution": True,
    "max_debate_rounds": 1,        # Default: 1
    "max_risk_discuss_rounds": 1,  # Default: 1
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",
    "llm_temperature": 0.7,
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage",
    },
}
```
Advanced Usage¶
Custom Analyst Selection¶
Run with specific analysts only:
```python
from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True

# Run with only technical and fundamentals analysts
graph = MaricuscoGraph(
    selected_analysts=["technical", "fundamentals"],
    config=config,
)

final_state, decision = graph.propagate("AAPL", "2024-12-01")
print(f"Decision: {decision}")
```
Custom LLM Configuration¶
```python
from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"
config["llm_model"] = "claude-3-opus-20240229"
config["llm_temperature"] = 0.5
config["max_debate_rounds"] = 3

graph = MaricuscoGraph(config=config)
final_state, decision = graph.propagate("AAPL", "2024-12-01")
```
Batch Analysis¶
Analyze multiple tickers programmatically:
```python
from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

tickers = ["AAPL", "MSFT", "TSLA", "GOOGL"]
date = "2024-12-01"

config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True  # Use mock mode for batch analysis

results = {}
for ticker in tickers:
    graph = MaricuscoGraph(config=config)
    final_state, decision = graph.propagate(ticker, date)
    results[ticker] = {
        "decision": decision,
        "final_plan": final_state.get("final_trade_decision"),
    }

# Display results
for ticker, result in results.items():
    print(f"{ticker}: {result['decision']}")
```
Troubleshooting¶
API Key Errors¶
Solution:
```shell
# Set API key
export OPENAI_API_KEY=your_key_here

# Verify it's set
echo $OPENAI_API_KEY

# Or use mock mode
export MARICUSCO_MOCK_MODE=true
```
Invalid Ticker Symbol¶
Solution:

- Verify the ticker symbol is correct
- Check that the ticker is available in the configured data vendor
- Try a different data vendor
Rate Limit Errors¶
Solution:

- Wait for the rate limit to reset (typically 1 minute)
- Use a different data vendor
- Enable caching to reduce API calls
- Use mock mode for development
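When retrying is acceptable, a generic exponential-backoff wrapper can smooth over vendor rate limits. This is an illustrative helper, not part of Maricusco, and it assumes the vendor call raises RuntimeError when rate-limited:

```python
import time

def call_with_backoff(fn, retries=3, base_delay=1.0):
    """Retry fn() with exponential backoff: base_delay, 2x, 4x, ..."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # substitute your vendor's rate-limit exception
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt)
```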
Memory Errors¶
Solution:
```shell
# Start ChromaDB service
docker-compose up -d chromadb

# Verify service is running
docker-compose ps chromadb

# Check logs
docker-compose logs chromadb
```
Slow Execution¶
Symptoms: Analysis takes >60 seconds
Solutions:

1. Enable parallel execution (the default): "parallel_execution": True
2. Reduce debate rounds: "max_debate_rounds": 1
3. Use a faster LLM model, such as gpt-3.5-turbo
4. Enable caching to cut repeat data-vendor calls
5. Use mock mode for testing: export MARICUSCO_MOCK_MODE=true
Debug Mode¶
Enable debug logging for troubleshooting:
```shell
# Enable debug logging
export MARICUSCO_LOG_LEVEL=DEBUG
export MARICUSCO_LOG_FORMAT=human-readable

# Run CLI
maricusco

# View logs
tail -f maricusco/logs/maricusco.log
```
Performance Tips¶
Optimize for Speed¶
```python
config = {
    "parallel_execution": True,    # 2-4x speedup
    "max_debate_rounds": 1,        # Fewer debate rounds
    "max_risk_discuss_rounds": 1,
    "llm_model": "gpt-3.5-turbo",  # Faster model
    "llm_temperature": 0.5,        # More deterministic output
}
```
Optimize for Quality¶
```python
config = {
    "parallel_execution": True,  # Still use parallel for speed
    "max_debate_rounds": 3,      # More thorough debate
    "max_risk_discuss_rounds": 2,
    "llm_model": "gpt-4",        # Best quality model
    "llm_temperature": 0.7,      # Higher temperature = more creative
}
```
Optimize for Cost¶
```python
config = {
    "mock_mode": True,  # No API costs
    # Or use cheaper models:
    "llm_model": "gpt-3.5-turbo",
    "max_debate_rounds": 1,
}
```
Examples¶
Usage Examples¶
Quick Analysis (Mock Mode):
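A minimal sketch, combining mock mode with the interactive CLI:

```shell
export MARICUSCO_MOCK_MODE=true
maricusco
```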
Production Analysis:

```shell
export OPENAI_API_KEY=your_key_here
maricusco
# Select: TSLA, specific date, all analysts, standard depth
```
Next Steps¶
- Read Configuration Reference for detailed configuration options
- Read Mock Mode for cost-free development
- Read Architecture to understand the system
- Read Developer Onboarding for development setup