
CLI Usage Guide

The Maricusco CLI provides an interactive terminal interface for analyzing stocks and making trading decisions using the multi-agent system.

Installation

The CLI is installed automatically when you install the Maricusco package:

# Install from source
uv sync --locked

# Verify installation
maricusco --help

Basic Usage

Interactive Mode

Run the CLI without arguments to enter interactive mode:

# Start interactive CLI
maricusco

# Or with Python module syntax
python -m cli.main

The interactive mode will prompt you for:

  1. Ticker Symbol: Stock ticker to analyze (e.g., AAPL, MSFT, TSLA)
  2. Analysis Date: Date for analysis (YYYY-MM-DD format)
  3. Analyst Selection: Which analysts to run (Technical, Fundamentals, Sentiment, News)
  4. LLM Provider: Which LLM to use (OpenAI, Anthropic, Google, Ollama, OpenRouter)
  5. LLM Model: Specific model to use
  6. Research Depth: Number of debate rounds
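The answers to these prompts ultimately become plain run settings. A minimal stdlib sketch of that mapping (the helper and its validation are illustrative assumptions; the key names mirror DEFAULT_CONFIG and MaricuscoGraph shown later in this guide):

```python
# Sketch: how the interactive answers might map onto run settings.
# Key names mirror DEFAULT_CONFIG / MaricuscoGraph; validation is illustrative.
from datetime import date

def build_run_settings(ticker, analysis_date=None, analysts=None,
                       provider="openai", model="gpt-4", depth=2):
    if analysts is None:
        analysts = ["technical", "fundamentals", "sentiment", "news"]
    if not analysts:
        raise ValueError("At least one analyst must be selected")
    return {
        "ticker": ticker.upper(),                      # ticker is case-insensitive
        "date": analysis_date or date.today().isoformat(),
        "selected_analysts": analysts,
        "llm_provider": provider,
        "llm_model": model,
        "max_debate_rounds": depth,                    # research depth
    }

settings = build_run_settings("aapl", "2024-12-01", depth=2)
print(settings["ticker"])   # → AAPL
```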

Command-Line Arguments

# Show help
maricusco --help

# Show version
maricusco --version

# Run with specific options (future enhancement)
maricusco --ticker AAPL --date 2024-12-01 --analysts technical,fundamentals

Interactive Prompts

1. Ticker Selection

Enter stock ticker symbol (e.g., AAPL, MSFT, TSLA): AAPL
  • Enter a valid stock ticker symbol
  • Case-insensitive (AAPL = aapl)
  • Must be available in configured data vendors

2. Date Selection

Enter analysis date (YYYY-MM-DD) or press Enter for today: 2024-12-01
  • Format: YYYY-MM-DD (e.g., 2024-12-01)
  • Press Enter to use current date
  • Date must be valid and not in the future

3. Analyst Selection

Select analysts to run:
  [x] Technical Analyst
  [x] Fundamentals Analyst
  [x] Sentiment Analyst
  [x] News Analyst
  • Use arrow keys to navigate
  • Press Space to toggle selection
  • Press Enter to confirm
  • At least one analyst must be selected

Analyst Descriptions:

  • Technical Analyst: Analyzes price patterns, trends, and technical indicators (MACD, RSI, moving averages)
  • Fundamentals Analyst: Evaluates company financials and performance metrics (P/E ratio, EPS, revenue)
  • Sentiment Analyst: Analyzes social media and public sentiment from Reddit, Twitter, news sources
  • News Analyst: Monitors global news and macroeconomic indicators

4. LLM Provider Selection

Select LLM provider:
  > OpenAI
    Anthropic
    Google
    Ollama
    OpenRouter
  • Use arrow keys to navigate
  • Press Enter to select
  • Requires corresponding API key in environment

Provider Requirements:

  Provider    API Key Environment Variable   Notes
  OpenAI      OPENAI_API_KEY                 GPT-4, GPT-3.5-turbo
  Anthropic   ANTHROPIC_API_KEY              Claude models
  Google      GOOGLE_API_KEY                 Gemini models
  Ollama      None                           Local models, requires Ollama server
  OpenRouter  OPENROUTER_API_KEY             Access to multiple models
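A pre-flight check for the required key can be sketched with the stdlib alone (the mapping mirrors the table above; the helper itself is an illustration, not part of the CLI):

```python
# Sketch: verify the required API key is set before launching the CLI.
# Ollama runs locally and needs no key.
import os

REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "ollama": None,
    "openrouter": "OPENROUTER_API_KEY",
}

def missing_key(provider):
    """Return the unset env-var name for a provider, or None if satisfied."""
    var = REQUIRED_KEYS[provider]
    if var and not os.environ.get(var):
        return var
    return None

print(missing_key("ollama"))   # → None
```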

5. Model Selection

Select model:
  > gpt-4
    gpt-3.5-turbo
    gpt-4-turbo-preview
  • Available models depend on selected provider
  • Use arrow keys to navigate
  • Press Enter to select

Recommended Models:

  • GPT-4: Best quality, higher cost
  • GPT-3.5-turbo: Good balance of quality and cost
  • Claude-3-opus: Excellent reasoning, comparable to GPT-4
  • Claude-3-sonnet: Good balance, faster than opus
  • Gemini-pro: Google's flagship model

6. Research Depth

Select research depth (debate rounds):
  > Quick (1 round)
    Standard (2 rounds)
    Deep (3 rounds)
  • Quick (1 round): Fast analysis, less thorough
  • Standard (2 rounds): Balanced analysis (recommended)
  • Deep (3 rounds): Comprehensive analysis, slower

More rounds mean a more thorough analysis, but also higher cost and longer execution time.

Output

Progress Display

The CLI displays real-time progress during execution:

🚀 Starting Maricusco Trading Analysis

📊 Configuration:
  Ticker: AAPL
  Date: 2024-12-01
  Analysts: Technical, Fundamentals, Sentiment, News
  LLM: OpenAI (gpt-4)
  Debate Rounds: 2
  Execution: Parallel

⏳ Running analysts...
  ✓ Technical Analyst (2.3s)
  ✓ Fundamentals Analyst (1.8s)
  ✓ Sentiment Analyst (1.5s)
  ✓ News Analyst (2.1s)

⏳ Running research debate...
  ✓ Bull Researcher (3.2s)
  ✓ Bear Researcher (3.4s)
  ✓ Research Manager (2.8s)

⏳ Running trader analysis...
  ✓ Trader Agent (4.1s)

⏳ Running risk assessment...
  ✓ Risk Debate (5.3s)
  ✓ Risk Manager (3.7s)

✅ Analysis complete! (25.2s total)

Report Generation

Reports are saved to data/results/<TICKER>/<DATE>/reports/:

data/results/AAPL/2024-12-01/reports/
├── technical_report.md
├── fundamentals_report.md
├── sentiment_report.md
├── news_report.md
├── investment_plan.md
├── trader_investment_plan.md
└── final_trade_decision.md
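Locating these reports programmatically needs only the stdlib. A sketch that follows the data/results/<TICKER>/<DATE>/reports/ layout shown above:

```python
# Sketch: collect generated report files for a run (stdlib only).
from pathlib import Path

def report_paths(ticker, date, results_dir="data/results"):
    reports = Path(results_dir) / ticker / date / "reports"
    return sorted(reports.glob("*.md")) if reports.is_dir() else []

for path in report_paths("AAPL", "2024-12-01"):
    print(path.name)
```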

Final Decision Display

The CLI displays the final trading decision:

╔══════════════════════════════════════════════════════════════╗
║                    TRADING DECISION                          ║
╚══════════════════════════════════════════════════════════════╝

Ticker: AAPL
Date: 2024-12-01
Decision: BUY

Recommendation:
Strong buy signal based on technical breakout, solid fundamentals,
and positive sentiment. Entry at $175.50 with target $185.00 and
stop-loss at $170.00.

Risk Assessment: MODERATE
Position Size: 2% of portfolio
Confidence: HIGH

Reports saved to: data/results/AAPL/2024-12-01/reports/

Mock Mode

Run the CLI without API costs using mock mode:

# Enable mock mode
export MARICUSCO_MOCK_MODE=true

# Run CLI
maricusco

Mock mode provides:

  • Predefined realistic responses for all agents
  • No API calls or costs
  • Instant execution (no network latency)
  • Deterministic results for testing

See Mock Mode documentation for complete details.

Configuration

Environment Variables

Configure the CLI behavior using environment variables:

# Mock Mode
export MARICUSCO_MOCK_MODE=true
export MARICUSCO_MOCK_RESPONSES=/path/to/custom-responses.json
export MARICUSCO_MOCK_AGENT_DELAYS_MS="technical:800,sentiment:400"

# Logging
export MARICUSCO_LOG_LEVEL=DEBUG
export MARICUSCO_LOG_FORMAT=human-readable
export MARICUSCO_LOG_CONSOLE=true
export MARICUSCO_LOG_FILE_ENABLED=false

# Data Directories
export MARICUSCO_RESULTS_DIR=data/results
export MARICUSCO_DATA_DIR=data

# LLM Configuration
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
export GOOGLE_API_KEY=your_key_here

# Data Vendors
export ALPHA_VANTAGE_API_KEY=your_key_here

Configuration File

Default configuration is in maricusco/config/settings.py:

DEFAULT_CONFIG = {
    "parallel_execution": True,
    "max_debate_rounds": 1,  # Default: 1
    "max_risk_discuss_rounds": 1,  # Default: 1
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",
    "llm_temperature": 0.7,
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage",
    },
}

Advanced Usage

Custom Analyst Selection

Run with specific analysts only:

from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True

# Run with only technical and fundamentals analysts
graph = MaricuscoGraph(
    selected_analysts=["technical", "fundamentals"],
    config=config
)

final_state, decision = graph.propagate("AAPL", "2024-12-01")
print(f"Decision: {decision}")

Custom LLM Configuration

from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"
config["llm_model"] = "claude-3-opus-20240229"
config["llm_temperature"] = 0.5
config["max_debate_rounds"] = 3

graph = MaricuscoGraph(config=config)
final_state, decision = graph.propagate("AAPL", "2024-12-01")

Batch Analysis

Analyze multiple tickers programmatically:

from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG

tickers = ["AAPL", "MSFT", "TSLA", "GOOGL"]
date = "2024-12-01"

config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True  # Use mock mode for batch analysis

results = {}
for ticker in tickers:
    graph = MaricuscoGraph(config=config)
    final_state, decision = graph.propagate(ticker, date)
    results[ticker] = {
        "decision": decision,
        "final_plan": final_state.get("final_trade_decision"),
    }

# Display results
for ticker, result in results.items():
    print(f"{ticker}: {result['decision']}")
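A natural follow-up is tallying the batch decisions. A stdlib sketch, assuming each decision string is one of BUY / SELL / HOLD as in the examples above:

```python
# Sketch: summarize batch results by decision (assumes BUY/SELL/HOLD strings).
from collections import Counter

def summarize(results):
    return Counter(r["decision"] for r in results.values())

results = {
    "AAPL": {"decision": "BUY"},
    "MSFT": {"decision": "HOLD"},
    "TSLA": {"decision": "BUY"},
}
print(summarize(results))   # → Counter({'BUY': 2, 'HOLD': 1})
```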

Troubleshooting

API Key Errors

Error: OpenAI API key not found

Solution:

# Set API key
export OPENAI_API_KEY=your_key_here

# Verify it's set
echo $OPENAI_API_KEY

# Or use mock mode
export MARICUSCO_MOCK_MODE=true

Invalid Ticker Symbol

Error: Ticker 'XYZ' not found

Solution:

  • Verify the ticker symbol is correct
  • Check that the ticker is available in the configured data vendor
  • Try a different data vendor

Rate Limit Errors

Error: Rate limit exceeded for Alpha Vantage API

Solution:

  • Wait for the rate limit to reset (typically 1 minute)
  • Use a different data vendor
  • Enable caching to reduce API calls
  • Use mock mode for development

Memory Errors

Error: ChromaDB connection failed

Solution:

# Start ChromaDB service
docker-compose up -d chromadb

# Verify service is running
docker-compose ps chromadb

# Check logs
docker-compose logs chromadb

Slow Execution

Symptoms: Analysis takes >60 seconds

Solutions:

  1. Enable parallel execution (default):

     config["parallel_execution"] = True

  2. Reduce debate rounds:

     config["max_debate_rounds"] = 1
     config["max_risk_discuss_rounds"] = 1

  3. Use a faster LLM model:

     config["llm_model"] = "gpt-3.5-turbo"  # Instead of gpt-4

  4. Enable caching:

     docker-compose up -d redis

  5. Use mock mode for testing:

     export MARICUSCO_MOCK_MODE=true

Debug Mode

Enable debug logging for troubleshooting:

# Enable debug logging
export MARICUSCO_LOG_LEVEL=DEBUG
export MARICUSCO_LOG_FORMAT=human-readable

# Run CLI
maricusco

# View logs
tail -f maricusco/logs/maricusco.log

Performance Tips

Optimize for Speed

config = {
    "parallel_execution": True,        # 2-4x speedup
    "max_debate_rounds": 1,            # Reduce debate rounds
    "max_risk_discuss_rounds": 1,
    "llm_model": "gpt-3.5-turbo",     # Faster model
    "llm_temperature": 0.5,            # Lower temperature = more focused output
}

Optimize for Quality

config = {
    "parallel_execution": True,        # Still use parallel for speed
    "max_debate_rounds": 3,            # More thorough debate
    "max_risk_discuss_rounds": 2,
    "llm_model": "gpt-4",              # Best quality model
    "llm_temperature": 0.7,            # Higher temperature = more creative
}

Optimize for Cost

config = {
    "mock_mode": True,                 # No API costs
    # Or use cheaper models:
    "llm_model": "gpt-3.5-turbo",
    "max_debate_rounds": 1,
}
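The three profiles above can be bundled as presets. The keys mirror DEFAULT_CONFIG; the preset helper itself is a hypothetical convenience, not part of the package:

```python
# Sketch: preset configs for the three optimization goals (keys mirror DEFAULT_CONFIG).
PRESETS = {
    "speed":   {"parallel_execution": True, "max_debate_rounds": 1,
                "max_risk_discuss_rounds": 1, "llm_model": "gpt-3.5-turbo"},
    "quality": {"parallel_execution": True, "max_debate_rounds": 3,
                "max_risk_discuss_rounds": 2, "llm_model": "gpt-4"},
    "cost":    {"mock_mode": True, "max_debate_rounds": 1},
}

def preset(goal, **overrides):
    """Return a copy of the named preset with any overrides applied."""
    cfg = dict(PRESETS[goal])
    cfg.update(overrides)
    return cfg
```

For example, `preset("speed", max_debate_rounds=2)` keeps the fast model while allowing a deeper debate.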

Examples

Usage Examples

Quick Analysis (Mock Mode):

export MARICUSCO_MOCK_MODE=true
maricusco
# Select: AAPL, today's date, all analysts, quick depth

Production Analysis:

export OPENAI_API_KEY=your_key_here
maricusco
# Select: TSLA, specific date, all analysts, standard depth

Next Steps