Mock Mode for Agent Testing¶
Cost-Free Development
Mock Mode allows you to run the entire Maricusco trading framework without making any LLM API calls, eliminating costs during development and testing.
Overview¶
Mock Mode is a comprehensive testing and development feature that replaces expensive LLM API calls with realistic, predefined responses. This enables:
- Zero API costs during development
- Instant responses without network latency
- Offline development without internet connectivity
- Deterministic testing with consistent outputs
- CI/CD compatibility without requiring API keys
Quick Start¶
The simplest way to enable mock mode is to set the environment variable before launching:

export MARICUSCO_MOCK_MODE=true

Alternatively, enable it programmatically:
from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG
# Create config with mock mode enabled
config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True
# Initialize graph - uses mock LLMs
graph = MaricuscoGraph(config=config)
# Run analysis without API calls
final_state, decision = graph.propagate("AAPL", "2024-12-01")
Development Best Practice
Always develop with mock mode enabled by default. Only disable it when you need to validate with real LLM APIs.
What Gets Mocked¶
LLM Calls¶
All agent LLM interactions are replaced with realistic, predefined responses:
| Agent Type | Mock Response Includes |
|---|---|
| Technical Analyst | Moving averages, RSI, MACD, volume analysis, support/resistance levels, summary tables |
| Fundamentals Analyst | P/E ratio, EPS, revenue, market cap, financial health metrics, growth rates |
| Sentiment Analyst | Social media sentiment scores, platform breakdowns, key themes, sentiment distribution |
| News Analyst | Recent headlines, source attribution, impact assessment, overall sentiment |
| Bull Researcher | Growth potential arguments, competitive advantages, positive indicators |
| Bear Researcher | Risk concerns, valuation issues, competitive threats, bearish signals |
| Research Manager | Synthesized recommendations, consensus view, key considerations |
| Risk Manager | Portfolio impact, risk metrics, mitigation strategies, position sizing |
| Trader | Final decisions (BUY/SELL/HOLD), entry/exit parameters, risk controls |
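To inspect one of these canned responses directly, you can instantiate the mock LLM for an agent type yourself using FakeLLM (documented in the API Reference below):

```python
from maricusco.agents.utils.mock_llm import FakeLLM

# Pick the agent type whose predefined response you want to see
llm = FakeLLM(agent_type="technical_analyst")
response = llm.invoke("Analyze AAPL as of 2024-12-01")
print(response.content)  # the predefined technical-analysis report
```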
Memory & Embeddings¶
The memory system is also mocked to avoid embedding API calls:
- Uses keyword matching instead of semantic search
- Persists to local JSON files instead of ChromaDB
- Can be preloaded with example trading scenarios
- No API calls to embedding services
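To make the keyword-matching behaviour concrete, here is a minimal sketch of keyword-overlap retrieval. It illustrates the general approach only; the framework's actual MockFinancialSituationMemory implementation may differ:

```python
def keyword_match(query: str, memories: list[dict], n_matches: int = 2) -> list[dict]:
    """Illustrative sketch: rank stored memories by keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(mem["situation"].lower().split())), mem)
        for mem in memories
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest overlap first
    return [mem for score, mem in scored[:n_matches] if score > 0]
```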
Architecture¶
graph TD
A[MaricuscoGraph Init] --> B{Check mock_mode config?}
B -->|True| C[Create FakeLLM instances]
B -->|False| D[Create Real LLM instances]
C --> E[Create MockMemory instances]
D --> F[Create Real Memory instances]
E --> G[Run Analysis]
F --> G
G --> H{Mock Mode Active?}
H -->|Yes| I["Return Predefined Responses<br/>Instant - Zero Cost"]
H -->|No| J["Call Real APIs<br/>Network Delay - API Costs"]
Configuration Options¶
Core Settings¶
config = {
    # Enable/disable mock mode
    "mock_mode": False,  # Set to True or use MARICUSCO_MOCK_MODE=true

    # Optional: Path to custom responses JSON file
    "mock_llm_responses_file": None,

    # Whether to preload example memories
    "mock_memory_preloaded": True,
}
Environment Variables¶
# Enable mock mode
export MARICUSCO_MOCK_MODE=true
# Optional: Custom responses file
export MARICUSCO_MOCK_RESPONSES=/path/to/custom-responses.json
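If you need to resolve these variables yourself (for example in a custom entry point), a straightforward reading looks like the sketch below. How the framework itself parses them may differ, so treat the parsing rules as assumptions:

```python
import os

from maricusco.config.settings import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
# Assumed parsing: any case-insensitive "true" enables mock mode
config["mock_mode"] = os.getenv("MARICUSCO_MOCK_MODE", "").lower() == "true"
responses_file = os.getenv("MARICUSCO_MOCK_RESPONSES")
if responses_file:
    config["mock_llm_responses_file"] = responses_file
```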
Usage Examples¶
Basic Testing¶
import pytest
from maricusco.orchestration.trading_graph import MaricuscoGraph
from maricusco.config.settings import DEFAULT_CONFIG
def test_trading_analysis():
    """Test complete trading analysis in mock mode."""
    config = DEFAULT_CONFIG.copy()
    config["mock_mode"] = True
    config["max_debate_rounds"] = 1

    graph = MaricuscoGraph(
        selected_analysts=["technical", "fundamentals"],
        config=config,
    )

    final_state, decision = graph.propagate("AAPL", "2024-12-01")

    assert decision in ["BUY", "SELL", "HOLD"]
    assert final_state["final_trade_decision"] is not None
Testing Individual Agents¶
from maricusco.agents.utils.mock_llm import FakeLLM
from maricusco.agents.researchers.bull_researcher import create_bull_researcher
from maricusco.agents.utils.mock_memory import create_mock_memory
def test_bull_researcher():
    """Test bull researcher with mock components."""
    # Create mock LLM and memory
    config = {"data_cache_dir": "/tmp/test_cache"}
    mock_llm = FakeLLM(agent_type="bull_researcher")
    mock_memory = create_mock_memory("test_bull", config, preloaded=True)

    # Create agent
    bull_agent = create_bull_researcher(mock_llm, mock_memory)

    # Test state
    state = {
        "investment_debate_state": {
            "history": "",
            "bull_history": "",
            "bear_history": "",
            "current_response": "",
            "count": 0,
        },
        "technical_report": "Strong uptrend, RSI at 65",
        "sentiment_report": "Positive sentiment",
        "news_report": "Strong earnings announced",
        "fundamentals_report": "Solid financials",
    }

    # Execute
    result = bull_agent(state)

    # Verify
    assert "Bull Analyst:" in result["investment_debate_state"]["current_response"]
    assert result["investment_debate_state"]["count"] == 1
Varied Responses for Debates¶
For multi-round debates, use FakeLLMWithVariation to cycle through different responses:
from maricusco.agents.utils.mock_llm import FakeLLMWithVariation
# Define different responses for each round
variations = [
    "Round 1: Initial bullish argument focusing on growth...",
    "Round 2: Addressing bear concerns with data...",
    "Round 3: Final bull position with risk mitigation...",
]

mock_llm = FakeLLMWithVariation(
    agent_type="bull_researcher",
    variations=variations,
)

# Each invoke returns the next variation
response1 = mock_llm.invoke("Debate round 1")
response2 = mock_llm.invoke("Debate round 2")
response3 = mock_llm.invoke("Debate round 3")

assert response1.content == variations[0]
assert response2.content == variations[1]
assert response3.content == variations[2]
Custom Responses¶
You can provide your own mock responses via a JSON file:
1. Create Response File¶
Create custom-responses.json:
{
  "technical_analyst": {
    "content": "## Custom Technical Analysis\n\n**Trend**: Testing custom response\n\n**Indicators**:\n- RSI: 55 (Neutral)\n- MACD: Bullish crossover\n\n**Recommendation**: Monitor for breakout",
    "tool_calls": null
  },
  "bull_researcher": {
    "content": "Custom bull argument: Strong momentum and positive fundamentals suggest upside potential...",
    "tool_calls": null
  },
  "bear_researcher": {
    "content": "Custom bear argument: Valuation concerns and technical resistance may limit gains...",
    "tool_calls": null
  }
}
2. Use Custom Responses¶
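Point the framework at your file via the mock_llm_responses_file config key (or the MARICUSCO_MOCK_RESPONSES environment variable shown earlier):

```python
from maricusco.config.settings import DEFAULT_CONFIG
from maricusco.orchestration.trading_graph import MaricuscoGraph

config = DEFAULT_CONFIG.copy()
config["mock_mode"] = True
config["mock_llm_responses_file"] = "custom-responses.json"

# The graph's mock LLMs now answer with your custom responses
graph = MaricuscoGraph(config=config)
```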
Mock Memory System¶
The mock memory system provides a lightweight alternative to real embeddings:
Features¶
- Keyword Matching: Uses simple keyword overlap instead of semantic search
- Local Persistence: Stores memories in JSON files
- Preloaded Examples: Comes with 5 realistic trading scenarios
- No API Calls: Zero embedding service usage
Usage¶
from maricusco.agents.utils.mock_memory import create_mock_memory
config = {"data_cache_dir": "/tmp/cache"}
# Create with preloaded examples
memory = create_mock_memory("bull_memory", config, preloaded=True)
# Retrieve relevant memories
memories = memory.get_memories(
    "high RSI overbought technical indicator",
    n_matches=2,
)

for mem in memories:
    print(f"Situation: {mem['situation']}")
    print(f"Recommendation: {mem['recommendation']}")
    print(f"Outcome: {mem['outcome']}")
Add Custom Memories¶
# Add your own trading scenarios
memory.add_memory(
    situation="Bullish breakout with increasing volume",
    recommendation="Entered position at $150 with stop at $145",
    outcome="Stock rallied to $165, took profits at target",
    metadata={"ticker": "AAPL", "date": "2024-01-15", "gain": "10%"},
)
# Later retrieve when similar situation occurs
relevant = memory.get_memories("bullish breakout volume", n_matches=1)
Performance Comparison¶
Speed¶
| Operation | Real Mode | Mock Mode | Speedup |
|---|---|---|---|
| Single LLM Call | 50-100ms | <1ms | 50-100x |
| Technical Analysis | ~10 seconds | ~0.1 seconds | 100x |
| Full Analysis | 30-60 seconds | 1-2 seconds | 30-60x |
| Test Suite (100 tests) | Hours | Seconds | >100x |
Cost¶
| Activity | Real Mode | Mock Mode | Savings |
|---|---|---|---|
| Single Analysis | $0.10 - $1.00 | $0.00 | 100% |
| 100 Dev Iterations | $10 - $100 | $0.00 | 100% |
| Daily Development | $50 - $500 | $0.00 | 100% |
CI/CD Integration¶
Mock mode is perfect for continuous integration:
GitHub Actions Example¶
name: Test Suite

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          uv sync --locked --dev

      - name: Run tests in mock mode
        env:
          MARICUSCO_MOCK_MODE: true
        run: |
          pytest -v --cov=maricusco
No API Keys Required
When using mock mode in CI/CD, you don't need to configure any LLM API keys as secrets.
Testing Strategies¶
Progressive Testing Approach¶
graph LR
A["Unit Tests<br/>Always Mock"] --> B["Integration Tests<br/>Mock by Default"]
B --> C["E2E Tests<br/>Selective Real APIs"]
C --> D["Production<br/>Real APIs Only"]
style A fill:#4CAF50,stroke:#2E7D32,stroke-width:2px
style B fill:#8BC34A,stroke:#558B2F,stroke-width:2px
style C fill:#FFC107,stroke:#F57F17,stroke-width:2px
style D fill:#FF5722,stroke:#D84315,stroke-width:2px
Pytest Fixtures¶
Create reusable fixtures for common mock setups:
import pytest
from maricusco.config.settings import DEFAULT_CONFIG
from maricusco.orchestration.trading_graph import MaricuscoGraph
@pytest.fixture
def mock_config():
    """Fixture providing a config with mock mode enabled."""
    config = DEFAULT_CONFIG.copy()
    config["mock_mode"] = True
    config["max_debate_rounds"] = 1
    config["max_risk_discuss_rounds"] = 1
    return config

@pytest.fixture
def mock_graph(mock_config):
    """Fixture providing a MaricuscoGraph in mock mode."""
    return MaricuscoGraph(
        selected_analysts=["technical", "fundamentals"],
        config=mock_config,
    )

# Use fixtures in tests
def test_with_mock_graph(mock_graph):
    """Test using the pre-configured mock graph."""
    final_state, decision = mock_graph.propagate("AAPL", "2024-12-01")
    assert decision in ["BUY", "SELL", "HOLD"]
Best Practices¶
Development Workflow

- Start with mocks - Enable MARICUSCO_MOCK_MODE=true by default
- Iterate quickly - Make changes without cost concerns
- Test thoroughly - Run the full test suite with mocks
- Validate selectively - Exercise critical paths against real APIs occasionally (see the sketch below this list)
- Production deployment - Disable mock mode for live trading
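One way to implement the "validate selectively" step is a pytest marker that skips real-API tests unless mock mode is explicitly turned off. The marker below is an illustrative helper, not part of the framework:

```python
import os

import pytest

# Hypothetical marker: runs only when MARICUSCO_MOCK_MODE is not "true"
requires_real_api = pytest.mark.skipif(
    os.getenv("MARICUSCO_MOCK_MODE", "true").lower() == "true",
    reason="Real-API validation disabled; set MARICUSCO_MOCK_MODE=false to run",
)

@requires_real_api
def test_real_llm_smoke():
    """Occasional end-to-end check against real LLM APIs."""
    ...
```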
Limitations
- Mock responses are predefined, not based on real market analysis
- Memory keyword matching is simpler than semantic search
- Won't validate API key configuration or rate limits
- Should not be used for actual trading decisions
When to Use Mocks
- ✅ During active development
- ✅ Running automated tests
- ✅ CI/CD pipelines
- ✅ Learning the framework
- ✅ Debugging agent logic
- ✅ Performance testing
When Not to Use Mocks (Use Real APIs)
- ❌ Production trading
- ❌ Final validation before deployment
- ❌ Testing API integration
- ❌ Validating API key configuration
- ❌ Actual market analysis
Troubleshooting¶
Mock Mode Not Activating¶
Check that the environment variable is set:
# Verify environment variable
echo $MARICUSCO_MOCK_MODE # Should output: true
# If not set, export it
export MARICUSCO_MOCK_MODE=true
Verify in code:
import os
print(os.getenv("MARICUSCO_MOCK_MODE")) # Should print: 'true'
# Or check the config you pass to the graph
from maricusco.config.settings import DEFAULT_CONFIG
print(DEFAULT_CONFIG.get("mock_mode"))  # Ships as False; your runtime config should set it to True
Still Seeing API Calls¶
Ensure mock mode is actually enabled:
from maricusco.orchestration.trading_graph import MaricuscoGraph
# When initializing the graph, you should see:
# "🧪 Running in MOCK MODE - No API calls will be made"
graph = MaricuscoGraph(config=config)
# Check the config was applied
assert graph.config["mock_mode"] is True
Custom Responses Not Loading¶
Verify file path and format:
import json
from pathlib import Path
# Check file exists
responses_file = "/path/to/responses.json"
assert Path(responses_file).exists(), f"File not found: {responses_file}"
# Validate JSON format
with open(responses_file) as f:
    data = json.load(f)

print(f"Loaded {len(data)} agent responses")
API Reference¶
Mock LLM Classes¶
FakeLLM¶
Basic mock LLM with predefined responses.
from maricusco.agents.utils.mock_llm import FakeLLM
llm = FakeLLM(
    agent_type="technical_analyst",  # Agent type for response selection
    responses_file=None,             # Optional custom responses JSON
)
# Use like a real LLM
response = llm.invoke("Analyze AAPL stock")
print(response.content)
print(llm.call_count) # Track number of calls
FakeLLMWithVariation¶
Mock LLM that cycles through multiple responses.
from maricusco.agents.utils.mock_llm import FakeLLMWithVariation
llm = FakeLLMWithVariation(
    agent_type="bull_researcher",
    variations=["Response 1", "Response 2", "Response 3"],
)
# Each call returns next variation
r1 = llm.invoke("prompt") # "Response 1"
r2 = llm.invoke("prompt") # "Response 2"
r3 = llm.invoke("prompt") # "Response 3"
create_mock_llm()¶
Factory function for creating mock LLMs.
from maricusco.agents.utils.mock_llm import create_mock_llm
llm = create_mock_llm(
    agent_type="trader",
    use_variation=False,
    responses_file=None,
)
Mock Memory Classes¶
MockFinancialSituationMemory¶
Mock memory using keyword matching.
from maricusco.agents.utils.mock_memory import MockFinancialSituationMemory
memory = MockFinancialSituationMemory(
    name="bull_memory",
    config={"data_cache_dir": "/tmp/cache"},
)
# Add memory
memory.add_memory(
situation="Market condition description",
recommendation="What was recommended",
outcome="What actually happened",
metadata={"ticker": "AAPL"}
)
# Retrieve memories
results = memory.get_memories("similar situation", n_matches=3)
create_mock_memory()¶
Factory function for creating mock memories.
from maricusco.agents.utils.mock_memory import create_mock_memory
memory = create_mock_memory(
    name="test_memory",
    config={"data_cache_dir": "/tmp/cache"},
    preloaded=True,  # Include example scenarios
)
Summary¶
Mock Mode enables efficient development with:
- Fast iteration without API costs
- Reliable testing with deterministic outputs
- Zero costs during development
- CI/CD compatibility without API keys
Development principle: Mock by default, validate with real APIs when needed.