System Architecture¶
Overview¶
Redhound is a multi-agent trading system built on LangGraph that orchestrates specialized AI agents to analyze market data and make informed trading decisions. The architecture follows a modular monolith pattern with clear separation of concerns, enabling maintainability and future microservices migration.
Technology Stack¶
| Category | Technology | Purpose |
|---|---|---|
| Language & Runtime | Python | Core programming language |
| | uv | Package manager and dependency resolver |
| Web Framework | FastAPI | REST API framework |
| | Uvicorn | ASGI server |
| | Pydantic | Data validation |
| CLI / TUI | typer | CLI argument parsing and command routing |
| | textual | Interactive full-screen terminal dashboard |
| | rich | Terminal formatting and output |
| Orchestration | LangGraph | Multi-agent workflow orchestration |
| | LangChain | LLM integration framework |
| LLM Providers | OpenAI | GPT models |
| | Anthropic | Claude models |
| | Google Gemini | Gemini models |
| | xAI | Grok models |
| | Ollama | Local LLM hosting |
| | OpenRouter | Unified LLM API |
| Data Sources | FMP | Stock price data (default, requires API key) |
| | Alpha Vantage | Financial data API (optional) |
| | Google News | News aggregation (free) |
| | Reddit PRAW | Social sentiment (optional) |
| | FRED API | Economic indicators (free) |
| Technical Analysis | pandas-ta | Technical indicators library |
| | TA-Lib | Candlestick pattern recognition |
| Fundamental Analysis | FMP | Financial ratios, valuation, fundamental metrics |
| Sentiment Analysis | DeBERTa-v3 | Fine-tuned transformer for financial news sentiment |
| Databases | PostgreSQL | Relational database |
| | TimescaleDB | Time-series extension |
| | Redis | Caching layer |
| | pgvector | Vector operations |
| Data persistence | backend.database | SQLAlchemy ORM models and pgvector types; see Data & Database Overview for data flow and storage layout, and pgvector Setup for package structure and usage |
| Monitoring | Prometheus | Metrics collection |
| | Grafana | Metrics visualization |
| | structlog | Structured logging |
| Containerization | Docker | Container runtime |
| | Docker Compose | Multi-container orchestration |
| CI/CD | GitHub Actions | Continuous integration |
| Development Tools | Ruff | Linter and formatter |
| | Pyright | Type checker |
| | Pytest | Testing framework |
| | Pre-commit | Git hooks |
High-Level Architecture¶
flowchart TB
subgraph "User Interface Layer"
CLI[CLI Interface]
API[FastAPI REST API]
end
subgraph "Orchestration Layer"
TG[Trading Graph<br/>LangGraph Workflow]
CL[Conditional Logic]
PR[Propagation]
end
subgraph "Agent Layer"
subgraph "Core Analysts (Parallel)"
TA[Technical Analyst]
FA[Fundamentals Analyst]
SA[Sentiment Analyst]
NA[News Analyst]
MCA[Market Context Analyst]
end
subgraph "Edge Analysts (Optional)"
SEC[Sector Analyst]
INS[Insider Analyst]
OFA[Options Flow Analyst]
SIA[Short Interest Analyst]
ERA[Earnings Revisions Analyst]
end
subgraph "Signal Processing"
SAG[Signal Aggregator]
CG{Confidence Gate}
end
subgraph "Research Team (Debate)"
BR[Bull Researcher]
BER[Bear Researcher]
RM[Research Manager]
end
subgraph "Risk Strategy"
RO["Risk Overlay<br/>(Deterministic)"]
end
end
subgraph "Service Layer"
SS[Signal Service]
MDS[Market Data Service]
AS[Analytics Service]
SES[Session Service]
BS[Base Service]
end
subgraph "Data Layer"
DI[Data Interface]
CACHE[Cache Client]
MEM[Memory System]
subgraph "Data Vendors"
FMP[FMP]
AV[Alpha Vantage]
GN[Google News]
RD[Reddit]
LOC[Local Data]
end
end
subgraph "Infrastructure Layer"
PG[(PostgreSQL<br/>TimescaleDB<br/>pgvector)]
RDS[(Redis Cache)]
PROM[Prometheus]
GRAF[Grafana]
end
CLI --> TG
API --> SS
API --> MDS
API --> AS
API --> SES
TG --> CL
TG --> PR
CL --> TA
CL --> FA
CL --> SA
CL --> NA
CL --> MCA
CL -.-> SEC
CL -.-> INS
CL -.-> OFA
CL -.-> SIA
CL -.-> ERA
TA & FA & SA & NA & MCA --> SAG
SEC & INS & OFA & SIA & ERA -.-> SAG
SAG --> CG
CG -- "Medium/Low" --> BR
CG -- "Medium/Low" --> BER
CG -- "High" --> RM
BR --> RM
BER --> RM
RM --> RO
TA --> DI
FA --> DI
SA --> DI
NA --> DI
MCA --> DI
BR --> MEM
BER --> MEM
RM --> MEM
RO --> MEM
SS --> BS
MDS --> BS
AS --> BS
SES --> BS
SS --> PG
MDS --> DI
AS --> MDS
SES --> PG
DI --> CACHE
DI --> FMP
DI --> AV
DI --> GN
DI --> RD
DI --> LOC
CACHE --> RDS
MEM --> PG
BS --> RDS
TG -. metrics .-> PROM
PROM --> GRAF
style CLI fill:#7A9A7A,stroke:#6B8E6B,color:#fff
style API fill:#7A9A7A,stroke:#6B8E6B,color:#fff
style TG fill:#7A9FB3,stroke:#6B8FA3,color:#fff
style SS fill:#B8A082,stroke:#A89072,color:#fff
style MDS fill:#B8A082,stroke:#A89072,color:#fff
style AS fill:#B8A082,stroke:#A89072,color:#fff
style SES fill:#B8A082,stroke:#A89072,color:#fff
style DI fill:#C4A484,stroke:#B49474,color:#fff
style PG fill:#9B8AAB,stroke:#8B7A9B,color:#fff
style RDS fill:#9B8AAB,stroke:#8B7A9B,color:#fff
Architecture Layers¶
1. User Interface Layer¶
Web Application (frontend/)¶
A Next.js 16 App Router application hosted at redhound.vercel.app.
Pages and features:
| Page | Route | Description |
|---|---|---|
| Sessions | /sessions | Launch and monitor multi-agent analysis sessions; view real-time agent reasoning feed |
| Session Detail | /sessions/[id] | Full analysis report, agent status grid, intelligence feed |
| Stock Profile | /stocks/[symbol] | Price, fundamentals, technical indicators, and recent signals for any ticker |
| Screener | /screener | Filter builder with live results — screen S&P 500 / NASDAQ 100 by technical and fundamental conditions |
| Portfolio | /portfolio | Holdings, P&L, allocation, and risk overview |
| Watchlists | /watchlist | Multiple watchlists with price sparklines and one-click analysis |
| Alerts | /alerts | Price, technical, fundamental, and signal alerts with in-app delivery |
| Scanner | /scanner | Configure and manage background market scanners |
| Opportunities | /opportunities | Detected opportunities feed with filters |
| Monitor | /monitor | Scanner status, scan history, and API usage |
| Backtest | /backtest | Strategy builder, historical simulation, equity curve, trade log |
| Analytics | /analytics | Usage dashboard, token/cost tracking, session history |
Technical details:
- Framework: Next.js 16 App Router with TypeScript
- Auth: Supabase with @supabase/ssr; JWTs signed with ES256
- API proxy: frontend/src/middleware.ts injects Authorization headers and proxies all /api/* calls to the FastAPI backend
- Real-time: WebSocket connections to backend/api/routes/sessions.py (session events) and backend/api/routes/price_feed.py (live prices)
- Deployment: Vercel (frontend), backend on any ASGI host
CLI Interface (cli/, backend/tui/)¶
The system provides two complementary interfaces:
CLI Entry Point (cli/main.py) — powered by typer:
- Command-line argument parsing (--ticker, --date, --analysts, --mock, --demo)
- Launches the TUI dashboard for interactive use
- Supports headless mode via redhound analyze for scripted/CI use
Textual TUI (backend/tui/) — powered by textual:
- Full-screen interactive dashboard with live agent workflow visualization
- Workflow progress bar: Analysts → Signal → Research → Risk → Decision
- Real-time streaming of agent reasoning, tool calls, and reports
- Setup screen for ticker/date/analyst selection
- Dashboard screen with live agent status panels
- Reports screen for browsing final analysis
- Flicker-free rendering; responsive to terminal size
- Mock mode: deterministic analysts run at full fidelity; only LLM calls are stubbed
FastAPI REST API (backend/api/)¶
- Comprehensive REST API with service layer integration
- Health check endpoint (/health) with dependency validation
- Metrics endpoint (/metrics) for Prometheus scraping
- Signal management endpoints (/api/v1/signals)
- Market data endpoints (/api/v1/market-data)
- Analytics endpoints (/api/v1/analytics)
- Session management endpoints (/api/v1/sessions) with WebSocket support
- Asynchronous request handling with uvicorn
- Request validation, error handling, and rate limiting
2. Orchestration Layer (backend/orchestration/)¶
The orchestration layer manages agent workflow execution using LangGraph.
Trading Graph (trading_graph.py)¶
- Defines the complete trading workflow as a LangGraph state machine
- Manages state transitions between analyst phase, signal aggregation, research phase, and risk overlay
- Handles error propagation and recovery
Conditional Logic (conditional_logic.py)¶
- Determines workflow routing based on state conditions
- Implements the Confidence Gate logic:
- High Confidence (>80%): Skip debate -> Research Manager
- Medium/Low Confidence: Route to 1 or 2 rounds of Bull/Bear debate
- Manages debate round limits and termination conditions
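The gate described above can be sketched as a pure routing function. This is an illustrative sketch, not the actual `conditional_logic.py` code; the function name and node names are assumptions, while the >80% threshold follows the text:

```python
# Hypothetical sketch of the Confidence Gate routing described above.
# Node names ("research_manager", "bull_researcher") are illustrative.

def route_after_aggregation(confidence: float, high_threshold: float = 80.0) -> str:
    """Return the next node name based on aggregated confidence."""
    if confidence > high_threshold:
        return "research_manager"   # high confidence: skip the Bull/Bear debate
    return "bull_researcher"        # medium/low confidence: enter the debate

print(route_after_aggregation(92.5))  # research_manager
print(route_after_aggregation(55.0))  # bull_researcher
```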
Propagation (propagation.py)¶
- Main entry point for workflow execution
- Initializes state and invokes the trading graph
- Handles final decision extraction and report generation
Synchronization (synchronization.py)¶
- Join node for parallel analyst execution when parallel_execution is enabled
- Pass-through: runs when all analyst branches have completed; returns no state changes
- Used only in parallel mode; not present in the sequential graph
Analyst phase: sequential vs parallel¶
The analyst phase (Technical, Sentiment, News, Fundamentals, and Market Context) can run in two modes, controlled by parallel_execution in config (default: True).
- Sequential: Analysts run one after another.
- Parallel (default): All selected analysts start from a fan-out and run independently; when all are done, they synchronize before proceeding to Signal Aggregation.
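The fan-out/join shape of the parallel mode can be illustrated with plain asyncio (a minimal stand-in, not the actual LangGraph wiring; analyst names and report contents are placeholders):

```python
import asyncio

# Illustrative stand-in for the parallel analyst phase: each analyst runs
# independently, and a join (like synchronization.py) waits for all branches.

async def run_analyst(name: str, state: dict) -> tuple[str, str]:
    await asyncio.sleep(0)  # stands in for data fetching + analysis
    return name, f"{name} report for {state['ticker']}"

async def analyst_phase(state: dict, analysts: list[str]) -> dict:
    # Fan-out: start all selected analysts concurrently.
    results = await asyncio.gather(*(run_analyst(a, state) for a in analysts))
    # Join: proceed only once every branch has completed.
    state.update({f"{name}_report": report for name, report in results})
    return state

state = asyncio.run(analyst_phase({"ticker": "AAPL"}, ["technical", "news"]))
print(sorted(state))  # ['news_report', 'technical_report', 'ticker']
```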
3. Agent Layer (backend/agents/)¶
The agent layer contains specialized components that perform analysis and decision-making.
3.1 Analysts (agents/analysts/)¶
Analysts produce structured reports and metrics from market data.
Core analysts (always run, weighted: Technical 30%, Fundamentals 40%, Sentiment 20%, News 10%):
| Agent | Purpose | Mode | Key output |
|---|---|---|---|
| Technical Analyst | Price patterns, trends, technical indicators | Rule-based (pandas-ta), no LLM | technical_report, technical_metrics |
| Fundamentals Analyst | Financial health, valuation, ratios | Deterministic (FMP) | fundamentals_report, fundamentals_metrics |
| Sentiment Analyst | News/social sentiment and trends | Deterministic (transformer) or LLM | sentiment_report, sentiment_metrics |
| News Analyst | Events, earnings, economic calendar, news timeline | Deterministic or LLM | news_report, news_metrics |
| Market Context Analyst | Volatility regime, market trend, sector rotation, risk environment | Deterministic, ticker-agnostic | market_context_report, market_context_metrics |
Edge analysts (optional; dynamically weighted when data is available):
| Agent | Purpose | Mode | Weight when present |
|---|---|---|---|
| Sector Analyst | Sector rotation signal vs SPY | Deterministic | varies |
| Insider Analyst | Insider buying/selling activity | Deterministic | 5% |
| Options Flow Analyst | Unusual options activity | Deterministic | 7% |
| Short Interest Analyst | Short interest ratio and trends | Deterministic | 10% |
| Earnings Revisions Analyst | Analyst estimate revisions | Deterministic | 3% |
When edge analysts are present, their weights are proportionally subtracted from the core analyst weights and normalized to sum to 1.0.
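The renormalization rule can be sketched as follows: the total edge weight is carved out of the core weights proportionally so the final weights still sum to 1.0. The numbers follow the tables above; the function itself is illustrative, not the actual implementation:

```python
# Hedged sketch of the edge-analyst weight renormalization described above.

def renormalize(core: dict[str, float], edge: dict[str, float]) -> dict[str, float]:
    edge_total = sum(edge.values())
    scale = 1.0 - edge_total          # weight mass left over for the core analysts
    weights = {k: v * scale for k, v in core.items()}
    weights.update(edge)
    return weights

core = {"technical": 0.30, "fundamental": 0.40, "sentiment": 0.20, "news": 0.10}
edge = {"insider": 0.05, "options_flow": 0.07, "short_interest": 0.10, "earnings_revisions": 0.03}
w = renormalize(core, edge)
print(round(sum(w.values()), 10))  # 1.0
```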
3.2 Signal Processing (backend/orchestration/signal_aggregator_node.py)¶
- Signal Aggregator: Consumes analyst metrics, applies weights, resolves conflicts, and produces a unified_signal with a confidence score. This score drives the Confidence Gate logic.
3.3 Researchers (agents/researchers/)¶
Researchers run a structured debate to refine the investment thesis.
- Bull Researcher (bull_researcher.py): Advocates the bullish view; growth opportunities, positive signals; uses memory for past scenarios.
- Bear Researcher (bear_researcher.py): Advocates the bearish view; risks, valuation concerns; uses memory for past risk scenarios.
- Research Manager (managers/research_manager.py): Synthesizes analyst reports and debate arguments (if any), produces the final investment_plan and decision (BUY/SELL/HOLD).
3.4 Risk Management (agents/risk_overlay.py)¶
- Risk Overlay: A deterministic component (zero LLM calls) that takes the Research Manager's decision and applies risk rules:
- Position Sizing: Based on confidence, market regime (e.g. Crisis), and risk environment.
- Stop-Loss: Calculated from ATR or Bollinger Bands.
- Risk Adjustments: Modifies the plan based on volatility (VIX) and signal conflicts.
- Output: Writes final_trade_decision to state.
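Two of the rules above can be sketched in a few lines. This is a hedged illustration: the ATR multiplier, base size, and Crisis cap are assumptions, not Redhound's actual parameters:

```python
# Illustrative sketch of deterministic risk rules; all constants are assumed.

def atr_stop_loss(entry: float, atr: float, multiplier: float = 2.0) -> float:
    """Stop-loss placed a fixed number of ATRs below the entry price."""
    return entry - multiplier * atr

def position_size(confidence: float, regime: str, base_pct: float = 0.05) -> float:
    """Scale a base position by confidence; cap it hard in a Crisis regime."""
    cap = 0.01 if regime == "Crisis" else base_pct
    return min(base_pct * (confidence / 100.0), cap)

print(atr_stop_loss(entry=100.0, atr=2.5))                    # 95.0
print(round(position_size(confidence=80.0, regime="Risk-On"), 6))  # 0.04
```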
4. Service Layer (backend/services/)¶
The service layer provides business logic abstraction between the API/orchestration layer and the repository layer. Services coordinate complex operations, manage transactions, implement caching strategies, and enforce business rules.
Base Service (base.py)¶
- BaseService: Abstract base class with common patterns
- Transaction management with rollback support
- Retry logic for transient failures
- Error handling and logging utilities
- Both sync and async method support
Signal Service (signal_service.py)¶
- Complete signal lifecycle management (create, retrieve, aggregate, track performance)
- Validation pipeline integration
- Duplicate detection and conflict resolution
- Redis caching for recent signals
- Time-series aggregation using TimescaleDB
Market Data Service (market_data_service.py)¶
- Unified access to multiple data vendors with automatic fallback
- Data quality validation and enrichment
- Multi-level caching (memory → Redis → database)
- Vendor orchestration (primary → secondary → tertiary)
- Stock data, technical indicators, fundamentals, and news APIs
Analytics Service (analytics_service.py)¶
- Volatility analysis (VaR, rolling volatility)
- Correlation matrix calculation (Pearson, Spearman)
- Trend analysis (support/resistance, SMA crossovers)
- Backtesting engine integration
- Statistical metrics and performance analysis
Session Service (session_service.py)¶
- Trading session lifecycle management
- Status tracking (PENDING, RUNNING, COMPLETED, FAILED)
- Result storage and retrieval
- Active session monitoring
- Integration with LangGraph orchestration
Service Factory (factory.py)¶
- Dependency injection for service instances
- Singleton pattern for service management
- Automatic dependency resolution
- Configuration injection
- Support for testing with mock services
Caching Strategy (mixins/cache.py)¶
- CacheableMixin: Redis-backed caching for services
- Cache-aside pattern (check cache → fetch from DB → populate cache)
- Write-through pattern (write to DB → update cache)
- Configurable TTL and cache invalidation
- Cache metrics (hit rate, latency)
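The cache-aside flow above can be sketched with a plain dict standing in for Redis. The class and `fetch_from_db` callback are hypothetical, not `CacheableMixin`'s actual API:

```python
# Minimal cache-aside sketch: check cache -> fetch from DB -> populate cache.

class CacheAside:
    def __init__(self, fetch_from_db):
        self._cache: dict[str, object] = {}   # stand-in for Redis
        self._fetch = fetch_from_db
        self.hits = 0
        self.misses = 0

    def get(self, key: str):
        if key in self._cache:                # 1. check cache
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        value = self._fetch(key)              # 2. fetch from DB
        self._cache[key] = value              # 3. populate cache
        return value

svc = CacheAside(fetch_from_db=lambda key: f"row:{key}")
svc.get("signal:AAPL")        # miss -> hits DB
svc.get("signal:AAPL")        # hit  -> served from cache
print(svc.hits, svc.misses)   # 1 1
```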
Debate Service (debate_service.py)¶
- Manages multi-agent debates (investment & risk)
- Coordinates debate rounds, speaking order, and topic management
- Persists debate history/transcripts to database
- Calculates debate metrics (consensus, conflict levels)
Stock Profile Service (stock_profile.py)¶
- Managing stock metadata and fundamental metrics
- Caching stock profiles for quick access
- Providing search and filtering capabilities
Agent Memory Service (agent_memory_service.py)¶
- Semantic search for historical agent memories
- Storing and retrieving agent experiences and outcomes
- Context retrieval for agent decision making
Cost Service (cost_service.py)¶
- Tracking token usage and estimated costs per agent/model
- Budget enforcement and alerts
Backtesting Engine (backtesting.py)¶
- Simulating historical scenarios to validate strategies
- Performance metrics (Sharpe, Drawdown, win rate, profit factor)
- Walk-forward validation with configurable train/test windows
- MAE/MFE per-trade analysis (Maximum Adverse/Favorable Excursion)
Weight Service (weight_service.py)¶
- Computes per-analyst prediction accuracy from tracked session outcomes
- Updates AnalystWeightDB records used by SignalAggregator.update_weights()
- Triggered weekly via APScheduler (Sunday 02:00 ET)
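An accuracy-to-weight computation in this spirit might look as follows. The proportional scheme and function name are assumptions for illustration, not the Weight Service's actual rule:

```python
# Hypothetical sketch: weight each analyst by its hit rate over tracked
# outcomes, then normalize so the weights sum to 1.0.

def accuracy_weights(outcomes: dict[str, list[bool]]) -> dict[str, float]:
    acc = {a: sum(hits) / len(hits) for a, hits in outcomes.items()}
    total = sum(acc.values())
    return {a: v / total for a, v in acc.items()}

w = accuracy_weights({
    "technical": [True, True, False, True],   # 75% accurate
    "news": [True, False, False, True],       # 50% accurate
})
print(round(w["technical"], 2))  # 0.6
```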
5. Data Layer (backend/data/)¶
Provides unified interface to multiple market data sources.
Data Interface (interface.py)¶
- Unified API for accessing market data across vendors
- Vendor selection based on configuration and data type
- Automatic fallback to alternative vendors on failure
- Caching layer for performance optimization
Cache Client (cache.py)¶
- Redis-backed caching with automatic fallback to in-memory cache
- Configurable TTL and eviction policies
- Metrics instrumentation for cache hit/miss rates
- Async-compatible for high-performance data access
Memory System (memory.py)¶
- pgvector-backed vector memory for agent learning
- Stores past trading scenarios, recommendations, and outcomes
- Semantic search for retrieving relevant historical context
- Per-agent memory isolation with shared memory support
LLM Factory (llm_factory.py)¶
- Creates LLM instances based on configuration
- Supports OpenAI, Anthropic, Google Gemini, Ollama, OpenRouter
- Mock LLM support for cost-free development
- Configurable model parameters and retry logic
Price Broadcast Service (price_broadcast_service.py)¶
- Bulk vendor API price fetches every 30 seconds during market hours (09:30–16:00 ET)
- Broadcasts price updates to all subscribed WebSocket clients via channel broadcasting
- Pauses automatically outside market hours; subscribers receive heartbeats only
- Registered as a background task in FastAPI lifespan
Data Vendors (data/vendors/)¶
FMP (fmp.py)
- Primary source for stock price data
- Technical indicators calculated using pandas-ta
- Requires FMP API key (Starter tier available)
- Historical and real-time data support
Alpha Vantage (alpha_vantage_*.py)
- Stock data, technical indicators, fundamentals, news
- Requires API key (free tier available)
- Modular implementation per data type
Google News (google.py)
- News articles and headlines via Google News RSS
- No API key required
- Real-time news aggregation
Reddit (data/utils/reddit_utils.py)
- Social sentiment from financial subreddits
- Requires Reddit API credentials
- Sentiment analysis on post titles and comments
Local Data (local.py)
- Cached data storage for offline development
- CSV-based historical data
- Mock data for testing
Data Tools (data/tools/)¶
LangChain tool wrappers for agent access to data sources.
- core_stock_tools.py: Stock price and volume data
- technical_indicators_tools.py: MACD, RSI, moving averages
- fundamental_data_tools.py: Financial statements, ratios
- news_data_tools.py: News articles and sentiment
- agent_utils.py: Shared utilities for tool creation
Stock Profile Subsystem (backend/database/repositories/, backend/services/)¶
Persistent storage and caching for stock metadata and fundamental metrics (symbol, name, sector, industry, market cap, time-series fundamentals). Data access uses the repository layer (see Data access and repository pattern below).
- StockProfileCache (services/stock_profile_cache.py): Redis-backed cache for stock profiles. Configurable TTL, cache warming, invalidation, and hit/miss metrics.
- StockProfileService (services/stock_profile.py): Service layer over StockProfileRepository and FundamentalMetricsRepository plus cache. Exposes profile CRUD, search, filters, and fundamental metrics bulk import. Sync and async; cache can be disabled for testing.
6. Infrastructure Layer¶
PostgreSQL with TimescaleDB¶
- Time-series database for storing trading data
- Optimized for time-series queries and aggregations
- Persistent storage for historical analysis
- Port: 5432
Redis Cache¶
- In-memory caching layer for market data
- Reduces API calls and improves performance
- Session storage and rate limiting
- Port: 6379
pgvector Vector Store (Integrated in PostgreSQL)¶
- Vector similarity search for agent memory storage
- Semantic search for historical trading scenarios
- Embeddings-based similarity matching
- Port: 5432 (shared with PostgreSQL)
Prometheus¶
- Metrics collection and time-series storage
- Scrapes the /metrics endpoint every 15 seconds
- 15-day retention for historical analysis
- Port: 9090
Grafana¶
- Metrics visualization and dashboards
- Pre-provisioned dashboards for system monitoring
- Alerting and anomaly detection
- Port: 3000
Data access and repository pattern¶
For a diagram of how data flows from services to PostgreSQL and Redis, see Data & Database Overview.
The data persistence layer uses a repository pattern to abstract database access and keep business logic independent of storage details. All repositories live under backend.database.repositories.
- BaseRepository (repositories/base.py): Generic CRUD, get_paginated(page, page_size, filters, sort), bulk_create, bulk_update, and a transaction context manager. Sync and async methods are provided.
- TimescaleRepository (repositories/timescale_base.py): Extends BaseRepository for time-series models. Adds get_by_time_range, get_latest, and aggregate_by_interval (hour/day/week) using TimescaleDB time_bucket.
- Domain repositories: SignalRepository, AgentAnalysisRepository, DebateSessionRepository/DebateMessageRepository/DebateVoteRepository, SessionRepository, AgentMemoryRepository, CostRepository, StockProfileRepository/FundamentalMetricsRepository. Each exposes model-specific queries (e.g. by symbol, agent type, time range). For the service-layer API (AgentDatabaseService, DebateDatabaseService, AgentCache, AgentPerformance) and usage examples, see Agent-Database Integration.
- Agent memory and pgvector: AgentMemoryRepository.find_similar_memories(session, embedding, agent_type=..., memory_name=..., limit, threshold) provides pgvector cosine similarity. The optional memory_name filters by extra_metadata->>'memory_name'. See pgvector Setup for HNSW and low-level usage.
- Caching: CacheMixin (repositories/mixins/cache.py) adds optional cache get/set/delete and key generation with configurable TTL. Repositories can be constructed with a cache client (e.g. Redis) for frequently accessed data.
- RepositoryFactory (repositories/factory.py): Provides singleton repository instances via RepositoryFactory.get(SomeRepository). Callers pass the database session into each repository method; the factory does not hold session state.
Usage example:
from backend.database.repositories import SignalRepository
from backend.database.repositories.factory import RepositoryFactory
from backend.database.repositories.base import SortSpec
repo = RepositoryFactory.get(SignalRepository)
# Paginated list with filters and sort
items, total = repo.get_paginated(
    session, page=1, page_size=20,
    filters={"symbol": "AAPL"},
    sort=[("created_at", False)],
)
# Time-range query (TimescaleDB)
signals = repo.get_by_time_range(session, start=start_dt, end=end_dt, symbol="AAPL")
Data Flow¶
Analysis Workflow¶
sequenceDiagram
participant User
participant CLI
participant TradingGraph
participant Analysts
participant SignalAgg as Signal Aggregator
participant Researchers
participant RiskOverlay
participant DataInterface
User->>CLI: Request analysis (ticker, date)
CLI->>TradingGraph: Initialize state
TradingGraph->>Analysts: Parallel Execution
Analysts->>DataInterface: Fetch Data
DataInterface-->>Analysts: Data
Analysts-->>TradingGraph: Reports + Metrics
TradingGraph->>SignalAgg: Aggregate Signals
SignalAgg-->>TradingGraph: Unified Signal + Confidence
Note over TradingGraph: Confidence Gate Logic
alt High Confidence
TradingGraph->>Researchers: Research Manager (Skip Debate)
else Medium/Low Confidence
TradingGraph->>Researchers: Bull/Bear Debate
Researchers->>Researchers: Debate Rounds
Researchers->>Researchers: Research Manager (Synthesize)
end
Researchers-->>TradingGraph: Investment Plan (BUY/SELL/HOLD)
TradingGraph->>RiskOverlay: Apply Risk Rules (Deterministic)
RiskOverlay-->>TradingGraph: Final Trade Decision (Size, Stops)
TradingGraph-->>CLI: Final Result
CLI-->>User: Display Reports & Decision
Signal Aggregation¶
The Signal Aggregation Layer is a deterministic processing stage that combines all analyst signals into a unified recommendation with confidence scoring. It replaces qualitative text analysis with quantitative decision-making, enabling reproducible results for backtesting and validation.
Components¶
Input: Analyst metrics from Agent State
- technical_metrics: RSI, MACD, trend signals, patterns
- fundamentals_metrics: Piotroski F-Score, Altman Z-Score, DCF, P/E
- sentiment_metrics: Overall score, positive/negative ratio
- news_metrics: Event impacts, earnings surprise
- market_context_metrics: VIX regime, sector rotation, risk environment
Processing: SignalAggregator (backend/orchestration/signal_aggregator.py)
1. Signal Extraction: Normalize each analyst's metrics to -100/+100 scale
2. Weighted Aggregation: Apply configurable weights (default: Tech 30%, Fund 40%, Sent 20%, News 10%)
3. Conflict Resolution: Detect and resolve divergences (e.g., fundamental bearish + technical bullish)
4. Regime Adjustment: Apply market context multiplier (Crisis 0.5x, Risk-Off 0.75x, Elevated 0.85x, Risk-On 1.0x)
5. Confidence Scoring: Calculate reliability based on analyst agreement, data quality, and volatility
Output: Unified signal added to Agent State
{
    "unified_score": float,     # -100 to +100
    "recommendation": str,      # BUY/SELL/HOLD
    "confidence": float,        # 0 to 100
    "breakdown": dict,          # Per-analyst scores
    "adjusted_weights": dict,   # After conflict resolution
    "regime_modifier": float,   # Market context multiplier
    "conflicts": list[str],     # Detected conflicts
    "data_quality": float,      # 0 to 100
}
Graph Integration¶
The Signal Aggregator node is inserted into the LangGraph workflow after all analysts complete:
Parallel Mode:
Analysts (parallel) → Synchronize → Signal Aggregator → Msg Clear → Bull Researcher → ...
Sequential Mode:
Analysts (sequential) → Msg Clear → Signal Aggregator → Msg Clear → Bull Researcher → ...
Backtesting Support¶
The BacktestEngine (backend/services/backtesting.py) validates signal quality on historical data:
- Historical Simulation: Test signals on past market conditions
- Performance Metrics: Win rate, Sharpe ratio, max drawdown, profit factor
- Weight Optimization: Grid search to find optimal analyst weights
- Signal Degradation Monitoring: Detect when signal quality declines
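Two of the performance metrics listed above can be computed from an equity curve and a return series using the standard definitions (these formulas are the textbook versions, not necessarily what `backtesting.py` implements):

```python
import math

# Standard-definition sketches of Sharpe ratio and maximum drawdown.

def sharpe(returns: list[float], risk_free: float = 0.0) -> float:
    """Mean excess return divided by sample standard deviation."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var)

def max_drawdown(equity: list[float]) -> float:
    """Largest peak-to-trough decline as a fraction of the peak."""
    peak, mdd = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        mdd = max(mdd, (peak - v) / peak)
    return mdd

print(max_drawdown([100, 120, 90, 110]))  # 0.25  (peak 120 -> trough 90)
```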
Configuration¶
Signal aggregation is highly configurable:
"signal_aggregation": {
"enabled": True,
"weights": {"technical": 0.30, "fundamental": 0.40, "sentiment": 0.20, "news": 0.10},
"regime_modifiers": {"Crisis": 0.50, "Risk-Off": 0.75, "Elevated": 0.85, "Risk-On": 1.0},
"confidence_thresholds": {"high": 75.0, "medium": 50.0, "low": 0.0},
"conflict_resolution": {"fundamental_vs_technical": "favor_fundamental"},
}
See Signal Aggregation for complete documentation.
Configuration Management¶
Configuration Sources (Priority Order)¶
1. Environment Variables (highest priority)
   - REDHOUND_* prefixed variables
   - Docker Compose .env file
   - System environment
2. Configuration File (backend/config/settings.py)
   - DEFAULT_CONFIG dictionary
   - Vendor selection and API keys
   - Agent parameters and debate rounds
3. Runtime Parameters (lowest priority)
   - CLI arguments
   - API request parameters
   - Programmatic configuration
Key Configuration Options¶
DEFAULT_CONFIG = {
    # Execution
    "max_debate_rounds": 1,            # Bull/bear debate rounds (default)
    "parallel_execution": True,        # Run analyst phase in parallel (default)
    # Mock Mode
    "mock_mode": False,                # Enable mock LLM/memory
    "mock_llm_responses_file": None,
    # LLM Configuration
    "llm_provider": "openai",          # openai, anthropic, google, ollama
    "deep_think_llm": "o4-mini",       # Deep thinking agents
    "quick_think_llm": "gpt-4o-mini",  # Quick thinking agents
    "backend_url": "https://api.openai.com/v1",
    "llm_temperature": 0.7,
    # Data Vendors
    "data_vendors": {
        "core_stock_apis": "fmp",
        "technical_indicators": "fmp",
        "fundamental_data": "fmp",
        "news_data": "local",          # Combines Finnhub + Reddit + Google News (free, no API costs)
    },
    # Directories
    "data_cache_dir": "backend/data/data_cache",
    "results_dir": "data/results",
    # Observability
    "metrics_enabled": False,
    "logging": {
        "level": "INFO",
        "format": "json",
    },
}
State Management¶
Agent State Schema¶
The trading graph maintains a shared state that flows through all agents:
from typing import Any, Dict, List, Optional, TypedDict

class AgentState(TypedDict):
    # Input
    ticker: str
    date: str

    # Analyst Reports
    technical_report: Optional[str]
    fundamentals_report: Optional[str]
    sentiment_report: Optional[str]
    news_report: Optional[str]
    market_context_report: Optional[str]

    # Analyst Metrics
    technical_metrics: Optional[dict]
    fundamentals_metrics: Optional[dict]
    sentiment_metrics: Optional[dict]
    news_metrics: Optional[dict]
    market_context_metrics: Optional[dict]

    # Signal Aggregation
    unified_signal: Optional[dict]

    # Research Debate
    investment_debate_state: Dict[str, Any]
    investment_plan: Optional[str]

    # Final Output
    final_trade_decision: Optional[str]

    # Metadata
    selected_analysts: List[str]
    config: Dict[str, Any]
State Transitions¶
- Initialization: ticker, date, selected_analysts, config
- Analyst Phase: Populate analyst reports and metrics (sequential or parallel)
- Signal Aggregation: Populate unified_signal (drives Confidence Gate)
- Research Phase: Populate investment_debate_state (if debated) and investment_plan (Research Manager)
- Risk Overlay: Populate final_trade_decision (deterministic sizing/stops)
- Completion: Extract final decision
Observability¶
Structured Logging¶
- Framework: structlog with JSON output (production) or human-readable (development)
- Context propagation: correlation_id, session_id, ticker, user, agent_name
- Log levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
- Rotation: Size-based (10MB) or time-based (daily)
Metrics¶
Prometheus metrics exposed at /metrics:
Counters:
- redhound_api_calls_total: API calls by vendor/agent/operation
- redhound_api_errors_total: API failures
- redhound_events_total: Domain events
- redhound_cache_hits_total, redhound_cache_misses_total: Cache outcomes
Gauges:
- redhound_active_agents: Active agent executions
- redhound_queue_size: Queue depth
Histograms:
- redhound_latency_seconds: External/internal step latency
- redhound_execution_time_seconds: Agent execution time
- redhound_request_duration_seconds: API request duration
Health Checks¶
FastAPI /health endpoint validates:
- PostgreSQL connectivity
- Redis connectivity
- Data vendor availability (optional)
Returns 200 (healthy) or 503 (unhealthy) with detailed dependency status.
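The 200/503 aggregation can be sketched as a pure function (a hypothetical shape of the response logic; the names are illustrative, not the actual route handler):

```python
# Hypothetical sketch: aggregate dependency checks into a status code + body.

def health_status(checks: dict[str, bool]) -> tuple[int, dict]:
    healthy = all(checks.values())
    body = {
        "status": "healthy" if healthy else "unhealthy",
        "dependencies": {name: ("ok" if ok else "down") for name, ok in checks.items()},
    }
    return (200 if healthy else 503), body

code, body = health_status({"postgres": True, "redis": False})
print(code, body["dependencies"]["redis"])  # 503 down
```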
Background Monitoring System (Phase 3)¶
The background monitoring system enables autonomous market scanning and opportunity detection. It runs as part of the FastAPI process using APScheduler, with no external worker infrastructure required.
Architecture¶
APScheduler (1-min poll) → ScannerService.execute_scan_async()
├── Resolve universe (S&P 500 / NASDAQ 100 / custom / watchlist / portfolio)
├── ScreenerEngine.screen_symbols() [deterministic, zero LLM]
│ ├── BatchIndicatorCalculator (RSI, MACD, SMA, Bollinger, volume)
│ ├── Condition evaluation and scoring (0-100)
│ └── Preset profiles (oversold bounce, momentum breakout, value accumulation, earnings catalyst)
├── Dedup check (skip if analyzed in last 24h unless >3% price movement)
├── Save ScanResult + Opportunity records
├── Auto-analysis: trigger RedhoundGraph for top candidates above threshold
└── Notification: in-app notification for detected opportunities
Components¶
| Component | Location | Purpose |
|---|---|---|
| ScreenerEngine | backend/services/screener_engine.py | Deterministic quick screen (zero LLM tokens). Evaluates symbols against technical/fundamental conditions and produces a 0-100 opportunity score. |
| ScannerService | backend/services/scanner_service.py | Orchestrates scan cycles: universe resolution, screening, dedup, auto-analysis trigger, and notification. |
| OpportunityService | backend/services/opportunity_service.py | CRUD for detected opportunities with status transitions (new → viewed → dismissed / acted_on). |
| Scanner Scheduler | backend/services/alert_scheduler.py | APScheduler cron job that polls for due scanner configs every minute during market hours. |
Database Tables¶
| Table | Purpose |
|---|---|
| scanner_configs | User-configured market scanners with conditions, universe, frequency, and auto-analyze settings |
| scan_runs | Execution log for each scan cycle (status, matches, analyses triggered) |
| scan_results | Individual symbol matches within a scan run with scores and metrics |
| opportunities | Detected trading opportunities surfaced to users |
API Endpoints¶
- Scanner: POST/GET/PUT/DELETE /api/v1/scanner/configs, POST .../run, POST .../pause, POST .../resume, GET /api/v1/scanner/runs, GET .../results
- Opportunities: GET /api/v1/opportunities (with filters), PUT .../status, POST .../analyze
- Monitor: GET /api/v1/monitor/status, GET .../history, GET .../usage
Frontend Pages¶
- /scanner — Configure and manage market scanners, view scan results
- /opportunities — Browse detected opportunities with filters (type, score, time period)
- /monitor — Dashboard showing active scanner status, scan history, and resource usage
Concurrency and Rate Limiting¶
- Scanner poll job uses an asyncio.Semaphore (default max 3 concurrent scans) to prevent resource exhaustion.
- Individual scanner configs control their own frequency via frequency_minutes and next_run_at.
- Dedup logic prevents re-analyzing the same ticker within 24 hours unless price moves >3%.
- Auto-analysis is capped at 5 analyses per scan cycle.
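The dedup rule can be sketched as a small predicate (an illustration of the 24h/3% rule stated above; the function name and signature are assumptions):

```python
from datetime import datetime, timedelta
from typing import Optional

# Sketch of the dedup rule: skip a symbol analyzed within the last 24 hours
# unless price has since moved more than 3%.

def should_analyze(last_run: Optional[datetime], last_price: float,
                   current_price: float, now: datetime) -> bool:
    if last_run is None or now - last_run > timedelta(hours=24):
        return True                                  # never analyzed, or stale
    move = abs(current_price - last_price) / last_price
    return move > 0.03                               # re-analyze only on >3% move

now = datetime(2025, 1, 2, 12, 0)
print(should_analyze(datetime(2025, 1, 2, 9, 0), 100.0, 101.0, now))  # False
print(should_analyze(datetime(2025, 1, 2, 9, 0), 100.0, 104.0, now))  # True
```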
Security Considerations¶
API Key Management¶
- Store API keys in environment variables or a .env file
- Never commit API keys to version control
- Use .secrets.baseline for secret detection
- Rotate keys regularly
Input Validation¶
- Validate ticker symbols and date ranges
- Sanitize user inputs to prevent injection attacks
- Rate limiting on API endpoints
Dependency Security¶
- Regular dependency updates via Dependabot
- Security scanning with bandit and pip-audit
- Trivy scanning for Docker image vulnerabilities
- Pre-commit hooks for secret detection
Network Security¶
- Docker network isolation for services
- Expose only necessary ports
- Use TLS for production deployments
- Secure Grafana and Prometheus with authentication