Configuration Reference

This document provides a comprehensive reference for all configuration options in the Redhound trading system.

Configuration System Overview

Redhound uses a modern, type-safe configuration system built with Pydantic that supports:

  • Environment-based configuration: Load settings from .env files or environment variables
  • Type validation: Automatic type checking and validation using Pydantic
  • Multi-environment support: Easy switching between dev/staging/prod environments
  • Single source of truth: Centralized configuration management
  • Backward compatibility: Fully compatible with legacy dictionary-based config

Two Ways to Use Configuration

1. Modern: Type-Safe Configuration (Recommended)

from backend.config import get_config

# Get typed configuration instance
config = get_config()

# Type-safe access with autocomplete
print(config.api.llm_provider)        # "openai"
print(config.agents.max_debate_rounds) # 1
print(config.database.postgres.host)   # "localhost"

2. Legacy: Dictionary-Based Configuration (Backward Compatible)

from backend.config.settings import DEFAULT_CONFIG

# Dictionary access (backward compatible)
print(DEFAULT_CONFIG["llm_provider"])        # "openai"
print(DEFAULT_CONFIG["max_debate_rounds"])   # 1
print(DEFAULT_CONFIG["logging"]["level"])    # "INFO"

Configuration Sources

Configuration is loaded from multiple sources in the following priority order (highest to lowest):

  1. Environment Variables (highest priority) - e.g., REDHOUND_LLM_PROVIDER=anthropic
  2. .env File - Place a .env file in the project root
  3. Default Values (lowest priority) - Defined in configuration classes

Environment variables override .env file settings, and both override default values.
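The precedence rules can be sketched as a simple fallback chain. This is illustrative only — the actual resolution is handled internally by Pydantic settings, and `resolve_setting` is a hypothetical helper, not part of the Redhound API:

```python
import os

def resolve_setting(key: str, dotenv: dict, defaults: dict) -> str:
    """Resolve a setting: environment variable > .env file > default."""
    if key in os.environ:      # 1. highest priority
        return os.environ[key]
    if key in dotenv:          # 2. values parsed from the .env file
        return dotenv[key]
    return defaults[key]       # 3. lowest priority

dotenv = {"REDHOUND_LLM_PROVIDER": "openai"}
defaults = {"REDHOUND_LLM_PROVIDER": "openai", "REDHOUND_MAX_DEBATE_ROUNDS": "1"}

os.environ["REDHOUND_LLM_PROVIDER"] = "anthropic"
print(resolve_setting("REDHOUND_LLM_PROVIDER", dotenv, defaults))      # anthropic (env wins)
print(resolve_setting("REDHOUND_MAX_DEBATE_ROUNDS", dotenv, defaults)) # 1 (default)
```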

Loading Configuration

from backend.config import Config

# Load from environment variables and .env file
config = Config.from_env()

# Load from specific .env file
config = Config.from_env(env_file=".env.production")

# Validate configuration on startup
config.validate_on_startup()  # Raises ValueError if invalid

Core Configuration

Execution Settings

max_debate_rounds

Type: int Default: 1 Environment Variable: REDHOUND_MAX_DEBATE_ROUNDS

Number of debate rounds between bull and bear researchers. More rounds = more thorough analysis but higher cost and longer execution time.

config["max_debate_rounds"] = 1  # Quick (default)
config["max_debate_rounds"] = 2  # Standard
config["max_debate_rounds"] = 3  # Deep

use_technical_analyst

Type: bool Default: True Environment Variable: REDHOUND_USE_TECHNICAL_ANALYST

Enable rule-based technical analyst using pandas-ta instead of LLM-based analysis. Provides zero-cost technical analysis with consistent, reproducible results.

config["use_technical_analyst"] = True   # Rule-based (default, no LLM costs)
config["use_technical_analyst"] = False  # LLM-based (legacy)

parallel_execution

Type: bool Default: True Environment Variable: REDHOUND_PARALLEL_EXECUTION

When enabled, the five analyst agents (Technical, Sentiment, News, Fundamentals, and Market Context) run in parallel from a common start point instead of sequentially. All analyst reports are collected at a Synchronize node before proceeding to the signal aggregation phase. Reduces overall execution time when analysts have no dependencies on each other.

config["parallel_execution"] = True   # Parallel (default)
config["parallel_execution"] = False  # Sequential

# Enable parallel (default when unset)
export REDHOUND_PARALLEL_EXECUTION=true

# Disable: use sequential execution
export REDHOUND_PARALLEL_EXECUTION=false

Selected Analysts

selected_analysts

Type: List[str] Default: ["technical", "fundamentals", "sentiment", "news"] Environment Variable: REDHOUND_SELECTED_ANALYSTS (comma-separated)

Which analysts to run in the analysis workflow.

Valid Values:

  • technical: Technical analysis (price patterns, indicators) — rule-based, no LLM
  • fundamentals: Fundamental analysis (financials, ratios, DCF) — deterministic, no LLM
  • sentiment: Sentiment analysis (DeBERTa-v3 or LLM-based)
  • news: News analysis (headlines, macroeconomic events)
  • market_context: Market-wide regime detection (VIX, breadth) — deterministic, ticker-agnostic

# Default (all standard analysts)
config["selected_analysts"] = ["technical", "fundamentals", "sentiment", "news"]

# All analysts including market context
config["selected_analysts"] = ["technical", "fundamentals", "sentiment", "news", "market_context"]

# Technical and fundamentals only
config["selected_analysts"] = ["technical", "fundamentals"]

# Environment variable
export REDHOUND_SELECTED_ANALYSTS="technical,fundamentals"
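Since REDHOUND_SELECTED_ANALYSTS is comma-separated, the parsing step works along these lines. A minimal sketch — `parse_analysts` is a hypothetical helper, not the actual implementation:

```python
VALID_ANALYSTS = {"technical", "fundamentals", "sentiment", "news", "market_context"}

def parse_analysts(raw: str) -> list[str]:
    """Split a comma-separated analyst list and reject unknown names."""
    names = [part.strip() for part in raw.split(",") if part.strip()]
    unknown = set(names) - VALID_ANALYSTS
    if unknown:
        raise ValueError(f"Unknown analysts: {sorted(unknown)}")
    return names

print(parse_analysts("technical, fundamentals"))  # ['technical', 'fundamentals']
```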

LLM Configuration

Provider Settings

llm_provider

Type: str Default: "openai" Environment Variable: REDHOUND_LLM_PROVIDER

LLM provider to use for agent reasoning.

Valid Values:

  • openai: OpenAI (GPT-4, GPT-5, o-series)
  • anthropic: Anthropic (Claude models)
  • google: Google (Gemini models)
  • xai: xAI (Grok models)
  • ollama: Ollama (local models)
  • openrouter: OpenRouter (multiple providers)

config["llm_provider"] = "openai"
config["llm_provider"] = "anthropic"
config["llm_provider"] = "google"
config["llm_provider"] = "xai"

deep_think_llm

Type: str Default: "o4-mini" Environment Variable: REDHOUND_DEEP_THINK_LLM

LLM model used for deep thinking agents (researchers, managers). Used for complex reasoning tasks requiring thorough analysis.

config["deep_think_llm"] = "o4-mini"  # Default
config["deep_think_llm"] = "gpt-4"    # Higher quality
config["deep_think_llm"] = "claude-3-opus-20240229"  # Alternative

quick_think_llm

Type: str Default: "gpt-4o-mini" Environment Variable: REDHOUND_QUICK_THINK_LLM

LLM model used for quick thinking agents (analysts). Used for faster, less complex analysis tasks.

config["quick_think_llm"] = "gpt-4o-mini"  # Default
config["quick_think_llm"] = "gpt-3.5-turbo"  # Faster alternative
config["quick_think_llm"] = "claude-3-haiku-20240307"  # Alternative

backend_url

Type: str Default: "https://api.openai.com/v1" Environment Variable: REDHOUND_BACKEND_URL

Base URL for the LLM API backend. Used for custom API endpoints or proxy configurations.

config["backend_url"] = "https://api.openai.com/v1"  # Default OpenAI
config["backend_url"] = "https://api.anthropic.com/v1"  # Anthropic
config["backend_url"] = "https://api.x.ai/v1"  # xAI
config["backend_url"] = "https://openrouter.ai/api/v1"  # OpenRouter

openai_reasoning_effort

Type: str Default: "medium" Environment Variable: REDHOUND_OPENAI_REASONING_EFFORT

Reasoning effort for OpenAI reasoning models (e.g. o-series, GPT-4.1). Values: low, medium, high.

google_thinking_level

Type: str Default: "medium" Environment Variable: REDHOUND_GOOGLE_THINKING_LEVEL

Thinking level for Google Gemini models that support extended thinking. Values: minimal, low, medium, high.

API Keys

API keys are configured via environment variables only (never in code or configuration files).

# OpenAI
export OPENAI_API_KEY=sk-...

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# Google
export GOOGLE_API_KEY=...

# OpenRouter
export OPENROUTER_API_KEY=sk-or-...

# xAI
export XAI_API_KEY=...

Data Vendor Configuration

Vendor Selection

stock_data_vendor

Type: str Default: "fmp" Environment Variable: REDHOUND_STOCK_DATA_VENDOR

Data vendor for stock price data.

Valid Values:

  • fmp: Financial Modeling Prep (requires FMP_API_KEY)
  • alpha_vantage: Alpha Vantage (requires API key)
  • local: Local CSV files

config["stock_data_vendor"] = "fmp"

technical_indicators_vendor

Type: str Default: "fmp" Environment Variable: REDHOUND_TECHNICAL_INDICATORS_VENDOR

Data vendor for technical indicators (MACD, RSI, moving averages).

Valid Values:

  • fmp: Financial Modeling Prep (requires FMP_API_KEY)
  • alpha_vantage: Alpha Vantage (requires API key)
  • local: Local calculations

config["technical_indicators_vendor"] = "fmp"

fundamental_data_vendor

Type: str Default: "local" Environment Variable: REDHOUND_FUNDAMENTAL_DATA_VENDOR

Data vendor for fundamental data used by data tools and other consumers. The Fundamentals Analyst agent does not use this setting; it uses fundamental_analysis.data_source (default fmp). See Fundamental Analysis Configuration for the analyst.

Valid Values:

  • local: Local data files (SimFin, free, no API costs)
  • alpha_vantage: Alpha Vantage (requires API key, paid tier needed for >25 calls/day)

config["fundamental_data_vendor"] = "local"  # Default (free)
config["fundamental_data_vendor"] = "alpha_vantage"  # Requires paid tier for high volume

news_data_vendor

Type: str Default: "local" Environment Variable: REDHOUND_NEWS_DATA_VENDOR

Data vendor for news articles and headlines. The "local" vendor combines Finnhub (local files), Reddit (local files), and Google News (live scraping) for zero-cost news aggregation.

Valid Values:

  • local: Local news files + Google News scraping (free, no API costs, default)
  • google: Google News only (free, no API key)
  • alpha_vantage: Alpha Vantage (requires API key, paid tier needed for >25 calls/day)
  • openai: OpenAI (web search via tools)

config["news_data_vendor"] = "local"  # Default (free: Finnhub + Reddit + Google News)
config["news_data_vendor"] = "google"  # Google News only (free)
config["news_data_vendor"] = "alpha_vantage"  # Requires paid tier for high volume

Market Context Analysis Configuration

market_context

Configuration for deterministic market-wide regime detection and risk environment analysis. This analyst is ticker-agnostic and provides critical context for risk management decisions.

config["market_context"] = {
    "enabled": True,  # Enable market context analyst
    "lookback_days": 60,  # Days of historical data for analysis
    "cache_enabled": True,  # Enable Redis caching
    "cache_ttl_seconds": 86400,  # Cache TTL (24 hours / 1 day)
    # Data source symbols
    "vix_symbol": "^VIX",  # Volatility index
    "market_index": "SPY",  # Market benchmark
    "bond_etf": "TLT",  # Treasury bonds (flight-to-safety)
    "gold_etf": "GLD",  # Gold (alternative safe haven)
    "cyclical_etf": "XLY",  # Consumer discretionary (cyclical)
    "defensive_etf": "XLP",  # Consumer staples (defensive)
    # VIX thresholds for regime classification
    "vix_thresholds": {
        "low": 15,  # VIX < 15: Low volatility
        "normal": 25,  # 15 <= VIX < 25: Normal volatility
        "elevated": 35,  # 25 <= VIX < 35: Elevated volatility
        # VIX >= 35: Crisis volatility
    },
}
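The VIX thresholds partition volatility into the four regimes noted in the comments above. A minimal classifier sketch — the function name and regime labels are assumptions, not the actual implementation:

```python
def classify_vix_regime(vix: float, thresholds: dict) -> str:
    """Map a VIX level onto the four regimes defined by the configured thresholds."""
    if vix < thresholds["low"]:
        return "low"           # VIX < 15
    if vix < thresholds["normal"]:
        return "normal"        # 15 <= VIX < 25
    if vix < thresholds["elevated"]:
        return "elevated"      # 25 <= VIX < 35
    return "crisis"            # VIX >= 35

thresholds = {"low": 15, "normal": 25, "elevated": 35}
print(classify_vix_regime(12.3, thresholds))  # low
print(classify_vix_regime(41.0, thresholds))  # crisis
```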

Environment Variables:

# Enable market context analyst (default: true)
export REDHOUND_MARKET_CONTEXT_ENABLED=true

# Lookback period for market data (default: 60 days)
export REDHOUND_MARKET_CONTEXT_LOOKBACK_DAYS=60

# Enable caching (default: true)
export REDHOUND_MARKET_CONTEXT_CACHE_ENABLED=true

# Cache TTL in seconds (default: 86400 = 24 hours)
export REDHOUND_MARKET_CONTEXT_CACHE_TTL=86400

# Data source symbols (defaults shown)
export REDHOUND_MARKET_CONTEXT_VIX_SYMBOL="^VIX"
export REDHOUND_MARKET_CONTEXT_INDEX="SPY"
export REDHOUND_MARKET_CONTEXT_BOND_ETF="TLT"
export REDHOUND_MARKET_CONTEXT_GOLD_ETF="GLD"
export REDHOUND_MARKET_CONTEXT_CYCLICAL_ETF="XLY"
export REDHOUND_MARKET_CONTEXT_DEFENSIVE_ETF="XLP"

# VIX thresholds for regime classification
export REDHOUND_MARKET_CONTEXT_VIX_LOW=15
export REDHOUND_MARKET_CONTEXT_VIX_NORMAL=25
export REDHOUND_MARKET_CONTEXT_VIX_ELEVATED=35

Features:

  • Volatility Regime Classification: VIX-based (Low/Normal/Elevated/Crisis)
  • Market Trend Detection: Bull/Bear/Sideways based on SMA50/SMA200
  • Sector Rotation Analysis: Cyclical vs defensive strength (XLY/XLP ratio)
  • Cross-Asset Correlation: SPY vs TLT vs GLD (20-day rolling)
  • Risk Environment Detection: Risk-On vs Risk-Off with confidence score
  • Zero Cost: Deterministic only, no AI tokens
  • Performance: < 5 seconds execution time
  • Caching: 1-day TTL, reusable across all tickers
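The trend-detection feature compares the benchmark against its SMA50/SMA200. The exact rules are internal; one common formulation (an assumption, shown for illustration only) is:

```python
def classify_trend(price: float, sma50: float, sma200: float) -> str:
    """Classify the market trend from the benchmark's moving averages.
    Bull when price trades above both averages (SMA50 above SMA200),
    bear in the mirror case, sideways otherwise."""
    if price > sma50 > sma200:
        return "bull"
    if price < sma50 < sma200:
        return "bear"
    return "sideways"

print(classify_trend(480.0, 460.0, 440.0))  # bull
print(classify_trend(420.0, 460.0, 480.0))  # bear
```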

Integration:

The Market Context Analyst provides market_context_report (markdown) and market_context_metrics (dict) to the Risk Overlay, which uses this information to adjust position sizing and risk tolerance based on overall market conditions.

To enable in the graph, add "market_context" to selected_analysts:

selected_analysts = ["technical", "sentiment", "news", "fundamentals", "market_context"]

Technical Analysis Configuration

technical_analysis

Configuration for rule-based technical analyst using pandas-ta.

config["technical_analysis"] = {
    "lookback_days": 252,  # Trading days to analyze (1 year default)
    "enable_numba_jit": False,  # Enable Numba JIT for performance
    "indicators": {
        # Trend indicators
        "sma_periods": [50, 200],
        "ema_periods": [10, 20, 50],
        "macd_params": {"fast": 12, "slow": 26, "signal": 9},
        # Momentum indicators
        "rsi_period": 14,
        "stoch_params": {"k": 14, "d": 3},
        "willr_period": 14,
        # Volatility indicators
        "bb_params": {"period": 20, "std": 2},
        "atr_period": 14,
        # Volume indicators
        "vwma_period": 20,
        "volume_sma_period": 20,
    },
    "thresholds": {
        "rsi_overbought": 70,
        "rsi_oversold": 30,
        "stoch_overbought": 80,
        "stoch_oversold": 20,
    },
}
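The thresholds block drives signal classification for the momentum indicators. A sketch of how the RSI thresholds might be applied — `momentum_signal` is a hypothetical helper, not the analyst's actual code:

```python
def momentum_signal(rsi: float, thresholds: dict) -> str:
    """Classify an RSI reading against the configured overbought/oversold levels."""
    if rsi >= thresholds["rsi_overbought"]:
        return "overbought"
    if rsi <= thresholds["rsi_oversold"]:
        return "oversold"
    return "neutral"

thresholds = {"rsi_overbought": 70, "rsi_oversold": 30}
print(momentum_signal(75, thresholds))  # overbought
print(momentum_signal(25, thresholds))  # oversold
```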

Environment Variables:

# Lookback period (trading days)
export REDHOUND_TA_LOOKBACK_DAYS=252

# Enable Numba JIT compilation for performance
export REDHOUND_TA_ENABLE_NUMBA=false

# Batch Indicator Calculation (Issue #41)
# 5-10x performance improvement for large datasets
export REDHOUND_TA_BATCH_ENABLED=true
export REDHOUND_TA_BATCH_CACHE=true
export REDHOUND_TA_BATCH_NUMBA=false

batch_calculation

Configuration for batch processing of technical indicators.

config["technical_analysis"]["batch_calculation"] = {
    "enabled": True,        # Enable batch calculation (5-10x faster)
    "cache_results": True,  # Cache batch results
    "use_numba": False,     # Use Numba JIT for pandas-ta
}

Candlestick Pattern Recognition (TA-Lib)

Configuration for candlestick pattern recognition using TA-Lib.

config["technical_analysis"]["candlestick_patterns"] = {
    "enabled": True,   # Enabled by default (auto-skipped if lib missing)
    "patterns": [
        "CDLDOJI", "CDLHAMMER", "CDLENGULFING", "CDLHARAMI",
        "CDLMORNINGSTAR", "CDLEVENINGSTAR", "CDLSHOOTINGSTAR",
        "CDLTHREEWHITESOLDIERS", "CDLTHREEBLACKCROWS",
        "CDLSPINNINGTOP", "CDLHANGINGMAN", "CDLINVERTEDHAMMER"
    ],
    "min_pattern_strength": 0,  # -100 to 100 (TA-Lib strength values)
    "lookback_days": 5,         # Days to check for patterns
    "cache_enabled": True,      # Cache pattern results for performance
}
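TA-Lib reports each CDL* pattern as an integer strength in [-100, +100], with 0 meaning no pattern. Filtering detections against min_pattern_strength can be sketched as follows — `filter_patterns` is a hypothetical helper; the real detector obtains the strengths from TA-Lib's CDL* functions:

```python
def filter_patterns(detections: dict[str, int], min_strength: int) -> dict[str, int]:
    """Keep patterns whose absolute TA-Lib strength meets the configured minimum."""
    return {
        name: strength
        for name, strength in detections.items()
        if strength != 0 and abs(strength) >= min_strength
    }

# Example TA-Lib-style output: +100 bullish, -100 bearish, 0 no pattern
detections = {"CDLDOJI": 100, "CDLHAMMER": 0, "CDLENGULFING": -100}
print(filter_patterns(detections, min_strength=50))
# {'CDLDOJI': 100, 'CDLENGULFING': -100}
```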

Environment Variables:

# Enable candlestick pattern recognition (requires TA-Lib)
export REDHOUND_TA_PATTERNS_ENABLED=true

# List of patterns to detect (comma-separated)
export REDHOUND_TA_PATTERNS_LIST="CDLDOJI,CDLHAMMER,CDLENGULFING,CDLHARAMI"

# Minimum pattern strength (-100 to 100)
# +100 = strong bullish, -100 = strong bearish, 0 = no pattern
export REDHOUND_TA_PATTERN_MIN_STRENGTH=0

# Number of recent days to scan for patterns
export REDHOUND_TA_PATTERN_LOOKBACK=5

# Enable caching of pattern detection results
export REDHOUND_TA_PATTERNS_CACHE=true

Optional Feature

Candlestick pattern recognition is an optional feature that requires the TA-Lib C library and Python wrapper to be installed. See the Developer Onboarding guide for installation instructions.

Fundamental Analysis Configuration

The Fundamentals Analyst is deterministic (no LLM) and uses its own fundamental_analysis config. It does not use fundamental_data_vendor; that setting applies to other consumers (e.g. data tools).

fundamental_analysis

Configuration for the deterministic fundamental analyst (Piotroski F-Score, Altman Z-Score, 50+ ratios, DCF valuation, growth analysis).

config["fundamental_analysis"] = {
    "enabled": True,           # Use deterministic fundamental analyst
    "data_source": "fmp",  # fmp (requires FMP_API_KEY)
    "cache_enabled": True,
    "cache_ttl_statements": 86400,   # 24 hours (seconds)
    "cache_ttl_profile": 604800,     # 7 days (seconds)
    "include_sector_comparison": False,  # Future feature
}
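To illustrate the kind of scoring the deterministic analyst performs, here is a sketch of four of the nine standard Piotroski F-Score criteria. The field names and this helper are assumptions for illustration; the actual implementation covers all nine criteria from FMP statement data:

```python
def piotroski_partial(curr: dict, prev: dict) -> int:
    """Illustrative subset of Piotroski F-Score checks (4 of 9 criteria).
    Each satisfied criterion adds one point."""
    score = 0
    score += curr["net_income"] > 0                    # positive profitability
    score += curr["cfo"] > 0                           # positive operating cash flow
    score += curr["cfo"] > curr["net_income"]          # cash flow exceeds earnings (accruals)
    score += (curr["net_income"] / curr["assets"]
              > prev["net_income"] / prev["assets"])   # return on assets improving
    return score

curr = {"net_income": 12.0, "cfo": 15.0, "assets": 120.0}
prev = {"net_income": 5.0, "assets": 110.0}
print(piotroski_partial(curr, prev))  # 4
```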

Environment Variables:

# Enable deterministic fundamental analyst (default: true)
export REDHOUND_FUNDAMENTAL_ENABLED=true

# Data source: fmp (default, requires FMP_API_KEY)
export REDHOUND_FUNDAMENTAL_DATA_SOURCE=fmp

# Enable caching of financial statements and company profile
export REDHOUND_FUNDAMENTAL_CACHE_ENABLED=true

# Cache TTL for financial statements (seconds, default 86400 = 24h)
export REDHOUND_FUNDAMENTAL_CACHE_TTL_STATEMENTS=86400

# Cache TTL for company profile (seconds, default 604800 = 7 days)
export REDHOUND_FUNDAMENTAL_CACHE_TTL_PROFILE=604800

# Optional: sector comparison (future feature)
export REDHOUND_FUNDAMENTAL_SECTOR_COMPARISON=false

Set FMP_API_KEY in the environment to use the fundamental analyst.

Sentiment Analysis Configuration

The Sentiment Analyst supports both deterministic (DeBERTa-v3) and LLM-based modes. The deterministic mode provides zero-cost, reproducible sentiment analysis using state-of-the-art transformer models fine-tuned on financial news.

sentiment_analysis

Configuration for sentiment analysis mode and model selection.

config["sentiment_analysis"] = {
    "use_deterministic_analyst": True,  # True: DeBERTa-v3 (zero cost), False: LLM-based
    "model_id": "mrm8488/deberta-v3-ft-financial-news-sentiment-analysis",  # Primary model (99.4% accuracy)
    "fallback_model_id": "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",  # Fallback (98.2%)
    "lookback_days": 7,  # Number of days to analyze
    "cache_enabled": True,  # Cache sentiment results
    "score_summaries": True,  # Score article summaries in addition to headlines
    "headline_weight": 0.6,  # Weight for headline when both headline and summary scored
    "summary_weight": 0.4,  # Weight for summary when both scored
    "relevance_threshold": 0.3,  # Min relevance_score to include (filter low-relevance)
    "similarity_threshold": 0.85,  # NewsClusterer threshold for deduplication (0–1)
    "summary_max_chars": 2000,  # Max chars for summary before head+tail truncation (≈512 tokens)
}

Environment Variables:

# Enable deterministic sentiment analyst (default: true)
export REDHOUND_SENTIMENT_DETERMINISTIC=true

# Primary model for sentiment analysis
export REDHOUND_SENTIMENT_MODEL_ID=mrm8488/deberta-v3-ft-financial-news-sentiment-analysis

# Fallback model if primary fails to load
export REDHOUND_SENTIMENT_FALLBACK_MODEL_ID=mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis

# Days to analyze (default: 7)
export REDHOUND_SENTIMENT_LOOKBACK_DAYS=7

# Enable caching (default: true)
export REDHOUND_SENTIMENT_CACHE_ENABLED=true

# Score article summaries in addition to headlines (default: true)
export REDHOUND_SENTIMENT_SCORE_SUMMARIES=true

# Headline weight when both headline and summary scored (default: 0.6)
export REDHOUND_SENTIMENT_HEADLINE_WEIGHT=0.6

# Summary weight when both scored (default: 0.4)
export REDHOUND_SENTIMENT_SUMMARY_WEIGHT=0.4

# Min relevance_score to include (default: 0.3)
export REDHOUND_SENTIMENT_RELEVANCE_THRESHOLD=0.3

# NewsClusterer similarity threshold for deduplication (default: 0.85)
export REDHOUND_SENTIMENT_SIMILARITY_THRESHOLD=0.85

# Max chars for summary before head+tail truncation (default: 2000)
export REDHOUND_SENTIMENT_SUMMARY_MAX_CHARS=2000

Model Information:

  • Primary Model: DeBERTa-v3 fine-tuned on financial news (99.4% accuracy)
  • Fallback Model: DistilRoBERTa fine-tuned on financial news (98.2% accuracy)
  • Inference: CPU-friendly, no GPU required
  • Cost: Zero LLM token costs in deterministic mode
  • Performance: <3 seconds for typical 7-day analysis

Mode Comparison:

Feature           | Deterministic (DeBERTa-v3)      | LLM-Based
------------------|---------------------------------|---------------------
Cost per Analysis | $0.00                           | $0.10 - $0.50
Execution Time    | <3 seconds                      | 10-30 seconds
Accuracy          | 99.4% (deterministic)           | 70-80% (variable)
Reproducibility   | 100% (same input = same output) | Variable (stochastic)
Sentiment Score   | Explicit (-1 to +1)             | Implicit (text only)

News Analysis Configuration

The News Analyst supports both deterministic (Step 3A: event detection, clustering, earnings) and LLM-based modes. The deterministic mode provides zero-cost, reproducible news analysis using regex event detection, TF-IDF clustering for deduplication, and vendor earnings surprise.

news_analysis

Configuration for news analysis mode and deterministic pipeline options.

config["news_analysis"] = {
    "use_deterministic_analyst": True,  # True: event detector + clusterer + earnings + economics (zero cost), False: LLM-based
    "event_categories": ["earnings", "m&a", "product", "legal", "leadership", "partnership"],  # Event detector categories
    "similarity_threshold": 0.85,  # TF-IDF clustering threshold (0–1)
    "lookback_days": 7,  # Number of days of news to analyze
    "cache_enabled": True,  # Cache earnings data (24h TTL)
    "max_top_articles": 5,  # Max articles for LLM context (top relevant + event-tagged)
    "relevance_threshold": 0.3,  # Min relevance_score to include (filter low-relevance)
    "top_article_summary_max_chars": 200,  # Max chars for summary in top articles (head+tail truncation)
    # Economic Calendar (Step 3B)
    "fred_enabled": True,  # Enable FRED economic calendar integration
    "fred_api_key": "your_fred_api_key",  # FRED API key (free from fred.stlouisfed.org)
    "economic_lookback_days": 7,  # Recent economic releases lookback
    "economic_lookahead_days": 14,  # Upcoming economic releases lookahead
    "fred_cache_ttl": 86400,  # FRED data cache TTL (24 hours)
}
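The event detector tags headlines with regex patterns per category. The patterns below are simplified stand-ins for illustration; the real detector's patterns are richer and cover all six configured categories:

```python
import re

# Illustrative patterns only — not the detector's actual regexes
EVENT_PATTERNS = {
    "earnings": re.compile(r"\b(earnings|EPS|quarterly results)\b", re.I),
    "m&a": re.compile(r"\b(acquisition|merger|acquires?|takeover)\b", re.I),
    "legal": re.compile(r"\b(lawsuit|settlement|probe|investigation)\b", re.I),
}

def detect_events(headline: str) -> list[str]:
    """Return the event categories whose pattern matches the headline."""
    return [cat for cat, pat in EVENT_PATTERNS.items() if pat.search(headline)]

print(detect_events("Acme beats earnings estimates, announces acquisition of Beta"))
# ['earnings', 'm&a']
```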

Environment Variables:

# Enable deterministic news analyst (default: true)
export REDHOUND_NEWS_DETERMINISTIC=true

# TF-IDF similarity threshold for clustering duplicate news (default: 0.85)
export REDHOUND_NEWS_SIMILARITY_THRESHOLD=0.85

# Days of news to analyze (default: 7)
export REDHOUND_NEWS_LOOKBACK_DAYS=7

# Enable earnings cache (default: true)
export REDHOUND_NEWS_CACHE_ENABLED=true

# Top articles for LLM context (default: 5)
export REDHOUND_NEWS_MAX_TOP_ARTICLES=5

# Min relevance_score to include in top articles (default: 0.3)
export REDHOUND_NEWS_RELEVANCE_THRESHOLD=0.3

# Max chars for summary in top articles (default: 200)
export REDHOUND_NEWS_TOP_ARTICLE_SUMMARY_MAX_CHARS=200

# Economic Calendar (Step 3B) - FRED API
export FRED_API_KEY=your_fred_api_key_here  # Free from https://fred.stlouisfed.org/docs/api/api_key.html
export REDHOUND_FRED_ENABLED=true
export REDHOUND_ECONOMIC_LOOKBACK_DAYS=7
export REDHOUND_ECONOMIC_LOOKAHEAD_DAYS=14
export REDHOUND_FRED_CACHE_TTL=86400

Deterministic pipeline (Step 3A + 3B):

  • Event detector: Regex patterns for 6 categories (earnings, M&A, product, legal, leadership, partnership)
  • News clusterer: TF-IDF + cosine similarity for deduplication
  • Earnings: Vendor earnings dates and surprise calculation, cached 24h
  • Economic calendar (Step 3B): FRED API integration for 8 key indicators (CPI, GDP, Unemployment, PMI, Retail Sales, Housing Starts, Fed Funds, Initial Claims)
  • Impact scoring: High/medium/low classification based on change thresholds
  • Report: Fixed markdown (executive summary, macroeconomic context, economic calendar, events timeline, earnings analysis, news timeline, event table)
  • Performance: <3 seconds, zero LLM cost; LLM path available when use_deterministic_analyst=False
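The earnings step compares reported results to the consensus estimate. A sketch of the conventional surprise formula — the vendor's exact calculation is an assumption:

```python
def earnings_surprise_pct(actual_eps: float, estimate_eps: float) -> float:
    """Earnings surprise as a percentage of the consensus estimate."""
    if estimate_eps == 0:
        raise ValueError("Consensus estimate of zero; surprise undefined")
    return (actual_eps - estimate_eps) / abs(estimate_eps) * 100

print(earnings_surprise_pct(1.10, 1.00))  # ~10.0 (a 10% beat)
```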

Data Vendor API Keys

# Alpha Vantage (optional, only needed if overriding defaults to use alpha_vantage vendor)
# Free tier: 25 requests/day, 1 request/second; paid tier ($49.99+/month) required for >25 calls/day.
# Default vendors are now "local" (free) for news and fundamental data.
export ALPHA_VANTAGE_API_KEY=...

# Reddit (optional, only needed if scraping Reddit data; local vendor reads from pre-scraped files)
# Note: Reddit API free tier: 100 requests/minute with OAuth; paid: $0.24 per 1,000 requests
export REDDIT_CLIENT_ID=...
export REDDIT_CLIENT_SECRET=...
export REDDIT_USER_AGENT=...

Directory Configuration

Data Directories

data_cache_dir

Type: str Default: "backend/data/data_cache" Environment Variable: REDHOUND_DATA_CACHE_DIR

Directory for caching market data (CSV files, API responses).

config["data_cache_dir"] = "backend/data/data_cache"

results_dir

Type: str Default: "data/results" Environment Variable: REDHOUND_RESULTS_DIR

Directory for saving analysis reports and trading decisions.

config["results_dir"] = "data/results"

Reports are saved to: {results_dir}/{ticker}/{date}/reports/
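The report path pattern above can be composed with pathlib. `reports_dir` is a hypothetical helper shown only to make the layout concrete:

```python
from pathlib import Path

def reports_dir(results_dir: str, ticker: str, date: str) -> Path:
    """Build the {results_dir}/{ticker}/{date}/reports/ path described above."""
    return Path(results_dir) / ticker / date / "reports"

print(reports_dir("data/results", "AAPL", "2024-06-30"))
# data/results/AAPL/2024-06-30/reports (POSIX form)
```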

data_dir

Type: str Default: "data" Environment Variable: REDHOUND_DATA_DIR

Base data directory for all data-related files.

config["data_dir"] = "data"

Mock Mode Configuration

Mock Mode Settings

mock_mode

Type: bool Default: False Environment Variable: REDHOUND_MOCK_MODE

Enable mock mode for cost-free development and testing. When enabled, all LLM calls and embeddings are replaced with predefined responses. The CLI --mock flag overrides this setting when present.

config["mock_mode"] = True  # Enable mock mode
config["mock_mode"] = False  # Use real APIs

# CLI (overrides env for that run)
redhound --mock
redhound analyze --mock

# Environment variable
export REDHOUND_MOCK_MODE=true

mock_llm_responses_file

Type: Optional[str] Default: None Environment Variable: REDHOUND_MOCK_RESPONSES

Path to custom mock responses JSON file. If not specified, uses default responses.

config["mock_llm_responses_file"] = "/path/to/custom-responses.json"

# Environment variable
export REDHOUND_MOCK_RESPONSES=/path/to/custom-responses.json

mock_memory_preloaded

Type: bool Default: True Environment Variable: REDHOUND_MOCK_MEMORY_PRELOADED

Whether to preload example memories in mock mode.

config["mock_memory_preloaded"] = True  # Include example scenarios
config["mock_memory_preloaded"] = False  # Empty memory

See Mock Mode documentation for complete details.

Logging Configuration

Log Settings

REDHOUND_LOG_LEVEL

Type: str Default: "INFO" Environment Variable: REDHOUND_LOG_LEVEL

Logging level for application logs.

Valid Values:

  • DEBUG: Detailed debugging information
  • INFO: General informational messages
  • WARNING: Warning messages
  • ERROR: Error messages
  • CRITICAL: Critical errors

export REDHOUND_LOG_LEVEL=DEBUG
export REDHOUND_LOG_LEVEL=INFO
export REDHOUND_LOG_LEVEL=WARNING

REDHOUND_LOG_FORMAT

Type: str Default: "json" Environment Variable: REDHOUND_LOG_FORMAT

Log output format.

Valid Values:

  • json: Machine-readable JSON format (production)
  • human-readable: Colored console format (development)

export REDHOUND_LOG_FORMAT=json
export REDHOUND_LOG_FORMAT=human-readable

REDHOUND_LOG_CONSOLE

Type: bool Default: True Environment Variable: REDHOUND_LOG_CONSOLE

Enable console logging output.

export REDHOUND_LOG_CONSOLE=true
export REDHOUND_LOG_CONSOLE=false

REDHOUND_LOG_FILE_ENABLED

Type: bool Default: False Environment Variable: REDHOUND_LOG_FILE_ENABLED

Enable file logging output.

export REDHOUND_LOG_FILE_ENABLED=true
export REDHOUND_LOG_FILE_ENABLED=false

REDHOUND_LOG_FILE

Type: str Default: "redhound/logs/redhound.log" Environment Variable: REDHOUND_LOG_FILE

Path to log file (when file logging is enabled).

export REDHOUND_LOG_FILE=redhound/logs/redhound.log

REDHOUND_LOG_MAX_BYTES

Type: int Default: 10485760 (10MB) Environment Variable: REDHOUND_LOG_MAX_BYTES

Maximum log file size before rotation (size-based rotation).

export REDHOUND_LOG_MAX_BYTES=10485760  # 10MB

REDHOUND_LOG_BACKUP_COUNT

Type: int Default: 5 Environment Variable: REDHOUND_LOG_BACKUP_COUNT

Number of rotated log files to keep.

export REDHOUND_LOG_BACKUP_COUNT=5

REDHOUND_LOG_RETENTION_DAYS

Type: int Default: 30 Environment Variable: REDHOUND_LOG_RETENTION_DAYS

Number of days to retain rotated log files before deletion.

export REDHOUND_LOG_RETENTION_DAYS=30

REDHOUND_LOG_ROTATION

Type: str Default: "size" Environment Variable: REDHOUND_LOG_ROTATION

Log rotation strategy.

Valid Values:

  • size: Rotate when file reaches max size
  • time: Rotate daily

export REDHOUND_LOG_ROTATION=size
export REDHOUND_LOG_ROTATION=time

Metrics Configuration

Metrics Settings

REDHOUND_METRICS_ENABLED

Type: bool Default: False Environment Variable: REDHOUND_METRICS_ENABLED

Enable Prometheus metrics exposition.

export REDHOUND_METRICS_ENABLED=true
export REDHOUND_METRICS_ENABLED=false

REDHOUND_METRICS_PATH

Type: str Default: "/metrics" Environment Variable: REDHOUND_METRICS_PATH

HTTP path for metrics endpoint.

export REDHOUND_METRICS_PATH=/metrics

REDHOUND_METRICS_PORT

Type: int Default: 8000 Environment Variable: REDHOUND_METRICS_PORT

Port for metrics endpoint.

export REDHOUND_METRICS_PORT=8000

REDHOUND_METRICS_SAMPLING_RATE

Type: float Default: 1.0 Environment Variable: REDHOUND_METRICS_SAMPLING_RATE

Sampling rate for metrics collection (0.0 to 1.0). Use lower values to reduce overhead.

export REDHOUND_METRICS_SAMPLING_RATE=1.0  # 100% sampling
export REDHOUND_METRICS_SAMPLING_RATE=0.1  # 10% sampling

REDHOUND_METRICS_LABELS

Type: str Default: "" Environment Variable: REDHOUND_METRICS_LABELS

Constant labels to add to all metrics (comma-separated key=value pairs).

export REDHOUND_METRICS_LABELS="environment=production,region=us-east-1"
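Parsing the comma-separated key=value pairs can be sketched as follows — `parse_labels` is a hypothetical helper, not the actual settings code:

```python
def parse_labels(raw: str) -> dict[str, str]:
    """Parse comma-separated key=value pairs into a labels dict."""
    labels = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate empty segments and trailing commas
        key, _, value = pair.partition("=")
        labels[key.strip()] = value.strip()
    return labels

print(parse_labels("environment=production,region=us-east-1"))
# {'environment': 'production', 'region': 'us-east-1'}
```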

Health Check Configuration

Health Check Settings

REDHOUND_HEALTHCHECK_ENABLED

Type: bool Default: True Environment Variable: REDHOUND_HEALTHCHECK_ENABLED

Enable health check endpoint.

export REDHOUND_HEALTHCHECK_ENABLED=true
export REDHOUND_HEALTHCHECK_ENABLED=false

REDHOUND_HEALTHCHECK_PATH

Type: str Default: "/health" Environment Variable: REDHOUND_HEALTHCHECK_PATH

HTTP path for health check endpoint.

export REDHOUND_HEALTHCHECK_PATH=/health

REDHOUND_HEALTHCHECK_CACHE_TTL_SECONDS

Type: float Default: 0.0 Environment Variable: REDHOUND_HEALTHCHECK_CACHE_TTL_SECONDS

Cache TTL for health check results in seconds. Set to 0 to disable caching.

export REDHOUND_HEALTHCHECK_CACHE_TTL_SECONDS=5.0  # 5 seconds
export REDHOUND_HEALTHCHECK_CACHE_TTL_SECONDS=0.0  # No caching

REDHOUND_HEALTHCHECK_TIMEOUT_DEFAULT

Type: float Default: 1.0 Environment Variable: REDHOUND_HEALTHCHECK_TIMEOUT_DEFAULT

Default timeout for health checks in seconds.

export REDHOUND_HEALTHCHECK_TIMEOUT_DEFAULT=1.0

REDHOUND_HEALTHCHECK_TIMEOUT_DATABASE

Type: float Default: 1.0 Environment Variable: REDHOUND_HEALTHCHECK_TIMEOUT_DATABASE

Timeout for database health check in seconds.

export REDHOUND_HEALTHCHECK_TIMEOUT_DATABASE=2.0

REDHOUND_HEALTHCHECK_TIMEOUT_REDIS

Type: float Default: 1.0 Environment Variable: REDHOUND_HEALTHCHECK_TIMEOUT_REDIS

Timeout for Redis health check in seconds.

export REDHOUND_HEALTHCHECK_TIMEOUT_REDIS=1.0

REDHOUND_HEALTHCHECK_TIMEOUT_VENDORS

Type: float Default: 1.0 Environment Variable: REDHOUND_HEALTHCHECK_TIMEOUT_VENDORS

Timeout for data vendor health checks in seconds.

export REDHOUND_HEALTHCHECK_TIMEOUT_VENDORS=5.0

REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES

Type: str Default: "database,redis" Environment Variable: REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES

Comma-separated list of required dependencies. Health check returns 503 if any required dependency is unhealthy. Set to empty string to disable required dependencies.

export REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES="database,redis"
export REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES=""  # No required deps
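The required/optional distinction boils down to a simple aggregation rule: any unhealthy required dependency yields 503, while optional dependencies never affect the status. A sketch under that assumption (`overall_status` is a hypothetical helper):

```python
def overall_status(results: dict[str, bool], required: set[str]) -> int:
    """Return HTTP 503 if any required dependency is unhealthy, else 200.
    Dependencies absent from results are treated as unhealthy."""
    for dep in required:
        if not results.get(dep, False):
            return 503
    return 200

results = {"database": True, "redis": False, "prometheus": True}
print(overall_status(results, {"database", "redis"}))  # 503 (redis is required and down)
print(overall_status(results, set()))                  # 200 (no required deps)
```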

REDHOUND_HEALTHCHECK_OPTIONAL_DEPENDENCIES

Type: str Default: "" Environment Variable: REDHOUND_HEALTHCHECK_OPTIONAL_DEPENDENCIES

Comma-separated list of optional dependencies to check (won't affect health status).

export REDHOUND_HEALTHCHECK_OPTIONAL_DEPENDENCIES="prometheus"

REDHOUND_HEALTHCHECK_VENDORS_ENABLED

Type: bool Default: False Environment Variable: REDHOUND_HEALTHCHECK_VENDORS_ENABLED

Enable health checks for data vendors (FMP, Alpha Vantage, etc.).

export REDHOUND_HEALTHCHECK_VENDORS_ENABLED=true
export REDHOUND_HEALTHCHECK_VENDORS_ENABLED=false

Database Configuration

PostgreSQL Settings

# Database connection
export POSTGRES_HOST=localhost
export POSTGRES_PORT=5432
export POSTGRES_DB=redhound
export POSTGRES_USER=redhound
export POSTGRES_PASSWORD=redhound_password

# Connection pool
export POSTGRES_POOL_SIZE=10
export POSTGRES_MAX_OVERFLOW=20
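For illustration, these variables can be assembled into a SQLAlchemy-style connection URL (build_postgres_url is a hypothetical helper; Redhound's DatabaseConfig performs this internally):

```python
import os

def build_postgres_url() -> str:
    """Assemble a PostgreSQL URL from the POSTGRES_* variables above."""
    host = os.getenv("POSTGRES_HOST", "localhost")
    port = os.getenv("POSTGRES_PORT", "5432")
    db = os.getenv("POSTGRES_DB", "redhound")
    user = os.getenv("POSTGRES_USER", "redhound")
    password = os.getenv("POSTGRES_PASSWORD", "")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(build_postgres_url())
```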

Redis Settings

Redis is used for caching stock data, technical indicators, and vendor API responses to reduce API calls by 60-80% and improve system performance.

Connection Settings

# Redis connection
export REDIS_HOST=localhost
export REDIS_PORT=6379
export REDIS_DB=0
export REDIS_PASSWORD=  # Optional (required for authenticated Redis instances)

Cache Configuration

# Enable/disable Redis caching
export REDHOUND_REDIS_CACHE_ENABLED=true  # Default: true

# Connection timeout
export REDHOUND_REDIS_SOCKET_TIMEOUT=1.0  # Default: 1.0 seconds

# TTL configuration (time-to-live in seconds)
export REDHOUND_CACHE_TTL_INTRADAY=3600      # 1 hour for recent data (<1 day old)
export REDHOUND_CACHE_TTL_HISTORICAL=86400   # 24 hours for historical data (1-30 days old)
export REDHOUND_CACHE_TTL_OLD=604800         # 7 days for old data (>30 days old)

# Cache categories (enable/disable specific cache types)
export REDHOUND_CACHE_STOCK_DATA=true         # Cache OHLCV stock data
export REDHOUND_CACHE_INDICATORS=true         # Cache technical indicators
export REDHOUND_CACHE_FUNDAMENTALS=true       # Cache fundamental data
export REDHOUND_CACHE_NEWS=true               # Cache news data
export REDHOUND_CACHE_VENDOR_RESPONSES=true   # Cache vendor API responses

Cache Key Structure

The cache uses structured keys for efficient data organization:

  • Stock Data: stock:{symbol}:{start_date}:{end_date}
    • Example: stock:AAPL:2024-01-01:2024-12-31
  • Stock Profiles: stock_profile:{symbol}
    • Example: stock_profile:AAPL — Used by the stock profile subsystem (StockProfileCache) for cached company profile data. TTL is configurable when constructing StockProfileService (default 3600 seconds) or via RedisConfig.cache_ttl (REDIS_ prefix). See Architecture — Stock Profile Subsystem.
  • Agent Analyses: agent:{agent_type}:{symbol}:latest
    • Example: agent:TECHNICAL:AAPL:latest — Used by AgentCache for the latest analysis per agent type and symbol. The connection uses the same REDIS_* variables; TTL is set when constructing AgentCache (default 1800 seconds). See Agent-Database Integration.
  • Indicators: indicator:{symbol}:{indicator_name}:{date}
    • Example: indicator:AAPL:SMA_50:2024-12-31
  • Vendor Responses: vendor:{vendor_name}:{method}:{params_hash}
    • Example: vendor:alpha_vantage:get_fundamentals:a1b2c3d4

Intelligent TTL Strategy

The cache automatically calculates TTL based on data age:

  • Recent data (<1 day old): 1 hour TTL (default)
    • Intraday data updates frequently, so a shorter cache lifetime is used
  • Historical data (1-30 days old): 24 hour TTL (default)
    • Recent historical data may still receive corrections
  • Old data (>30 days old): 7 day TTL (default)
    • Old data is stable; a longer cache lifetime reduces API load
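This bucketing can be sketched as follows, using the REDHOUND_CACHE_TTL_* defaults from above (a minimal sketch; the actual calculation lives in the cache layer and may differ in detail):

```python
from datetime import date

# Default TTLs, mirroring the REDHOUND_CACHE_TTL_* defaults
TTL_INTRADAY = 3600      # <1 day old
TTL_HISTORICAL = 86400   # 1-30 days old
TTL_OLD = 604800         # >30 days old

def ttl_for(end_date: date, today: date) -> int:
    """Pick a TTL bucket based on how old the newest data point is."""
    age_days = (today - end_date).days
    if age_days < 1:
        return TTL_INTRADAY
    if age_days <= 30:
        return TTL_HISTORICAL
    return TTL_OLD

today = date(2024, 12, 31)
print(ttl_for(date(2024, 12, 31), today))  # 3600
print(ttl_for(date(2024, 12, 1), today))   # 86400
print(ttl_for(date(2024, 6, 1), today))    # 604800
```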

Cache Operations

Automatic Caching: Data is automatically cached when fetched through route_to_vendor():

from backend.data.interface import route_to_vendor

# First call - fetches from vendor and caches result
data = route_to_vendor("get_stock_data", symbol="AAPL", start_date="2024-01-01", end_date="2024-12-31")

# Second call - retrieves from cache (no API call)
data = route_to_vendor("get_stock_data", symbol="AAPL", start_date="2024-01-01", end_date="2024-12-31")

Manual Cache Operations:

from backend.data.cache import get_cache_client

cache = get_cache_client()

# Generate cache key
key = cache.generate_cache_key(
    data_type="stock",
    symbol="AAPL",
    start_date="2024-01-01",
    end_date="2024-12-31"
)

# Get cached data
cached_data = cache.get_dataframe(key)

# Invalidate specific cache entry
cache.invalidate(key)

# Invalidate all cache entries for a symbol
cache.invalidate_pattern("stock:AAPL:*")

Graceful Fallback

The cache automatically falls back to in-memory storage when Redis is unavailable:

  • Application continues working without Redis
  • Cache operations use local dictionary
  • Metrics track cache mode (redis vs local)
  • No TTL enforcement in fallback mode

Performance Impact

With Redis caching enabled:

  • 60-80% reduction in API calls for repeated queries
  • ~90% faster data retrieval for cached entries
  • Cost savings from reduced vendor API usage
  • Improved reliability with cached fallback during API outages

Monitoring Cache Performance

Cache metrics are automatically exported to Prometheus when metrics are enabled:

  • cache_hit_total: Total cache hits by operation and vendor
  • cache_miss_total: Total cache misses by operation and vendor
  • cache_operation_duration_seconds: Cache operation latency

View cache performance in the Grafana dashboard (requires metrics to be enabled).

Docker Configuration

Docker Compose Environment

# Application
export APP_MODULE=redhound.api.app:app
export APP_HOST=0.0.0.0
export APP_PORT=8000
export UVICORN_WORKERS=1
export UVICORN_RELOAD=false

# Grafana
export GRAFANA_PORT=3000
export GF_SECURITY_ADMIN_USER=admin
export GF_SECURITY_ADMIN_PASSWORD=admin  # Change for production!

# Prometheus
export PROMETHEUS_PORT=9090

Configuration Examples

Development Configuration

# .env file for development
REDHOUND_MOCK_MODE=true
REDHOUND_LOG_LEVEL=DEBUG
REDHOUND_LOG_FORMAT=human-readable
REDHOUND_LOG_CONSOLE=true
REDHOUND_LOG_FILE_ENABLED=false
REDHOUND_METRICS_ENABLED=false
REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES=""

Production Configuration

# .env file for production
REDHOUND_MOCK_MODE=false
REDHOUND_LOG_LEVEL=INFO
REDHOUND_LOG_FORMAT=json
REDHOUND_LOG_CONSOLE=true
REDHOUND_LOG_FILE_ENABLED=true
REDHOUND_METRICS_ENABLED=true
REDHOUND_HEALTHCHECK_REQUIRED_DEPENDENCIES="database,redis"

# API Keys (use secrets manager in production)
OPENAI_API_KEY=sk-...
ALPHA_VANTAGE_API_KEY=...

# Database (use managed service in production)
POSTGRES_HOST=prod-db.example.com
POSTGRES_PASSWORD=<secure-password>

# Grafana (secure credentials)
GF_SECURITY_ADMIN_PASSWORD=<secure-password>

Testing Configuration

# .env.test file for testing
REDHOUND_MOCK_MODE=true
REDHOUND_LOG_LEVEL=WARNING
REDHOUND_LOG_CONSOLE=false
REDHOUND_METRICS_ENABLED=false
REDHOUND_HEALTHCHECK_ENABLED=false

Configuration Validation

Automatic Validation

The new configuration system provides automatic validation using Pydantic:

from backend.config import Config, validate_config
from pydantic import ValidationError

# Configuration is validated on creation
try:
    config = Config(
        api={"llm_provider": "invalid"}  # Invalid provider
    )
except ValidationError as e:
    print(f"Configuration error: {e}")

# Comprehensive startup validation
config = Config.from_env()
try:
    validate_config()  # Validates global config
    # Or: config.validate_on_startup()  # Validates specific instance
except ValueError as e:
    print(f"Validation failed: {e}")

What Gets Validated

The validation system checks:

  1. Type Correctness: All fields match their declared types
  2. Value Constraints: Numeric ranges, string patterns, enum values
  3. Required Fields: API keys for configured providers (unless mock mode)
  4. Directory Access: Required directories exist and are writable
  5. Provider Compatibility: Model names match selected provider
  6. Network Settings: Port numbers, timeouts, connection limits

Manual Validation (Legacy)

from backend.config.settings import DEFAULT_CONFIG

# Validate configuration
def validate_legacy_config(config: dict) -> bool:
    required_keys = [
        "max_debate_rounds",
        "llm_provider",
        "data_vendors",
    ]

    for key in required_keys:
        if key not in config:
            raise ValueError(f"Missing required config key: {key}")

    if config["max_debate_rounds"] < 1:
        raise ValueError("max_debate_rounds must be >= 1")

    if config["llm_provider"] not in ["openai", "anthropic", "google", "xai", "ollama", "openrouter"]:
        raise ValueError(f"Invalid llm_provider: {config['llm_provider']}")

    return True

# Usage
config = DEFAULT_CONFIG.copy()
validate_legacy_config(config)

Configuration Structure

The configuration system is organized into subsystems:

Base Configuration (Config)

Root configuration class containing all subsystems:

from backend.config import Config

config = Config()

# Access subsystem configs
config.api          # API and LLM settings
config.database     # PostgreSQL, Redis, pgvector
config.agents       # Agent execution settings
config.logging      # Logging configuration
config.market_data  # Market data vendors

API Configuration (APIConfig)

LLM provider and API settings:

config.api.llm_provider           # "openai", "anthropic", "google", etc.
config.api.deep_think_llm         # Model for researchers/managers
config.api.quick_think_llm        # Model for analysts
config.api.backend_url            # API endpoint URL
config.api.llm_temperature        # Sampling temperature
config.api.llm_max_tokens         # Max response tokens
config.api.api_timeout_seconds    # Request timeout

Database Configuration (DatabaseConfig)

Database connection settings:

# PostgreSQL
config.database.postgres.host
config.database.postgres.port
config.database.postgres.db
config.database.postgres.user
config.database.postgres.pool_size

# Redis
config.database.redis.host
config.database.redis.port
config.database.redis.db
config.database.redis.cache_ttl

Agent Configuration (AgentConfig)

Agent execution and orchestration:

config.agents.max_debate_rounds
config.agents.parallel_execution
config.agents.selected_analysts
config.agents.metrics_enabled
config.agents.healthcheck_enabled

Market Data Configuration (MarketDataConfig)

Data vendor selection and caching:

config.market_data.data_vendors.core_stock_apis
config.market_data.data_vendors.technical_indicators
config.market_data.data_vendors.fundamental_data
config.market_data.data_vendors.news_data
config.market_data.cache_enabled
config.market_data.cache_ttl_seconds

Logging Configuration (LoggingConfig)

Logging settings:

config.logging.level              # "DEBUG", "INFO", "WARNING", etc.
config.logging.format             # "json" or "human-readable"
config.logging.console_enabled
config.logging.file_enabled
config.logging.rotation_strategy  # "size" or "time"

Error Handling Configuration (ErrorHandlingConfig)

Retry and circuit breaker settings for external services.

# Retry settings (per operation type)
config.error_handling.default.max_attempts
config.error_handling.default.base_delay
config.error_handling.default.max_delay
config.error_handling.data_vendors.max_attempts
config.error_handling.llm_providers.max_attempts
config.error_handling.rate_limit.base_delay   # Extended backoff for HTTP 429

# Circuit breaker
config.error_handling.circuit_breaker.failure_threshold
config.error_handling.circuit_breaker.recovery_timeout
config.error_handling.circuit_breaker.success_threshold

# Timeouts (seconds)
config.error_handling.timeout_data_vendor
config.error_handling.timeout_llm_provider
config.error_handling.timeout_database

Environment variables use the nested delimiter __, e.g. REDHOUND_ERROR_HANDLING__DEFAULT__MAX_ATTEMPTS=5, REDHOUND_ERROR_HANDLING__TIMEOUT_DATA_VENDOR=30.
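As a rough illustration of how the __ delimiter maps flat environment variables onto nested settings (parse_nested_env is a toy helper for this document; the real mapping is performed by the Pydantic settings classes):

```python
import os

def parse_nested_env(prefix: str, delimiter: str = "__") -> dict:
    """Fold PREFIX__SECTION__FIELD-style variables into nested dicts."""
    result: dict = {}
    for name, value in os.environ.items():
        if not name.startswith(prefix + delimiter):
            continue
        path = name[len(prefix) + len(delimiter):].lower().split(delimiter)
        node = result
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return result

os.environ["REDHOUND_ERROR_HANDLING__DEFAULT__MAX_ATTEMPTS"] = "5"
os.environ["REDHOUND_ERROR_HANDLING__TIMEOUT_DATA_VENDOR"] = "30"
cfg = parse_nested_env("REDHOUND_ERROR_HANDLING")
print(cfg["default"]["max_attempts"])  # 5
print(cfg["timeout_data_vendor"])      # 30
```

Pydantic additionally coerces the string values to the declared types (int, float, and so on), which this sketch does not attempt.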

Configuration Best Practices

1. Use Environment-Specific Files

Create separate .env files for each environment:

.env.development    # Local development
.env.staging        # Staging environment
.env.production     # Production environment

Load the appropriate file:

from backend.config import Config

# Development
config = Config.from_env(env_file=".env.development")

# Production
config = Config.from_env(env_file=".env.production")

2. Never Commit Secrets

  • Add .env* files to .gitignore
  • Use .env.example as a template (without real values)
  • Use secrets managers in production (AWS Secrets Manager, HashiCorp Vault, etc.)

3. Validate on Startup

Always validate configuration when your application starts:

from backend.config import get_config

config = get_config()
config.validate_on_startup()  # Fail fast if config is invalid

4. Use Type-Safe Access

Prefer the new typed config over dictionary access:

# Good: Type-safe with autocomplete
config = get_config()
provider = config.api.llm_provider

# Avoid: Dictionary access (legacy)
provider = DEFAULT_CONFIG["llm_provider"]

5. Mock Mode for Development

Enable mock mode to avoid API costs during development:

# .env.development
REDHOUND_MOCK_MODE=true
REDHOUND_LOG_LEVEL=DEBUG
REDHOUND_LOG_FORMAT=human-readable

Migration Guide

Migrating from Legacy Config

If you have existing code using DEFAULT_CONFIG, you can migrate gradually:

Before (Legacy)

from backend.config.settings import DEFAULT_CONFIG

# Dictionary access
llm_provider = DEFAULT_CONFIG["llm_provider"]
max_rounds = DEFAULT_CONFIG["max_debate_rounds"]
log_level = DEFAULT_CONFIG["logging"]["level"]

After (Type-Safe)

from backend.config import get_config

config = get_config()

# Type-safe access
llm_provider = config.api.llm_provider
max_rounds = config.agents.max_debate_rounds
log_level = config.logging.level

Gradual Migration

Both approaches work simultaneously, so you can migrate incrementally:

# Old code continues to work
from backend.config.settings import DEFAULT_CONFIG

# New code uses typed config
from backend.config import get_config

config = get_config()

Quick Reference

Key Environment Variables

Variable                              Default      Description
REDHOUND_MOCK_MODE                    false        Enable mock mode
REDHOUND_PARALLEL_EXECUTION           true         Run analyst agents in parallel
REDHOUND_FUNDAMENTAL_DATA_SOURCE      fmp          Fundamentals analyst data source
REDHOUND_MAX_DEBATE_ROUNDS            1            Debate rounds
REDHOUND_LLM_PROVIDER                 openai       LLM provider
REDHOUND_DEEP_THINK_LLM               o4-mini      Model for researchers
REDHOUND_QUICK_THINK_LLM              gpt-4o-mini  Model for analysts
REDHOUND_LOG_LEVEL                    INFO         Logging level
REDHOUND_LOG_FORMAT                   json         Log format
REDHOUND_METRICS_ENABLED              false        Enable metrics
OPENAI_API_KEY                        -            OpenAI API key
ALPHA_VANTAGE_API_KEY                 -            Alpha Vantage API key
POSTGRES_HOST                         localhost    PostgreSQL host
REDIS_HOST                            localhost    Redis host
REDHOUND_SIGNAL_AGGREGATION_ENABLED   true         Enable Step 5 signal aggregation
REDHOUND_SIGNAL_WEIGHT_TECHNICAL      0.30         Technical analyst weight (Step 5)
REDHOUND_SIGNAL_WEIGHT_FUNDAMENTAL    0.40         Fundamental analyst weight (Step 5)
REDHOUND_SIGNAL_WEIGHT_SENTIMENT      0.20         Sentiment analyst weight (Step 5)
REDHOUND_SIGNAL_WEIGHT_NEWS           0.10         News analyst weight (Step 5)
REDHOUND_BACKTESTING_ENABLED          false        Enable backtesting framework

Configuration Subsystems

Subsystem          Module                        Description
Config             redhound.config.base          Root configuration
APIConfig          redhound.config.api           LLM and API settings
DatabaseConfig     redhound.config.database      Database connections
AgentConfig        redhound.config.agents        Agent execution
LoggingConfig      redhound.config.logging       Logging settings
MarketDataConfig   redhound.config.market_data   Data vendors

See sections above for complete configuration options.


Signal Aggregation (Step 5)

Module: redhound.orchestration.signal_aggregator

Unified scoring: aggregates analyst signals into a single weighted recommendation with confidence. Config keys: signal_aggregation.enabled, signal_aggregation.weights (technical, fundamental, sentiment, news), signal_aggregation.regime_modifiers, signal_aggregation.confidence_thresholds, signal_aggregation.conflict_resolution.

Environment variables: REDHOUND_SIGNAL_AGGREGATION_ENABLED, REDHOUND_SIGNAL_WEIGHT_TECHNICAL, REDHOUND_SIGNAL_WEIGHT_FUNDAMENTAL, REDHOUND_SIGNAL_WEIGHT_SENTIMENT, REDHOUND_SIGNAL_WEIGHT_NEWS.
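Conceptually, the weighted recommendation combines per-analyst scores using the Step 5 weights. This sketch uses a hypothetical aggregate_signals helper and the default weights from the quick reference above; it is not the actual signal_aggregator API and does not model regime modifiers or conflict resolution:

```python
# Default Step 5 weights (REDHOUND_SIGNAL_WEIGHT_* defaults)
DEFAULT_WEIGHTS = {
    "technical": 0.30,
    "fundamental": 0.40,
    "sentiment": 0.20,
    "news": 0.10,
}

def aggregate_signals(signals: dict[str, float],
                      weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Combine per-analyst scores (-1.0 = strong sell, +1.0 = strong buy)
    into one weighted score, renormalizing over the analysts present."""
    total_weight = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total_weight

score = aggregate_signals({"technical": 0.5, "fundamental": 0.8,
                           "sentiment": -0.2, "news": 0.1})
print(round(score, 2))  # 0.44
```

Dividing by the summed weight of the analysts actually present renormalizes the score when an analyst is disabled, so a missing signal does not silently drag the result toward zero.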

See Signal Aggregation for behavior, formulas, and usage.


Backtesting

Module: redhound.services.backtesting

Historical validation and weight optimization for the signal aggregator. Config keys: backtesting.enabled, backtesting.holding_period_days, backtesting.optimization_metric, backtesting.weight_ranges, backtesting.grid_step.

Environment variables: REDHOUND_BACKTESTING_ENABLED, REDHOUND_BACKTESTING_HOLDING_PERIOD_DAYS, REDHOUND_BACKTESTING_OPTIMIZATION_METRIC.
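The weight optimization implied by backtesting.weight_ranges and backtesting.grid_step might look roughly like this; grid_search and the evaluate callback are hypothetical stand-ins for the framework's optimizer and its backtest metric:

```python
from itertools import product

def grid_search(evaluate, grid_step: float = 0.1):
    """Return the best (weights, score) over weight combinations summing to 1.0.

    evaluate(weights) stands in for a backtest metric such as the Sharpe
    ratio computed over a historical holding period.
    """
    steps = [round(i * grid_step, 2) for i in range(int(1 / grid_step) + 1)]
    best = None
    for t, f, s in product(steps, repeat=3):
        n = round(1.0 - t - f - s, 2)  # news weight is the remainder
        if n < 0:
            continue
        weights = {"technical": t, "fundamental": f, "sentiment": s, "news": n}
        score = evaluate(weights)
        if best is None or score > best[1]:
            best = (weights, score)
    return best

# Toy metric that rewards fundamental-heavy weightings
weights, score = grid_search(lambda w: w["fundamental"])
print(weights["fundamental"])  # 1.0
```

Fixing the last weight as the remainder keeps every candidate on the unit simplex, so only valid weightings are ever evaluated.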

See Signal Aggregation - Backtesting for usage.


Next Steps