
Configuration

Configure Copinance OS for your needs.

Configuration Methods

Copinance OS supports two ways to configure LLM settings:

  1. CLI usage: environment variables or a .env file
  2. Library integration: an LLMConfig object passed in code

Library Integration: When integrating Copinance OS as a library, you must provide an LLMConfig object directly. Environment variables only work for CLI usage. See Library Integration below.

LLMConfig Dataclass (Library Integration)

For programmatic usage, you must provide an LLMConfig dataclass instance when creating containers. This replaces the previous environment variable approach for library integrators.

from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
 
# Create LLM configuration
llm_config = LLMConfig(
    provider="gemini",           # Required: "gemini", "ollama", "openai", "anthropic"
    api_key="your-api-key",      # Required for cloud providers
    model="gemini-1.5-pro",      # Optional: defaults vary by provider
    temperature=0.7,             # Optional: 0.0-1.0, default 0.7
    max_tokens=4096,             # Optional: provider-dependent
    base_url=None,               # Optional: for custom endpoints
    workflow_providers={},       # Optional: per-workflow provider mapping
    provider_config={},          # Optional: provider-specific settings
)

Required Parameters:

  • provider: The LLM provider name

Provider-Specific Requirements:

  • Gemini/OpenAI/Anthropic: api_key is required
  • Ollama: model should be set to your local model name (e.g., "llama2")
  • Custom providers: May require base_url for local/custom endpoints
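
The provider rules above can be sketched as a small validation helper. This is an illustrative function, not part of the copinanceos API:

```python
# Hypothetical helper illustrating the provider rules above;
# not part of the copinanceos API.
CLOUD_PROVIDERS = {"gemini", "openai", "anthropic"}

def validate_llm_settings(provider, api_key=None, model=None, base_url=None):
    """Return a list of problems with the given LLM settings."""
    problems = []
    if provider in CLOUD_PROVIDERS and not api_key:
        problems.append(f"{provider} requires api_key")
    if provider == "ollama" and not model:
        problems.append("ollama: set model to your local model name, e.g. 'llama2'")
    return problems
```

Running such a check before constructing LLMConfig gives clearer errors than a failed API call later.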

Environment Variables (CLI Usage)

For CLI usage, configuration is done through environment variables or a .env file.

Creating .env File

Create a .env file in your project root (same directory as pyproject.toml):

# .env file
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
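
The .env format is simple KEY=value lines with `#` comments. The CLI likely loads it with a library such as python-dotenv; the following parser is only a sketch of the format:

```python
# Illustrative .env parser; treat this as a sketch of the file
# format, not the loader copinanceos actually uses.
def parse_env(text):
    """Parse KEY=value lines into a dict, skipping blanks and comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values
```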

LLM Provider Setup

Copinance OS supports multiple LLM providers for agent workflows. This section covers the two most common setups: Gemini (cloud-based) and Ollama (local).

Gemini API (Cloud-based)

Recommended for most users. Requires an API key.

1. Get API Key

  1. Go to Google AI Studio
  2. Sign in with your Google account
  3. Click “Create API Key”
  4. Copy your API key

2. Configure

Option A: .env file (Recommended)

COPINANCEOS_GEMINI_API_KEY=your-api-key-here

Option B: Environment Variable

export COPINANCEOS_GEMINI_API_KEY=your-api-key-here

3. Verify

copinance ask "What is the current price?" --symbol AAPL

If configured correctly, you’ll see AI analysis. If not, check:

  • .env file is in the project root
  • Variable name is exactly COPINANCEOS_GEMINI_API_KEY
  • No extra spaces around the = sign
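
The checklist above can also be run programmatically. This diagnostic uses only the standard library; the message strings are illustrative:

```python
import os

def check_gemini_key():
    """Return a diagnostic message for the Gemini API key variable."""
    key = os.environ.get("COPINANCEOS_GEMINI_API_KEY")
    if key is None:
        return "COPINANCEOS_GEMINI_API_KEY is not set"
    # Stray spaces or quotes around the value are common .env mistakes
    if key != key.strip() or key.strip('"\'') != key:
        return "key has surrounding whitespace or quotes"
    return "key looks set"
```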

Model Selection

Choose which Gemini model to use:

# Default (most capable)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# Latest and fastest
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash
 
# Faster alternative
COPINANCEOS_GEMINI_MODEL=gemini-1.5-flash

Default: gemini-1.5-pro (most capable). Use gemini-2.5-flash for faster responses.
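
Model selection amounts to an environment lookup with a default. A sketch (the actual default may differ by version):

```python
import os

DEFAULT_GEMINI_MODEL = "gemini-1.5-pro"  # default per the docs above

def resolve_gemini_model():
    """Return the configured Gemini model, or the default."""
    return os.environ.get("COPINANCEOS_GEMINI_MODEL", DEFAULT_GEMINI_MODEL)
```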

FRED API Setup (Macroeconomic Data)

FRED (Federal Reserve Economic Data) provides high-quality macroeconomic time series for rates, credit spreads, and commodities. While the macro workflow works with yfinance proxies, FRED provides more accurate data.

1. Get API Key

  1. Go to FRED API
  2. Sign up for a free account
  3. Request an API key (free, no credit card required)
  4. Copy your API key

2. Configure

Option A: .env file (Recommended)

COPINANCEOS_FRED_API_KEY=your-fred-api-key-here

Option B: Environment Variable

export COPINANCEOS_FRED_API_KEY=your-fred-api-key-here

3. Verify

copinance analyze macro

If configured correctly, you’ll see "source": "fred" in the results. If not, check:

  • .env file is in the project root
  • Variable name is exactly COPINANCEOS_FRED_API_KEY
  • No extra spaces around the = sign

Note: The macro workflow will automatically fall back to yfinance proxies if FRED is unavailable, so it works without an API key (with lower data quality).
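
The fallback behavior described above looks roughly like this. The function names are illustrative, not the actual copinanceos internals:

```python
# Sketch of the FRED -> yfinance fallback described above;
# fred_fetch and yfinance_proxy_fetch stand in for real fetchers.
def fetch_macro_series(series_id, fred_fetch, yfinance_proxy_fetch):
    """Try FRED first; fall back to a yfinance proxy on failure."""
    try:
        return {"source": "fred", "data": fred_fetch(series_id)}
    except Exception:
        return {"source": "yfinance", "data": yfinance_proxy_fetch(series_id)}
```

This is why `"source": "fred"` in the results confirms the key is working.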

Storage Configuration

Configure where data is stored:

# Storage type: 'memory' or 'file'
COPINANCEOS_STORAGE_TYPE=file
 
# Storage path (for file storage)
COPINANCEOS_STORAGE_PATH=~/.copinanceos
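
The `~` in the storage path is expanded to your home directory. A sketch of how such a path is typically resolved (not the library's actual code):

```python
import os
from pathlib import Path

def resolve_storage_path(default="~/.copinanceos"):
    """Expand the configured storage path (including ~) to a Path."""
    raw = os.environ.get("COPINANCEOS_STORAGE_PATH", default)
    return Path(raw).expanduser()
```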

Complete .env Example

Using Gemini (Cloud):

# LLM Provider (optional, default: gemini)
COPINANCEOS_LLM_PROVIDER=gemini
 
# Gemini API (required for agent workflows with Gemini)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
 
# Model selection (optional, default: gemini-1.5-pro)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# FRED API (optional, for high-quality macro data)
COPINANCEOS_FRED_API_KEY=your-fred-api-key
 
# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos

Using Ollama (Local):

# LLM Provider
COPINANCEOS_LLM_PROVIDER=ollama
 
# Ollama configuration (optional, defaults shown)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
 
# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos

Mixed Configuration (per-workflow):

# Default provider for all workflows
COPINANCEOS_LLM_PROVIDER=ollama
 
# Override: use Gemini for agent workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=agent:gemini
 
# Gemini API (for agent workflows)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# Ollama (for all other workflows)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

Security

⚠️ Never commit your .env file to version control!

Make sure .env is in your .gitignore:

.env
.env.local

Troubleshooting

"LLM analyzer not configured"

  • Check .env file location (should be in project root)
  • Verify variable name is correct
  • Restart terminal after setting environment variables

"Gemini not available"

Install the required package:

pip install google-genai

“Gemini API key is not configured”

  • Verify API key is set correctly
  • Check for typos in variable name
  • Ensure no quotes around the key value

Ollama Setup (Local LLM)

Use Ollama to run LLMs locally without API keys. Great for privacy and cost savings.

1. Install Ollama

  1. Download from ollama.ai
  2. Install and start the Ollama service
  3. Pull a model: ollama pull llama2 (or mistral, codellama, etc.)

2. Configure

Option A: .env file (Recommended)

# Set Ollama as the default provider
COPINANCEOS_LLM_PROVIDER=ollama
 
# Optional: Customize Ollama settings
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

Option B: Environment Variable

export COPINANCEOS_LLM_PROVIDER=ollama
export COPINANCEOS_OLLAMA_MODEL=llama2

3. Verify

copinance ask "What is the current price?" --symbol AAPL

Per-Workflow Provider Configuration

You can use different providers for different workflows:

# Use Ollama locally for agent workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=agent:ollama

This allows you to use local models for simple tasks and cloud models for complex analysis.
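
Based on the examples shown, the variable holds `workflow:provider` pairs. Assuming a comma-separated list for multiple workflows (the docs only show single entries), parsing it might look like:

```python
def parse_workflow_providers(value):
    """Parse e.g. 'agent:ollama,research:gemini' into a dict.

    The comma-separated multi-entry form is an assumption; the docs
    above only show single 'workflow:provider' entries.
    """
    mapping = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        workflow, _, provider = pair.partition(":")
        mapping[workflow.strip()] = provider.strip()
    return mapping
```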

Library Integration

Library integrators must provide LLMConfig when creating containers. Environment variables only work for CLI usage.

Basic LLMConfig Usage

from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.containers import get_container
 
# Create LLM configuration (REQUIRED for library integration)
llm_config = LLMConfig(
    provider="gemini",
    api_key="your-api-key",      # Required for Gemini
    model="gemini-1.5-pro",      # Optional, defaults to provider default
)
 
# Create container with LLM config (REQUIRED parameter)
container = get_container(llm_config=llm_config)
 
# Use the container
use_case = container.get_stock_use_case()
# ... integrate into your application

Per-Workflow Provider Configuration

You can configure different providers for different workflows:

llm_config = LLMConfig(
    provider="gemini",  # Default provider
    api_key="your-gemini-key",
    workflow_providers={
        "agent": "gemini",     # Use Gemini for agent workflows
    },
)
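
The resolution rule is: use the mapped provider for a workflow if one exists, otherwise fall back to the default. An illustrative sketch, not the library's internal code:

```python
# Illustrative resolution logic for workflow_providers.
def provider_for_workflow(default_provider, workflow_providers, workflow):
    """Return the provider mapped to this workflow, else the default."""
    return workflow_providers.get(workflow, default_provider)
```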

Direct Provider Creation

You can also create providers directly:

from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.analyzers.llm.providers.factory import LLMProviderFactory
from copinanceos.infrastructure.factories.llm_analyzer import LLMAnalyzerFactory
 
# Create LLM config
llm_config = LLMConfig(
    provider="gemini",
    api_key="your-api-key",
    model="gemini-1.5-pro",
)
 
# Create provider
provider = LLMProviderFactory.create_provider("gemini", llm_config=llm_config)
 
# Create analyzer
analyzer = LLMAnalyzerFactory.create("gemini", llm_config=llm_config)

Providing FRED API Key

For library integrators, you can provide your own FRED API key when creating the container:

from copinanceos.infrastructure.containers import get_container
 
# Create container with your FRED API key
container = get_container(fred_api_key="your-fred-api-key")
 
# Use the container
macro_provider = container.macro_data_provider()

This allows you to:

  • Manage API keys in your own configuration system
  • Use different keys for different environments
  • Avoid relying on environment variables
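
For example, you might look up the key in your own per-environment secrets store before falling back to the environment variable. A hypothetical sketch:

```python
import os

# Hypothetical example: source the FRED key from your own
# configuration system, falling back to the environment variable.
def fred_key_for(environment, secrets):
    """Return the FRED key for an environment, else the env var (or None)."""
    return secrets.get(environment) or os.environ.get("COPINANCEOS_FRED_API_KEY")
```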

Why Use LLMConfig?

  • Security: API keys are not stored in environment variables
  • Flexibility: Different configurations for different parts of your application
  • Testability: Easy to mock and test with different configurations
  • Type Safety: Full type checking support with dataclasses