# Configuration
Configure Copinance OS for your needs.
## Configuration Methods
Copinance OS supports two ways to configure LLM settings:
- CLI Usage: environment variables (for command-line usage)
- Library Integration: an `LLMConfig` object (for programmatic usage)

Note: When integrating Copinance OS as a library, you must provide an `LLMConfig` object directly. Environment variables only work for CLI usage. See Library Integration below.
## LLMConfig Dataclass (Library Integration)

For programmatic usage, you must provide an `LLMConfig` dataclass instance when creating containers. This replaces the previous environment-variable approach for library integrators.
```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig

# Create LLM configuration
llm_config = LLMConfig(
    provider="gemini",       # Required: "gemini", "ollama", "openai", "anthropic"
    api_key="your-api-key",  # Required for cloud providers
    model="gemini-1.5-pro",  # Optional: defaults vary by provider
    temperature=0.7,         # Optional: 0.0-1.0, default 0.7
    max_tokens=4096,         # Optional: provider-dependent
    base_url=None,           # Optional: for custom endpoints
    workflow_providers={},   # Optional: per-workflow provider mapping
    provider_config={},      # Optional: provider-specific settings
)
```

Required Parameters:

- `provider`: the LLM provider name
Provider-Specific Requirements:

- Gemini/OpenAI/Anthropic: `api_key` is required
- Ollama: `model` should be set to your local model name (e.g., "llama2")
- Custom providers: may require `base_url` for local/custom endpoints
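For example, a local Ollama setup needs no API key. A minimal sketch (the model name and port are just the common defaults shown elsewhere on this page):

```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig

# Local Ollama: no api_key required; base_url points at the local service.
ollama_config = LLMConfig(
    provider="ollama",
    model="llama2",                     # whichever model you've pulled locally
    base_url="http://localhost:11434",  # Ollama's default address
)
```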
## Environment Variables (CLI Usage)
For CLI usage, configuration is done through environment variables or a `.env` file.
### Creating .env File

Create a `.env` file in your project root (same directory as `pyproject.toml`):

```bash
# .env file
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

## LLM Provider Setup
Copinance OS supports multiple LLM providers for agent workflows. You can use either Gemini (cloud-based) or Ollama (local).
### Gemini API (Cloud-based)
Recommended for most users. Requires an API key.
1. Get API Key
- Go to Google AI Studio
- Sign in with your Google account
- Click “Create API Key”
- Copy your API key
2. Configure
Option A: .env file (Recommended)
```bash
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

Option B: Environment Variable

```bash
export COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

3. Verify

```bash
copinance ask "What is the current price?" --symbol AAPL
```

If configured correctly, you'll see AI analysis. If not, check:

- `.env` file is in the project root
- Variable name is exactly `COPINANCEOS_GEMINI_API_KEY`
- No extra spaces around the `=` sign
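If you are unsure whether the variable actually reached your process, a quick check in plain Python (nothing Copinance-specific):

```python
import os

# Prints "set" if the current process can see the key, "missing" otherwise.
print("set" if os.environ.get("COPINANCEOS_GEMINI_API_KEY") else "missing")
```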
### Model Selection
Choose which Gemini model to use:
```bash
# Default (most capable)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# Latest and fastest
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash

# Faster alternative
COPINANCEOS_GEMINI_MODEL=gemini-1.5-flash
```

Default: `gemini-1.5-pro` (most capable). Use `gemini-2.5-flash` for faster responses.
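On the library side, the same choice is expressed through the `model` field of `LLMConfig`; a sketch using the model names listed above:

```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig

# Prefer speed over capability by selecting the flash model.
fast_config = LLMConfig(
    provider="gemini",
    api_key="your-api-key",
    model="gemini-2.5-flash",  # faster alternative to the default gemini-1.5-pro
)
```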
## FRED API Setup (Macroeconomic Data)
FRED (Federal Reserve Economic Data) provides high-quality macroeconomic time series for rates, credit spreads, and commodities. While the macro workflow works with yfinance proxies, FRED provides more accurate data.
1. Get API Key
- Go to FRED API
- Sign up for a free account
- Request an API key (free, no credit card required)
- Copy your API key
2. Configure
Option A: .env file (Recommended)
```bash
COPINANCEOS_FRED_API_KEY=your-fred-api-key-here
```

Option B: Environment Variable

```bash
export COPINANCEOS_FRED_API_KEY=your-fred-api-key-here
```

3. Verify

```bash
copinance analyze macro
```

If configured correctly, you'll see `"source": "fred"` in the results. If not, check:

- `.env` file is in the project root
- Variable name is exactly `COPINANCEOS_FRED_API_KEY`
- No extra spaces around the `=` sign
Note: The macro workflow will automatically fall back to yfinance proxies if FRED is unavailable, so it works without an API key (with lower data quality).
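Conceptually, the fallback behaves like the sketch below. The helper names are hypothetical stand-ins, not the actual Copinance OS internals:

```python
def get_macro_series(series_id: str, fred_api_key: str | None) -> dict:
    """Illustrative fallback pattern only -- not Copinance OS's real code."""
    if fred_api_key:
        try:
            # fetch_from_fred is a hypothetical helper for the FRED API call.
            return {"source": "fred", "data": fetch_from_fred(series_id, fred_api_key)}
        except Exception:
            pass  # FRED unreachable or key invalid: fall back below
    # fetch_yfinance_proxy is a hypothetical helper for the proxy series.
    return {"source": "yfinance", "data": fetch_yfinance_proxy(series_id)}
```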
## Storage Configuration

Configure where data is stored:

```bash
# Storage type: 'memory' or 'file'
COPINANCEOS_STORAGE_TYPE=file

# Storage path (for file storage)
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

## Complete .env Example
Using Gemini (Cloud):
```bash
# LLM Provider (optional, default: gemini)
COPINANCEOS_LLM_PROVIDER=gemini

# Gemini API (required for agent workflows with Gemini)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key

# Model selection (optional, default: gemini-1.5-pro)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# FRED API (optional, for high-quality macro data)
COPINANCEOS_FRED_API_KEY=your-fred-api-key

# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

Using Ollama (Local):
```bash
# LLM Provider
COPINANCEOS_LLM_PROVIDER=ollama

# Ollama configuration (optional, defaults shown)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

Mixed Configuration (per-workflow):
```bash
# Default provider (used for workflows not mapped below)
COPINANCEOS_LLM_PROVIDER=ollama

# Use different providers for different workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=agent:gemini

# Gemini API (for agent workflows)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# Ollama (for all other workflows)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
```

## Security
⚠️ Never commit your `.env` file to version control!

Make sure `.env` is in your `.gitignore`:

```
.env
.env.local
```

## Troubleshooting
"LLM analyzer not configured"

- Check the `.env` file location (it should be in the project root)
- Verify the variable name is correct
- Restart your terminal after setting environment variables
"Gemini not available"

Install the required package:

```bash
pip install google-genai
```

"Gemini API key is not configured"

- Verify the API key is set correctly
- Check for typos in the variable name
- Ensure there are no quotes around the key value
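When debugging any of the above, it can help to list every Copinance OS variable your process actually sees (values truncated so keys don't end up in logs):

```python
import os

# Show each COPINANCEOS_* variable with its value truncated.
for name in sorted(os.environ):
    if name.startswith("COPINANCEOS_"):
        print(f"{name} = {os.environ[name][:6]}...")
```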
## Ollama Setup (Local LLM)
Use Ollama to run LLMs locally without API keys. Great for privacy and cost savings.
1. Install Ollama
- Download from [ollama.ai](https://ollama.ai)
- Install and start the Ollama service
- Pull a model: `ollama pull llama2` (or `mistral`, `codellama`, etc.), then confirm the service is reachable, as in the sketch below
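Before wiring Ollama into Copinance OS, you can confirm the service is reachable via Ollama's model-listing endpoint (default port shown; adjust if you changed it):

```python
import json
import urllib.request

# Ollama lists locally pulled models at /api/tags.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)
print([m["name"] for m in tags.get("models", [])])
```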
2. Configure
Option A: .env file (Recommended)
```bash
# Set Ollama as the default provider
COPINANCEOS_LLM_PROVIDER=ollama

# Optional: Customize Ollama settings
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
```

Option B: Environment Variable

```bash
export COPINANCEOS_LLM_PROVIDER=ollama
export COPINANCEOS_OLLAMA_MODEL=llama2
```

3. Verify

```bash
copinance ask "What is the current price?" --symbol AAPL
```

## Per-Workflow Provider Configuration
You can use different providers for different workflows:
```bash
# Use Ollama locally for agent workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=agent:ollama
```

This allows you to use local models for simple tasks and cloud models for complex analysis.
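The env value mirrors the `workflow_providers` mapping that `LLMConfig` accepts. Conceptually (this parser is illustrative, not Copinance OS's actual code, and assumes comma-separated `workflow:provider` pairs):

```python
# Turn "agent:ollama" (or "agent:ollama,other:gemini") into a dict.
raw = "agent:ollama"
workflow_providers = dict(pair.split(":", 1) for pair in raw.split(",") if pair)
print(workflow_providers)  # {'agent': 'ollama'}
```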
## Library Integration

Library integrators must provide an `LLMConfig` when creating containers. Environment variables only work for CLI usage.
### Basic LLMConfig Usage
```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.containers import get_container

# Create LLM configuration (REQUIRED for library integration)
llm_config = LLMConfig(
    provider="gemini",
    api_key="your-api-key",  # Required for Gemini
    model="gemini-1.5-pro",  # Optional, defaults to provider default
)

# Create container with LLM config (REQUIRED parameter)
container = get_container(llm_config=llm_config)

# Use the container
use_case = container.get_stock_use_case()
# ... integrate into your application
```

### Per-Workflow Provider Configuration
You can configure different providers for different workflows:
```python
llm_config = LLMConfig(
    provider="gemini",  # Default provider
    api_key="your-gemini-key",
    workflow_providers={
        "agent": "gemini",  # Use Gemini for agent workflows
    },
)
```

### Direct Provider Creation
You can also create providers directly:
```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.analyzers.llm.providers.factory import LLMProviderFactory
from copinanceos.infrastructure.factories.llm_analyzer import LLMAnalyzerFactory

# Create LLM config
llm_config = LLMConfig(
    provider="gemini",
    api_key="your-api-key",
    model="gemini-1.5-pro",
)

# Create provider
provider = LLMProviderFactory.create_provider("gemini", llm_config=llm_config)

# Create analyzer
analyzer = LLMAnalyzerFactory.create("gemini", llm_config=llm_config)
```

### Providing FRED API Key
For library integrators, you can provide your own FRED API key when creating the container:
```python
from copinanceos.infrastructure.containers import get_container

# Create container with your FRED API key
container = get_container(fred_api_key="your-fred-api-key")

# Use the container
macro_provider = container.macro_data_provider()
```

This allows you to:

- Manage API keys in your own configuration system
- Use different keys for different environments
- Avoid relying on environment variables
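If you need both, the two container parameters shown on this page can plausibly be combined; a sketch, assuming `get_container` accepts both keyword arguments together:

```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.containers import get_container

# One container wired with both LLM and FRED credentials.
llm_config = LLMConfig(provider="gemini", api_key="your-gemini-key")
container = get_container(llm_config=llm_config, fred_api_key="your-fred-key")
```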
### Why Use LLMConfig?
- Security: API keys are not stored in environment variables
- Flexibility: Different configurations for different parts of your application
- Testability: Easy to mock and test with different configurations (see the sketch below)
- Type Safety: Full type checking support with dataclasses
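On the testability point: because the configuration is just a dataclass, a test can construct a throwaway config without touching the environment. A minimal sketch, assuming container construction does not itself call the provider:

```python
from copinanceos.infrastructure.analyzers.llm.config import LLMConfig
from copinanceos.infrastructure.containers import get_container

def make_test_container():
    # Dummy key: fine for wiring tests as long as no workflow actually
    # calls out to the provider.
    test_config = LLMConfig(provider="gemini", api_key="test-key")
    return get_container(llm_config=test_config)
```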