Configuration
Configure Copinance OS for your needs.
Environment Variables
Configuration is done through environment variables or a .env file.
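For example, the same setting can be supplied either way. A minimal sketch, assuming a POSIX shell (the variable shown is documented under Storage Configuration below):

```bash
# Persist the setting in .env so it survives new shell sessions
echo 'COPINANCEOS_STORAGE_TYPE=file' >> .env

# Or export it for the current shell session only
export COPINANCEOS_STORAGE_TYPE=file
```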
Creating .env File
Create a .env file in your project root (same directory as pyproject.toml):
```bash
# .env file
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

LLM Provider Setup
Copinance OS supports multiple LLM providers for agentic workflows. You can use either Gemini (cloud-based) or Ollama (local).
Gemini API (Cloud-based)
Recommended for most users. Requires an API key.
1. Get API Key
- Go to Google AI Studio
- Sign in with your Google account
- Click “Create API Key”
- Copy your API key
2. Configure
Option A: .env file (Recommended)
```bash
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

Option B: Environment Variable

```bash
export COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

3. Verify

```bash
copinance research ask "What is the current price?" --symbol AAPL
```

If configured correctly, you'll see AI analysis. If not, check:
- .env file is in the project root
- Variable name is exactly COPINANCEOS_GEMINI_API_KEY
- No extra spaces around the = sign
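If the key still isn't picked up, the first two points can be checked quickly from the project root. This sketch uses standard shell tools, not Copinance OS commands:

```bash
# Show the entry exactly as written; look for a misspelled
# variable name or stray spaces around '='
grep COPINANCEOS_GEMINI_API_KEY .env

# If you used 'export' instead, confirm the variable is
# visible to newly started processes
printenv COPINANCEOS_GEMINI_API_KEY
```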
Model Selection
Choose which Gemini model to use:
```bash
# Default (most capable)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# Latest and fastest
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash

# Faster alternative
COPINANCEOS_GEMINI_MODEL=gemini-1.5-flash
```

Default: gemini-1.5-pro (most capable). Use gemini-2.5-flash for faster responses.
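To try another model for a single run without editing .env, a per-command environment prefix works in any POSIX shell (this assumes Copinance OS reads the variable at startup, as with the other settings):

```bash
# One-off run with the faster model; the default in .env is untouched
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash copinance research ask "What is the current price?" --symbol AAPL
```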
Storage Configuration
Configure where data is stored:
```bash
# Storage type: 'memory' or 'file'
COPINANCEOS_STORAGE_TYPE=file

# Storage path (for file storage)
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

Complete .env Example
Using Gemini (Cloud):
```bash
# LLM Provider (optional, default: gemini)
COPINANCEOS_LLM_PROVIDER=gemini

# Gemini API (required for agentic workflows with Gemini)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key

# Model selection (optional, default: gemini-1.5-pro)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

Using Ollama (Local):
```bash
# LLM Provider
COPINANCEOS_LLM_PROVIDER=ollama

# Ollama configuration (optional, defaults shown)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos
```

Mixed Configuration (per-workflow):
```bash
# Use different providers for different workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=static:ollama,agentic:gemini

# Gemini API (for agentic workflows)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro

# Ollama (for static workflows)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
```

Security
⚠️ Never commit your .env file to version control!
Make sure .env is in your .gitignore:
```
.env
.env.local
```

Troubleshooting
“LLM analyzer not configured”
- Check .env file location (should be in project root; see the quick check below)
- Verify variable name is correct
- Restart terminal after setting environment variables
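A minimal location check, assuming a POSIX shell and the documented layout (.env next to pyproject.toml):

```bash
# Run from the directory where you invoke copinance;
# both files should be listed without errors
ls -la pyproject.toml .env
```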
“Gemini not available”
Install the required package:
```bash
pip install google-genai
```

“Gemini API key is not configured”
- Verify API key is set correctly
- Check for typos in variable name
- Ensure no quotes around the key value (see the example below)
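To illustrate the last two points (a sketch; exact behavior depends on the dotenv parser in use):

```bash
# Works: no spaces around '=', no quotes
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key

# Often fails: the spaces and quotes can become part of the value
COPINANCEOS_GEMINI_API_KEY = "AIzaSy...your-actual-key"
```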
Ollama Setup (Local LLM)
Use Ollama to run LLMs locally without API keys. Great for privacy and cost savings.
1. Install Ollama
- Download from ollama.ai
- Install and start the Ollama service
- Pull a model: ollama pull llama2 (or mistral, codellama, etc.), then confirm the service responds as shown below
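A quick sanity check before wiring Ollama into Copinance OS (ollama list and the /api/tags endpoint are standard Ollama features, independent of Copinance OS):

```bash
# List the models available locally; llama2 should appear
ollama list

# The local API should answer on the default port 11434
curl http://localhost:11434/api/tags
```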
2. Configure
Option A: .env file (Recommended)
```bash
# Set Ollama as the default provider
COPINANCEOS_LLM_PROVIDER=ollama

# Optional: Customize Ollama settings
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
```

Option B: Environment Variable

```bash
export COPINANCEOS_LLM_PROVIDER=ollama
export COPINANCEOS_OLLAMA_MODEL=llama2
```

3. Verify

```bash
copinance research ask "What is the current price?" --symbol AAPL
```

Per-Workflow Provider Configuration
You can use different providers for different workflows:
```bash
# Use Ollama for static workflows, Gemini for agentic
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=static:ollama,agentic:gemini
```

This allows you to use local models for simple tasks and cloud models for complex analysis.
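With the mapping above in place, the usual verification command applies; assuming research ask runs an agentic workflow, it should now be served by Gemini, while static workflows use Ollama:

```bash
copinance research ask "What is the current price?" --symbol AAPL
```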