
Configuration

Configure Copinance OS for your needs.

Environment Variables

Configuration is done through environment variables or a .env file.

Creating .env File

Create a .env file in your project root (same directory as pyproject.toml):

# .env file
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
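
If you prefer to create the file from a terminal, the following does the same thing and confirms the file sits next to pyproject.toml (the key value is a placeholder):

# create .env in the project root and check its location
cat > .env <<'EOF'
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
EOF
ls pyproject.toml .env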

LLM Provider Setup

Copinance OS supports multiple LLM providers for agentic workflows. You can use either Gemini (cloud-based) or Ollama (local).

Gemini API (Cloud-based)

Recommended for most users. Requires an API key.

1. Get API Key

  1. Go to Google AI Studio
  2. Sign in with your Google account
  3. Click “Create API Key”
  4. Copy your API key

2. Configure

Option A: .env file (Recommended)

COPINANCEOS_GEMINI_API_KEY=your-api-key-here

Option B: Environment Variable

export COPINANCEOS_GEMINI_API_KEY=your-api-key-here
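
Note that export only lasts for the current shell session. To keep the variable set in every session, append the line to your shell startup file (bash shown; use ~/.zshrc for zsh):

# persist the variable across sessions
echo 'export COPINANCEOS_GEMINI_API_KEY=your-api-key-here' >> ~/.bashrc
source ~/.bashrc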

3. Verify

copinance research ask "What is the current price?" --symbol AAPL

If configured correctly, you’ll see AI analysis. If not, check the points below (quick shell commands for them follow the list):

  • .env file is in the project root
  • Variable name is exactly COPINANCEOS_GEMINI_API_KEY
  • No extra spaces around the = sign
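
Run these from the project root to cover the points above; the grep output also lets you spot stray spaces around the = sign:

# confirm the file exists next to pyproject.toml and the variable name is exact
ls pyproject.toml .env
grep COPINANCEOS_GEMINI_API_KEY .env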

Model Selection

Choose which Gemini model to use:

# Default (most capable)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# Latest and fastest
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash
 
# Faster alternative
COPINANCEOS_GEMINI_MODEL=gemini-1.5-flash

Default: gemini-1.5-pro (most capable). Use gemini-2.5-flash for faster responses.
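
Because these are ordinary environment variables, you can also override the model for a single run without editing .env. This assumes the CLI reads the variable at startup, as the examples above suggest:

# one-off run with the faster model
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash copinance research ask "What is the current price?" --symbol AAPL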

Storage Configuration

Configure where data is stored:

# Storage type: 'memory' or 'file'
COPINANCEOS_STORAGE_TYPE=file
 
# Storage path (for file storage)
COPINANCEOS_STORAGE_PATH=~/.copinanceos
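
The ~ in the storage path expands to your home directory. To confirm data is actually being written there, inspect the directory after running a command (whether Copinance OS creates the directory automatically on first use is an assumption here):

# inspect the file-storage location after a run
ls -la ~/.copinanceos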

Complete .env Example

Using Gemini (Cloud):

# LLM Provider (optional, default: gemini)
COPINANCEOS_LLM_PROVIDER=gemini
 
# Gemini API (required for agentic workflows with Gemini)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
 
# Model selection (optional, default: gemini-1.5-pro)
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos

Using Ollama (Local):

# LLM Provider
COPINANCEOS_LLM_PROVIDER=ollama
 
# Ollama configuration (optional, defaults shown)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2
 
# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinanceos

Mixed Configuration (per-workflow):

# Use different providers for different workflows
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=static:ollama,agentic:gemini
 
# Gemini API (for agentic workflows)
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-actual-key
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
 
# Ollama (for static workflows)
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

Security

⚠️ Never commit your .env file to version control!

Make sure .env is in your .gitignore:

.env
.env.local
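
Two git commands are useful here: one confirms the ignore rule matches, the other untracks a .env file that was committed before the rule existed (the local file stays on disk):

# confirm .env is ignored
git check-ignore -v .env

# untrack a previously committed .env (keeps the local file)
git rm --cached .env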

Troubleshooting

“LLM analyzer not configured”

  • Check .env file location (should be in project root)
  • Verify variable name is correct
  • Restart terminal after setting environment variables

“Gemini not available”

Install the required package:

pip install google-genai
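
After installing, you can confirm the package is importable; the google-genai distribution is imported as google.genai:

python -c "from google import genai; print('google-genai is available')"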

“Gemini API key is not configured”

  • Verify the API key is set correctly (a quick check follows this list)
  • Check for typos in variable name
  • Ensure no quotes around the key value
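
If you exported the key in your shell rather than using a .env file, this shows whether the variable is actually visible to new processes; for a .env file, use the grep check shown earlier instead:

# prints the key if it is set in the current shell environment
printenv COPINANCEOS_GEMINI_API_KEY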

Ollama Setup (Local LLM)

Use Ollama to run LLMs locally without API keys. Great for privacy and cost savings.

1. Install Ollama

  1. Download from ollama.ai
  2. Install and start the Ollama service
  3. Pull a model: ollama pull llama2 (or mistral, codellama, etc.); the checks after this list confirm the service and model are ready
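
Before pointing Copinance OS at it, you can confirm the Ollama service is reachable on its default port and the model was pulled:

# the service answers on its default port
curl http://localhost:11434/api/tags

# the pulled model should appear in this list
ollama list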

2. Configure

Option A: .env file (Recommended)

# Set Ollama as the default provider
COPINANCEOS_LLM_PROVIDER=ollama
 
# Optional: Customize Ollama settings
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama2

Option B: Environment Variable

export COPINANCEOS_LLM_PROVIDER=ollama
export COPINANCEOS_OLLAMA_MODEL=llama2

3. Verify

copinance research ask "What is the current price?" --symbol AAPL

Per-Workflow Provider Configuration

You can use different providers for different workflows:

# Use Ollama for static workflows, Gemini for agentic
COPINANCEOS_WORKFLOW_LLM_PROVIDERS=static:ollama,agentic:gemini

This allows you to use local models for simple tasks and cloud models for complex analysis.