# Configuration

Configure settings (LLM, FRED, storage, cache, BSM inputs) for the CLI via environment variables / `.env`, or for library use by passing objects into `get_container()`. Configuration does not change domain math; it only selects providers, keys, and paths.
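The CLI picks up `COPINANCEOS_`-prefixed environment variables. The mapping idea can be sketched in plain Python (an illustration only; `load_settings` is a hypothetical helper, not the project's actual settings loader, which may behave differently):

```python
import os

PREFIX = "COPINANCEOS_"

def load_settings(environ=None):
    """Collect COPINANCEOS_-prefixed variables into a plain dict,
    lowercased with the prefix stripped."""
    environ = os.environ if environ is None else environ
    return {
        key[len(PREFIX):].lower(): value
        for key, value in environ.items()
        if key.startswith(PREFIX)
    }

env = {"COPINANCEOS_GEMINI_API_KEY": "your-key", "PATH": "/usr/bin"}
print(load_settings(env))  # {'gemini_api_key': 'your-key'}
```

Unprefixed variables (like `PATH` above) are ignored, which is why exact variable names matter when troubleshooting.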
## Configuration Methods

- CLI: environment variables or a `.env` file.
- Library: pass `LLMConfig` and an optional `fred_api_key` into `get_container()`. See Library Integration below.
## Environment Variables (CLI)

Create a `.env` file in your project root (the same directory as `pyproject.toml`):

```
# .env
COPINANCEOS_GEMINI_API_KEY=your-api-key-here
```

## LLM Provider Setup

Implemented backends: `gemini`, `openai` (Chat Completions), and `ollama`. The factory rejects unknown provider names (see Using as a Library — LLMConfig).
### Gemini (Cloud)

- Get an API key from Google AI Studio.
- Set in `.env`: `COPINANCEOS_GEMINI_API_KEY=your-key`
- Verify:

```
copinance analyze equity AAPL --question "What is the current price?"
```

Model selection (optional):

```
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro   # default
COPINANCEOS_GEMINI_MODEL=gemini-2.5-flash # faster
```

Text streaming (CLI): for question-driven runs, pass `--stream` on the `analyze` group (before the subcommand) or `--stream` on generic research (`copinance --stream "…"`). `--json` disables streaming. Output prints to stdout as tokens arrive; run metadata and saved JSON still appear afterwards. See Using as a Library — LLM text streaming for `LLMConfig` and programmatic use.
### OpenAI (Cloud)

- Get an API key from OpenAI (or use a base URL for an OpenAI-compatible HTTP API).
- Set in `.env`:

```
COPINANCEOS_LLM_PROVIDER=openai
COPINANCEOS_OPENAI_API_KEY=sk-...
COPINANCEOS_OPENAI_MODEL=gpt-4o-mini
# Optional — custom / enterprise endpoint:
# COPINANCEOS_OPENAI_BASE_URL=https://api.openai.com/v1
```

- Verify:

```
copinance analyze equity AAPL --question "What is the current price?"
```
### Ollama (Local)

- Install from ollama.ai and run `ollama pull` for a model you have locally (e.g. `llama3.2` or `llama3.1`).
- Set in `.env`:

```
COPINANCEOS_LLM_PROVIDER=ollama
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama3.2
```

- Verify:

```
copinance analyze equity AAPL --question "What is the current price?"
```
## FRED API (Macro Data)

Optional. Improves macro analysis; without it, the yfinance fallback is used.

- Get a free key at FRED API.
- Set: `COPINANCEOS_FRED_API_KEY=your-fred-api-key`
- Verify: `copinance analyze macro` (results should show `"source": "fred"` where applicable).
## SEC EDGAR (edgartools)

Question-driven analysis routes SEC filing metadata and filing body tools to `EdgarToolsFundamentalProvider` (`copinance_os.data.providers.sec.edgartools`), built on edgartools (import name `edgar`). The SEC requires a User-Agent identity (name and email) for programmatic access.

Configure identity (pick one):

- Environment: `EDGAR_IDENTITY` — e.g. `Your Name you@example.com` (the form used in the edgartools docs).
- Copinance-prefixed: `COPINANCEOS_EDGAR_IDENTITY` — same format.
- Default: if unset, settings fall back to a built-in project identity so local runs work; override it in production with your own contact string.

Responses are cached in the same cache as other tools (see below), with per-operation TTLs (e.g. filing lists vs. filing text) to limit repeat requests to SEC servers.
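The per-operation TTL idea can be sketched as follows (an illustration of the caching pattern only; the class and TTL values here are hypothetical, not the project's implementation):

```python
import time

class TTLCache:
    """Minimal cache where each operation type has its own time-to-live."""

    def __init__(self, ttls):
        self.ttls = ttls      # operation -> TTL in seconds
        self._store = {}      # (operation, key) -> (expires_at, value)

    def get(self, op, key):
        entry = self._store.get((op, key))
        if entry is not None and entry[0] > time.monotonic():
            return entry[1]
        return None           # miss or expired

    def put(self, op, key, value):
        self._store[(op, key)] = (time.monotonic() + self.ttls[op], value)

# Shorter TTL for filing lists (new filings appear), longer for filing
# text (a filed document does not change).
cache = TTLCache({"filing_list": 3600.0, "filing_text": 7 * 86400.0})
cache.put("filing_list", "AAPL", ["10-K", "10-Q"])
print(cache.get("filing_list", "AAPL"))  # ['10-K', '10-Q']
```

Splitting TTLs by operation is what lets cheap-but-volatile lookups refresh often while heavy, immutable payloads stay cached for days.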
## Option Greek estimation (BSM)

Optional. Affects the analytic delta/gamma/theta/vega/rho attached to options chains when QuantLib is installed.

```
# Annual risk-free rate (decimal, e.g. 0.045). Omit to use the built-in default.
COPINANCEOS_OPTION_GREEKS_RISK_FREE_RATE=0.045
# Default dividend yield when chain metadata has no `dividend_yield`. Omit for 0.
COPINANCEOS_OPTION_GREEKS_DIVIDEND_YIELD_DEFAULT=0
```

Reserved `OptionsChain.metadata` keys and profile preferences are documented in Options chain metadata.
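For intuition about what these two inputs feed, here is a self-contained sketch of the analytic BSM call delta using the standard closed form (stdlib only; the project computes Greeks via QuantLib, and `bsm_call_delta` is a hypothetical name for illustration):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call_delta(spot, strike, t, sigma, r=0.045, q=0.0):
    """Analytic Black-Scholes-Merton delta of a European call.

    r plays the role of COPINANCEOS_OPTION_GREEKS_RISK_FREE_RATE
    (decimal annual rate); q plays the role of
    COPINANCEOS_OPTION_GREEKS_DIVIDEND_YIELD_DEFAULT.
    """
    d1 = (log(spot / strike) + (r - q + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    return exp(-q * t) * norm_cdf(d1)

# At-the-money call, 6 months out, 20% vol: delta sits slightly above 0.5
# because of the positive rate carry.
print(bsm_call_delta(100.0, 100.0, 0.5, 0.20))
```

Changing the risk-free rate or dividend yield shifts `d1` and the `exp(-q*t)` discount, which is exactly why these settings alter the reported Greeks without touching any market data.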
## Storage and Cache

```
# Storage (optional)
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinance

# Cache (optional, default: true)
COPINANCEOS_CACHE_ENABLED=true
```

- Library: use `get_container(..., cache_enabled=False)` or `cache_manager=...`. See Using as a Library.

SEC / edgartools: filing metadata and filing content from EDGAR are stored in the same file cache as tool results (versioned under your persistence/cache paths). Disabling the cache or using a custom `cache_manager` applies to EDGAR-backed calls as well.
## Complete .env Examples

Gemini:

```
COPINANCEOS_LLM_PROVIDER=gemini
COPINANCEOS_GEMINI_API_KEY=AIzaSy...your-key
COPINANCEOS_GEMINI_MODEL=gemini-1.5-pro
COPINANCEOS_FRED_API_KEY=your-fred-key
# Optional — SEC/EDGAR (edgartools); override default identity for production
# EDGAR_IDENTITY="Your Name you@company.com"
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinance
```

OpenAI:

```
COPINANCEOS_LLM_PROVIDER=openai
COPINANCEOS_OPENAI_API_KEY=sk-...
COPINANCEOS_OPENAI_MODEL=gpt-4o-mini
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinance
```

Ollama:

```
COPINANCEOS_LLM_PROVIDER=ollama
COPINANCEOS_OLLAMA_BASE_URL=http://localhost:11434
COPINANCEOS_OLLAMA_MODEL=llama3.2
COPINANCEOS_STORAGE_TYPE=file
COPINANCEOS_STORAGE_PATH=~/.copinance
```

## Security
Never commit `.env`. Add it to `.gitignore`:

```
.env
.env.local
```

## Troubleshooting
- “LLM analyzer not configured”: check the `.env` location and variable names; restart your terminal.
- “Gemini not available”: `pip install google-genai`
- “Gemini API key is not configured”: no quotes around the key; the exact name is `COPINANCEOS_GEMINI_API_KEY`.
- OpenAI errors / “openai package is not installed”: ensure project dependencies are installed (`pip install -e .`); the `openai` library is declared in `pyproject.toml`. Check `COPINANCEOS_OPENAI_API_KEY` when using env-based CLI config.
## Library Integration

When using Copinance OS as a library, pass config into `get_container()`; env vars are for the CLI only.

- `LLMConfig`: required for question-driven analysis. Example: `get_container(llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro"))`.
- FRED: optional. `get_container(..., fred_api_key="your-key")`.
- Storage: to avoid creating a `.copinance` directory on disk, use `get_container(..., storage_type="memory")` or set `COPINANCEOS_STORAGE_TYPE=memory` in the environment. See Storage and Persistence.

Full container options and examples: Using as a Library.