Extending Copinance OS
Add custom data providers, LLM adapters, executors, and strategies. Keep I/O and vendor code in data and infra; keep contracts in domain.ports; register replacements through get_container() overrides so ResearchOrchestrator and CLI paths stay consistent.
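The layering described above (contracts in domain.ports, vendor code in data/infra, replacements registered through container overrides) can be illustrated with a tiny, library-free sketch. All names here are made up for illustration and are not part of Copinance OS:

```python
from typing import Protocol


class QuoteProvider(Protocol):
    """A 'port': the contract the rest of the app depends on."""

    def get_quote(self, symbol: str) -> float: ...


class VendorQuoteProvider:
    """An 'adapter': vendor-specific code kept in the data/infra layer."""

    def get_quote(self, symbol: str) -> float:
        return {"AAPL": 190.0}.get(symbol, 0.0)


class StubQuoteProvider:
    """A replacement registered through an override."""

    def get_quote(self, symbol: str) -> float:
        return 1.0


class Container:
    """Tiny stand-in for a DI container with override support."""

    def __init__(self) -> None:
        self._quote_provider: QuoteProvider = VendorQuoteProvider()

    def quote_provider(self) -> QuoteProvider:
        return self._quote_provider

    def override_quote_provider(self, provider: QuoteProvider) -> None:
        self._quote_provider = provider


container = Container()
print(container.quote_provider().get_quote("AAPL"))  # 190.0
container.override_quote_provider(StubQuoteProvider())
print(container.quote_provider().get_quote("AAPL"))  # 1.0
```

Because callers only resolve the port through the container, swapping the adapter never touches domain code; the real get_container() overrides below follow the same principle.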
Adding a Custom Data Provider
Step 1: Choose Interface
Select the appropriate interface from copinance_os.domain.ports.data_providers:
- MarketDataProvider: Market data (quotes, historical prices)
- FundamentalDataProvider: Financial statements, SEC filings (reference implementation for EDGAR: copinance_os.data.providers.sec.edgartools)
- AlternativeDataProvider: Sentiment, web traffic, alternative data
- MacroeconomicDataProvider: Economic indicators
Step 2: Implement the Interface
```python
from datetime import date, datetime
from typing import Any

import httpx

from copinance_os.domain.models.market import MarketDataPoint, OptionsChain
from copinance_os.domain.ports.data_providers import MarketDataProvider


class AlphaVantageProvider(MarketDataProvider):
    """Alpha Vantage market data provider."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self._client = httpx.AsyncClient()

    async def is_available(self) -> bool:
        """Check if the provider is available."""
        try:
            # Test the API connection here; stubbed to always succeed.
            return True
        except Exception:
            return False

    def get_provider_name(self) -> str:
        return "alpha_vantage"

    async def get_quote(self, symbol: str) -> dict[str, Any]:
        """Get the current quote."""
        response = await self._client.get(
            "https://www.alphavantage.co/query",
            params={
                "function": "GLOBAL_QUOTE",
                "symbol": symbol,
                "apikey": self.api_key,
            },
        )
        data = response.json()
        quote = data.get("Global Quote", {})
        return {
            "symbol": symbol,
            "current_price": float(quote.get("05. price", "0")),
            "volume": int(quote.get("06. volume", "0")),
            # ... map other fields
        }

    async def get_historical_data(
        self,
        symbol: str,
        start_date: datetime,
        end_date: datetime,
        interval: str = "1d",
    ) -> list[MarketDataPoint]:
        """Get historical data."""
        # Implementation here: convert the API response to list[MarketDataPoint].
        return []

    async def get_intraday_data(
        self,
        symbol: str,
        interval: str = "1min",
    ) -> list[MarketDataPoint]:
        """Get intraday data."""
        return []

    async def search_instruments(self, query: str, limit: int = 10) -> list[dict[str, Any]]:
        """Search for instruments."""
        return []

    async def get_options_chain(
        self, underlying_symbol: str, expiration_date: str | None = None
    ) -> OptionsChain:
        """Get an options chain. Return an OptionsChain instance, not a list."""
        exp = date.fromisoformat(expiration_date) if expiration_date else date.today()
        return OptionsChain(
            underlying_symbol=underlying_symbol,
            expiration_date=exp,
            available_expirations=[],
            calls=[],
            puts=[],
        )
```
Step 3: Register in Container
```python
from dependency_injector import providers

from copinance_os.infra.di import get_container

# Use get_container(llm_config=...) if you need question-driven analysis
container = get_container()
container.market_data_provider.override(
    providers.Singleton(AlphaVantageProvider, api_key="your-key")
)
```
Step 4: Use in Analysis
The provider will be used automatically by any analysis that needs market data.
Adding a Custom Analyzer
1. Implement the Interface
```python
from typing import Any

from copinance_os.domain.ports.analyzers import LLMAnalyzer


class MyCustomAnalyzer(LLMAnalyzer):
    """Custom LLM analyzer."""

    async def analyze(self, prompt: str, context: dict[str, Any]) -> str:
        """Analyze using custom logic."""
        # Your implementation
        return "Analysis result"
```
2. Register in Container
Similar to data providers, override the analyzer in your container:
```python
from dependency_injector import providers

from copinance_os.ai.llm.config import LLMConfig
from copinance_os.infra.di import get_container

container = get_container(
    llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro")
)
container.llm_analyzer.override(
    providers.Singleton(MyCustomAnalyzer)
)
```
Note: If you're using LLM features, make sure to provide LLMConfig when creating the container. See Configuration for details.
LLM backends: text streaming
Built-in GeminiProvider, OpenAIProvider, and OllamaProvider subclass LLMProvider and implement _iter_native_text_stream where the remote API supports streaming. supports_native_text_stream() respects disable_native_text_stream. When adding a new vendor adapter, either:
- Implement _iter_native_text_stream and return True from supports_native_text_stream() when the remote API supports streaming, or
- Rely on the base generate_text_stream logic: auto/buffered will call generate_text only.
Events are LLMTextStreamEvent (Pydantic) with kind, text_delta, native_streaming, and optional usage. See Library — LLM text streaming.
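The native-vs-buffered fallback described above can be sketched with a generic async generator. This is an illustration of the pattern only, not the actual LLMProvider base class; all names here are made up:

```python
import asyncio
from collections.abc import AsyncIterator


class TextBackend:
    """Illustrative base: stream natively when supported, else buffer."""

    def supports_native_text_stream(self) -> bool:
        # A vendor adapter with streaming support would return True here.
        return False

    async def generate_text(self, prompt: str) -> str:
        return f"echo: {prompt}"

    async def _iter_native_text_stream(self, prompt: str) -> AsyncIterator[str]:
        # Vendor adapters override this with the remote streaming call.
        raise NotImplementedError
        yield  # makes this an async generator

    async def generate_text_stream(self, prompt: str) -> AsyncIterator[str]:
        if self.supports_native_text_stream():
            # Native path: forward token deltas from the vendor API.
            async for delta in self._iter_native_text_stream(prompt):
                yield delta
        else:
            # Buffered fallback: one blocking call, one synthetic delta.
            yield await self.generate_text(prompt)


async def main() -> list[str]:
    return [delta async for delta in TextBackend().generate_text_stream("hi")]


print(asyncio.run(main()))  # ['echo: hi']
```

A subclass that implements the native iterator and flips supports_native_text_stream() would stream real deltas through the same generate_text_stream entry point.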
Adding a Custom Executor
1. Extend BaseAnalysisExecutor
```python
from typing import Any

from copinance_os.core.execution_engine.base import BaseAnalysisExecutor
from copinance_os.domain.models.job import Job


class MyCustomExecutor(BaseAnalysisExecutor):
    """Custom analysis executor."""

    def get_executor_id(self) -> str:
        # Becomes results["execution_type"] in the payload; register a matching
        # job.execution_type in your JobRunner.
        return "custom_analysis"

    async def validate(self, job: Job) -> bool:
        return job.execution_type == "custom_analysis"

    async def _execute_analysis(
        self, job: Job, context: dict[str, Any]
    ) -> dict[str, Any]:
        # Your implementation
        return {"status": "completed", "results": {}}
```
2. Register in Container
Stock wiring builds the executor list with AnalysisExecutorFactory.create_all in copinance_os.core.execution_engine.factory. For a one-off integration, override container.analysis_executors with your own providers.Factory (or list) that includes MyCustomExecutor, or fork the factory locally. Jobs are run via the JobRunner port; the default implementation finds a matching executor and runs it. For custom global orchestration (queues, retries), implement JobRunner and pass it to ResearchOrchestrator(job_runner=...); the default Container does not expose a job_runner() method.
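The default dispatch described above (find an executor whose validate() accepts the job, then run it) can be sketched generically. Job and the executor API here are simplified stand-ins, not the real copinance_os types:

```python
import asyncio
from dataclasses import dataclass
from typing import Any


@dataclass
class Job:
    execution_type: str


class CustomExecutor:
    """Stand-in executor that only accepts 'custom_analysis' jobs."""

    async def validate(self, job: Job) -> bool:
        return job.execution_type == "custom_analysis"

    async def execute(self, job: Job) -> dict[str, Any]:
        return {"status": "completed", "execution_type": "custom_analysis"}


class SimpleJobRunner:
    """Illustrative runner: the first executor that validates the job wins."""

    def __init__(self, executors: list[Any]) -> None:
        self.executors = executors

    async def run(self, job: Job) -> dict[str, Any]:
        for executor in self.executors:
            if await executor.validate(job):
                return await executor.execute(job)
        raise LookupError(f"no executor for {job.execution_type!r}")


result = asyncio.run(SimpleJobRunner([CustomExecutor()]).run(Job("custom_analysis")))
print(result["status"])  # completed
```

A custom JobRunner implementation would keep this selection step but wrap it with whatever queueing or retry policy you need.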
Custom prompt templates
Question-driven (and other LLM) analysis resolves prompts through a PromptManager. When using the library you can supply your own templates; if you do not, built-in package prompts are used.
Template format: Each template is an object with two string fields: system_prompt and user_prompt. Use {variable_name} placeholders for substitution; escape literal braces as {{ and }}. See Prompt template format in the library guide for the full spec and question-driven prompt variables.
- Overlay via get_container(): Pass prompt_templates={"analyze_question_driven": {"system_prompt": "...", "user_prompt": "..."}}. Only the keys you provide are overridden; the rest use package defaults.
- Custom PromptManager: Pass prompt_manager=PromptManager(templates=...) or PromptManager(resources_dir=Path("...")) to get_container(). With resources_dir, use one JSON file per prompt (e.g. analyze_question_driven.json) containing system_prompt and user_prompt strings.
- Prompt name: The built-in question-driven analyze flow uses the name analyze_question_driven; import ANALYZE_QUESTION_DRIVEN_PROMPT_NAME from copinance_os.ai.llm.resources.
Cache
The default container uses a file-based cache for tool results and agent prompts. When using the library you can disable it or supply your own:
- Disable: get_container(..., cache_enabled=False).
- Custom cache: get_container(..., cache_manager=my_cache_manager). Your instance must match the CacheManager interface (see copinance_os.data.cache). The cache is used for tool outputs and for rendered agent prompts.
You can also set COPINANCEOS_CACHE_ENABLED=false (see Configuration).
Best Practices
- Follow interfaces: Implement all required methods
- Handle errors: Raise appropriate domain exceptions
- Type hints: Use proper type annotations
- Documentation: Document your implementation
- Testing: Write tests for your extensions
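For the testing point above, extensions that implement a port are easy to exercise with a test double returning canned data. A hedged, standard-library-only sketch (the provider and field names are illustrative):

```python
import asyncio
from typing import Any


class FakeQuoteProvider:
    """Test double with the same shape as a real market data provider."""

    async def get_quote(self, symbol: str) -> dict[str, Any]:
        return {"symbol": symbol, "current_price": 123.45, "volume": 1000}


def test_quote_shape() -> None:
    # Drive the async method from a synchronous test.
    quote = asyncio.run(FakeQuoteProvider().get_quote("AAPL"))
    assert quote["symbol"] == "AAPL"
    assert quote["current_price"] > 0


test_quote_shape()
print("ok")  # ok
```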
Available Interfaces
See src/copinance_os/domain/ports/ for all available interfaces:
- data_providers.py: Data provider interfaces
- analytics.py: Options chain Greeks estimator (OptionsChainGreeksEstimator)
- analyzers.py: Analyzer interfaces
- strategies.py: Strategy interfaces
- repositories.py: Repository interfaces
- storage.py: Storage and CacheBackend interfaces
- tools.py: Tool, ToolSchema, ToolParameter (for LLM tools)
- analysis_execution.py: JobRunner (run a job) and AnalysisExecutor (execute analysis) interfaces