Using Copinance OS as a Library
This guide is for developers who want to integrate Copinance OS into their own Python application (web app, script, or service). The project is a pure Python library: no built-in HTTP API or frontend. You get analysis execution, use cases, and data providers; you choose how to expose them.
Multi-turn, question-driven conversations (prior user/assistant turns passed via `conversation_history`) are library-only: the `copinance` CLI never loads a transcript, neither on `analyze …` nor on the root `copinance "…"` natural-language entry; each of those accepts one question per run.
Table of Contents
- Requirements
- Installation
- Core Concepts
- Quick Start: Run an Analysis
- Configuration
- Running analysis (ResearchOrchestrator)
- Gateway, chat UI, and agent progress
- Using Use Cases Directly
- Library API Reference — container entry points, request/response types, all options
- Analysis Profiles
- Storage and Persistence
- Custom Orchestration
- Error Handling
- Complete Example
- Next Steps
Requirements
- Python 3.11+
- For question-driven (AI) analysis: an LLM provider (Gemini, OpenAI Chat Completions, or local Ollama)
- For macro analysis (optional): FRED API key for higher-quality economic data
- For SEC filing tools in question-driven analysis (optional): set `EDGAR_IDENTITY` or `COPINANCEOS_EDGAR_IDENTITY` (name + email, SEC requirement); see Configuration — SEC EDGAR
Installation
Install Copinance OS as a dependency in your project.
From source:
pip install -e /path/to/copinance-os
# or from the repo root:
pip install -e .

Optional: local LLM (Ollama):

pip install -e ".[ollama]"

Depend on a path or git URL in your requirements.txt or pyproject.toml:
# requirements.txt (git URL)
git+https://github.com/copinance/copinance-os.git
# pyproject.toml (editable local path)
[tool.uv.sources]
copinance-os = { path = "../copinance-os", editable = true }

Core Concepts
| Concept | Description |
|---|---|
| Container | Dependency-injection container. You get it via get_container(...) and use it to resolve use cases, ResearchOrchestrator, and data providers. |
| Job | A single run request: scope (instrument vs market), symbol/index, timeframe, and execution_type. Not persisted. |
| ResearchOrchestrator | Preferred entry for running a Job: wraps JobRunner and keeps orchestration consistent with analyze use cases. |
| JobRunner | Port used under the hood to dispatch jobs to executors; override for queues, retries, or custom routing. |
| Analysis modes | Deterministic (instrument or market) and question-driven. The job’s internal type determines which pipeline runs. |
| Use cases | Fine-grained operations (search instruments, get quote, create profile, etc.). Use them when you need more control than a full analysis run. |
Quick Start: Run an Analysis
- Configure the container — pass `LLMConfig` only if you need question-driven analysis; for deterministic instrument/market runs you can use `get_container(llm_config=None, load_from_env=False)` so no LLM is wired.
- Get `ResearchOrchestrator` from the container (recommended entry point for jobs).
- Build a `Job` and run it with `await orchestrator.run_job(job, {})`.
import asyncio
from copinance_os.ai.llm.config import LLMConfig
from copinance_os.infra.di import get_container
from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType
async def main():
# Deterministic-only: no LLM (question-driven routes will not be available)
# container = get_container(llm_config=None, load_from_env=False)
# Question-driven-capable: supply Gemini, OpenAI, or Ollama (see LLMConfig section)
llm_config = LLMConfig(
provider="gemini",
api_key="your-gemini-api-key",
model="gemini-1.5-pro",
)
container = get_container(llm_config=llm_config)
orchestrator = container.research_orchestrator()
job = Job(
scope=JobScope.INSTRUMENT,
market_type=MarketType.EQUITY,
instrument_symbol="AAPL",
timeframe=JobTimeframe.MID_TERM,
execution_type="deterministic_instrument_analysis",
)
result = await orchestrator.run_job(job, {})
if result.success:
print(result.results)
if result.report:
print(result.report.summary)
else:
print("Error:", result.error_message)
asyncio.run(main())

Configuration
Container: get_container()
Get the global container with optional overrides:
from copinance_os.infra.di import get_container
from copinance_os.ai.llm.config import LLMConfig
container = get_container(
llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro"),
fred_api_key="your-fred-api-key", # optional; for macro data
load_from_env=True, # default: try env for LLM if llm_config is None
prompt_templates=None, # optional; see Prompt templates below
prompt_manager=None, # optional; custom PromptManager instance
cache_enabled=None, # optional; True/False or None (use settings)
cache_manager=None, # optional; use your own CacheManager
storage_type=None, # optional; "file" | "memory" to override settings
storage_path=None, # optional; root path for file storage
)

- Library usage: Pass `llm_config` explicitly. Environment variables are used by the CLI; for your app you should provide config in code (or your own config layer).
- `fred_api_key`: Optional. If provided, macro analysis can use FRED for better economic data.
- `load_from_env`: If `True` and `llm_config` is `None`, the container will try to load LLM settings from env (e.g. `COPINANCEOS_GEMINI_API_KEY`). Prefer passing `llm_config` in library code.
- `prompt_templates` / `prompt_manager`: Optional. See Prompt templates below.
- `cache_enabled` / `cache_manager`: Optional. See Cache below.
- `storage_type` / `storage_path`: Optional. See Storage and Persistence below. Use `storage_type="memory"` to avoid creating a `.copinance` directory on disk.
Performance: The container is created only on first use (lazy proxy), and use cases and providers are singletons created when first requested. See Architecture — Container and performance for details.
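The lazy-singleton behavior described above can be pictured with a generic sketch. This is not Copinance OS's actual `Container` code, only an illustration of the pattern: each dependency is built on first request and the same instance is returned afterwards.

```python
class LazyContainer:
    """Generic sketch of the lazy-singleton pattern; NOT the real
    Copinance OS Container, just an illustration of its behavior."""

    def __init__(self) -> None:
        self._singletons: dict[str, object] = {}

    def _get(self, name: str, factory):
        # Build on first request only, then reuse the cached instance.
        if name not in self._singletons:
            self._singletons[name] = factory()
        return self._singletons[name]

    def get_quote_use_case(self):
        return self._get("get_quote", lambda: object())


c = LazyContainer()
print(c.get_quote_use_case() is c.get_quote_use_case())  # True — same instance reused
```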
LLMConfig (required for question-driven analysis)
For question-driven analysis you must supply LLMConfig when creating the container.
Implemented providers: LLMProviderFactory supports gemini, ollama, and openai (requires the openai Python package; it is a core dependency of this project). Unknown provider names raise ValueError (there is no anthropic adapter in-tree yet).
from copinance_os.ai.llm.config import LLMConfig
llm_config = LLMConfig(
provider="gemini", # "gemini" | "ollama" | "openai"
api_key="your-api-key", # required for Gemini and OpenAI; optional for Ollama
model="gemini-1.5-pro", # optional; provider default if omitted
temperature=0.7,
max_tokens=4096,
base_url=None, # Ollama base URL, or optional OpenAI-compatible API base
text_streaming_mode="auto", # "auto" | "native" | "buffered" — see LLM text streaming below
execution_type_providers={}, # e.g. {"question_driven_analysis": "gemini"}
provider_config={}, # e.g. {"disable_native_text_stream": True}
)

See Configuration for provider-specific details and environment variables used by the CLI.
LLM text streaming (library / LLMProvider)
For plain text generation (no tool-calling loop), each backend implements LLMProvider.generate_text_stream, which yields structured LLMTextStreamEvent objects (kind: text_delta, done, or error). Import from copinance_os.ai.llm or copinance_os.ai.llm.streaming.
| stream_mode / text_streaming_mode | Behavior |
|---|---|
| auto (default) | Use native HTTP/API streaming when the provider supports it (Gemini, OpenAI, and Ollama do); if native streaming fails, fall back to a single generate_text call and emit one delta. |
| native | Require native streaming; raises if disabled or unsupported, or emits an error event if the stream fails after starting. |
| buffered | Always call generate_text once, then emit one text_delta and done (works for any model; native_streaming=False on events). |
- Default mode on the provider comes from `LLMConfig.text_streaming_mode`. You can override per call: `await provider.generate_text_stream("…", stream_mode="buffered")`.
- Disable native streaming (e.g. model or proxy quirks): set `provider_config={"disable_native_text_stream": True}` in `LLMConfig`, or construct `GeminiProvider` / `OpenAIProvider` / `OllamaProvider` with `disable_native_text_stream=True`. In `auto` mode, the provider then uses the buffered path only.
- Question-driven analysis uses `generate_with_tools` with optional `stream=True` and `on_stream_event` (the CLI sets `stream` via job context and uses a stdout handler). Results include `analysis_streamed` when streaming was enabled. You can still call `generate_text_stream` on the `LLMProvider` directly for plain-text-only use cases.
from copinance_os.ai.llm.providers.factory import LLMProviderFactory
from copinance_os.ai.llm.config import LLMConfig
provider = LLMProviderFactory.create_provider(
"gemini",
LLMConfig(provider="gemini", api_key="...", text_streaming_mode="auto"),
)
async for event in provider.generate_text_stream("Briefly explain beta."):
if event.kind == "text_delta":
print(event.text_delta, end="", flush=True)
elif event.kind == "error":
print("Error:", event.error_message)

Cache
The built-in cache stores tool results (e.g. quotes, fundamentals), EDGAR/edgartools responses (filing lists and filing content), and rendered agent prompts under the configured storage path. Library users can disable it or supply their own cache.
- Default: Cache is enabled (see also `COPINANCEOS_CACHE_ENABLED` in Configuration).
- Disable cache: Pass `cache_enabled=False` to `get_container()`. No tool or prompt caching is used; every request hits the providers.
- Use your own cache: Pass `cache_manager=my_cache_manager` to `get_container()`. Your instance must implement the same interface as `CacheManager` (from `copinance_os.data.cache`). When provided, `cache_enabled` is ignored.
from copinance_os.infra.di import get_container
# Disable cache (e.g. for real-time data only)
container = get_container(llm_config=llm_config, cache_enabled=False)
# Or inject your own CacheManager (e.g. Redis-backed)
container = get_container(llm_config=llm_config, cache_manager=my_cache_manager)

Prompt templates
Question-driven (and other LLM) analysis uses prompt templates to build system and user prompts. By default, Copinance OS uses built-in templates from the package. As a library user you can inject your own so that your app controls wording, tone, and placeholders.
- If you pass nothing: The default `PromptManager` is used and all prompts come from package defaults.
- If you pass `prompt_templates`: A dict overlay is used. Keys are prompt names (e.g. `analyze_question_driven`), values follow the template format below. Only the names you provide are overridden; any other name falls back to the built-in default.
- If you pass `prompt_manager`: Your own `PromptManager` instance is used for all prompt resolution (e.g. `PromptManager(templates=...)` or `PromptManager(resources_dir=Path("my_prompts"))`).
Prompt template format
Each template is an object with exactly two string fields:
| Field | Type | Description |
|---|---|---|
| system_prompt | string | Instructions and context for the LLM (role, style, constraints). |
| user_prompt | string | The task or query sent as the user message; can include tool descriptions and format rules. |
Variable substitution: Use Python-style placeholders {variable_name} in either string. When the template is rendered, every placeholder is replaced by the value passed for that name. To include a literal { or } in the output (e.g. JSON examples in the prompt), escape as {{ and }}.
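The substitution and escaping rules behave like Python's `str.format`; a minimal illustration in plain Python (this is not Copinance OS code, only a demonstration of `{name}` substitution and `{{ }}` escaping):

```python
# {question} is a placeholder; {{ and }} render as literal braces,
# so a JSON example inside the prompt survives rendering intact.
template = 'Task: {question}\nReply with JSON like {{"tool": "..."}}'
rendered = template.format(question="What are the key risks?")
print(rendered)
```

The rendered text contains `{"tool": "..."}` with real braces, which is why JSON snippets inside prompt templates must double their braces.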
In code (dict overlay):
{
"system_prompt": "You are a financial assistant. User level: {financial_literacy}.",
"user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}",
}

In a file (resources_dir): One JSON file per prompt name, e.g. analyze_question_driven.json:
{
"system_prompt": "You are a financial assistant. User level: {financial_literacy}.",
"user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}"
}

Question-driven prompt variables: The built-in question-driven analyze flow uses the prompt name analyze_question_driven and supplies these variables when rendering; your custom template must use the same names if you override it:
| Variable | Description |
|---|---|
| question | The user’s question for this call (may be prefixed with symbol context by the executor). Prior Q&A for multi-turn sessions are not pasted into this template; they are supplied separately via prior_conversation to each LLM provider. |
| tools_description | Text describing available tools. |
| tool_examples | Example tool-call snippets. |
| financial_literacy | "beginner", "intermediate", or "advanced". |
| current_date | UTC calendar date as YYYY-MM-DD (for relative ranges in prompts). |
You can import the constant for the prompt name: ANALYZE_QUESTION_DRIVEN_PROMPT_NAME from copinance_os.ai.llm.resources.
Example: overlay a single template (question-driven analysis)
from copinance_os.infra.di import get_container
from copinance_os.ai.llm.config import LLMConfig
from copinance_os.ai.llm.resources import ANALYZE_QUESTION_DRIVEN_PROMPT_NAME
prompt_templates = {
ANALYZE_QUESTION_DRIVEN_PROMPT_NAME: {
"system_prompt": "You are a concise financial assistant. User level: {financial_literacy}.",
"user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}\n\nRespond with a JSON tool call only.",
},
}
container = get_container(
llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro"),
prompt_templates=prompt_templates,
)

Example: full control with PromptManager
from pathlib import Path
from copinance_os.infra.di import get_container
from copinance_os.ai.llm.resources import PromptManager
# Load custom prompts from a directory (JSON files: analyze_question_driven.json, etc.)
pm = PromptManager(resources_dir=Path("config/prompts"), use_package_data=True)
container = get_container(
llm_config=my_llm_config,
prompt_manager=pm,
)

Running analysis (ResearchOrchestrator)
Job model
Build a Job with:
| Field | Description |
|---|---|
| scope | JobScope.INSTRUMENT or JobScope.MARKET |
| market_type | MarketType.EQUITY or MarketType.OPTIONS (for instrument scope) |
| instrument_symbol | e.g. "AAPL" (required when scope=INSTRUMENT) |
| market_index | e.g. "SPY" (for scope=MARKET; default "SPY") |
| timeframe | JobTimeframe.SHORT_TERM, MID_TERM, or LONG_TERM |
| execution_type | "deterministic_instrument_analysis", "deterministic_market_analysis", "question_driven_instrument_analysis", or "question_driven_market_analysis" |
| profile_id | Optional UUID; sets financial literacy and preferences for the run |
| parameters | Optional dict (reserved; built-in executors use the context dict passed to run() instead) |
Analysis modes and context
Built-in executors read mode-specific options from the context dict (second argument to research_orchestrator.run_job(job, context)), not from job.parameters:
- `deterministic_instrument_analysis` — Instrument scope, deterministic equity/options analysis. Context may include `expiration_date` (single expiry), `expiration_dates` (list of YYYY-MM-DD strings, merged with `expiration_date` when both are set), and `option_side` for options.
- `deterministic_market_analysis` — Market scope, deterministic macro regime dashboard. Context includes `market_index`, `lookback_days`, and `include_*` booleans.
- `question_driven_instrument_analysis` — Instrument scope, tool-using analysis; requires `LLMConfig`. Context includes `question`, optional `conversation_history` (list of `{"role","content"}` dicts: alternating user/assistant, ending with assistant), optional options context, `stream`, optional `run_id` (correlation for logs and UI), optional `progress_sink` (see Gateway, chat UI, and agent progress), optional `include_agent_progress_timeline` (default: true — adds `results["agent_progress_timeline"]` for REST/SSE summaries; see developer guide), and `include_prompt`.
- `question_driven_market_analysis` — Market scope, tool-using analysis; requires `LLMConfig`. Context includes `question`, optional `conversation_history` (same shape), `market_index`, `stream`, optional `run_id`, optional `progress_sink`, the same optional `include_agent_progress_timeline`, and `include_prompt`.
Example: deterministic and question-driven jobs
from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType
# Equity (no extra context)
job_equity = Job(
scope=JobScope.INSTRUMENT,
market_type=MarketType.EQUITY,
instrument_symbol="AAPL",
timeframe=JobTimeframe.MID_TERM,
execution_type="deterministic_instrument_analysis",
)
# Options — pass expiration and side in context
job_options = Job(
scope=JobScope.INSTRUMENT,
market_type=MarketType.OPTIONS,
instrument_symbol="AAPL",
timeframe=JobTimeframe.MID_TERM,
execution_type="deterministic_instrument_analysis",
)
context_options = {"expiration_date": "2026-06-19", "option_side": "all"}
# Or multiple expiries (deterministic options analysis fetches each; results may include `multi_expiration`)
# context_options = {"expiration_dates": ["2026-06-19", "2026-09-18"], "option_side": "all"}
# Macro — pass market_index and lookback in context
job_macro = Job(
scope=JobScope.MARKET,
market_index="SPY",
timeframe=JobTimeframe.MID_TERM,
execution_type="deterministic_market_analysis",
)
context_macro = {"market_index": "SPY", "lookback_days": 180}
# Question-driven analyze — pass question in context (required)
job_agent = Job(
scope=JobScope.INSTRUMENT,
market_type=MarketType.EQUITY,
instrument_symbol="AAPL",
timeframe=JobTimeframe.MID_TERM,
execution_type="question_driven_instrument_analysis",
)
context_agent = {"question": "What are the key risks?"}
# Optional follow-up: pass prior turns as dicts (see multi-turn section below)
# context_followup = {"question": "What mitigations does the 10-K mention?", "conversation_history": [...]}

Run and read result
orchestrator = container.research_orchestrator()
# Pass context for options, macro, or question-driven analysis; use {} for equity
result = await orchestrator.run_job(job_equity, {})
# result = await orchestrator.run_job(job_options, context_options)
# result = await orchestrator.run_job(job_agent, context_agent)
# result is RunJobResult
# result.success: bool
# result.results: dict | None — analysis output (question-driven success may include "conversation_turns" for multi-turn chaining)
# result.error_message: str | None
# result.report: AnalysisReport | None — summary, key_metrics, methodology, assumptions, limitations
# result.report_exclusion_reason: ReportExclusionReason | None — set if no report envelope exists for the executor type

All analysis execution is async; use asyncio.run() or run inside an async framework.
Gateway, chat UI, and agent progress
Copinance OS is a library, not an HTTP server. If you expose analysis to multiple users (e.g. a chat UI), put authentication, tenancy, rate limits, and LLM API keys in your own gateway or BFF—not inside domain/.
| Concern | Recommended approach |
|---|---|
| Identity | Validate JWT/OAuth/session in your gateway; map users to profile_id (or pass None). |
| Isolation | Scope storage/cache keys per tenant or user; do not share mutable request state across users. |
| LLM keys | Configure LLMConfig / env per deployment or tenant; never embed keys in client payloads. |
| Rate limiting | Enforce before ResearchOrchestrator.run_job. |
| Transport | Use your framework’s HTTP/SSE/WebSocket; serialize progress from a ProgressSink (see copinance_os.domain.models.agent_progress). |
| Cancellation | Cancel the asyncio task when the client disconnects. |
| Secrets | Do not log full tool args, prompts, or PII; progress events are summarised but still sensitive. |
Correlation: pass optional run_id on AnalyzeInstrumentRequest / AnalyzeMarketRequest, or in run_job(..., context), so logs and UI match your chat session. When run_id is set, DefaultJobRunner binds run_id and job_execution_type into structlog context for the run.
Structured progress: implement ProgressSink (copinance_os.domain.ports.progress) and pass progress_sink in the job context (same place as stream for token streaming). Example queue-backed sink for bridging to your transport:
import asyncio
from copinance_os.domain.models.agent_progress import AgentProgressEvent
class QueueProgressSink:
def __init__(self, queue: asyncio.Queue[AgentProgressEvent]) -> None:
self._queue = queue
async def emit(self, event: AgentProgressEvent) -> None:
await self._queue.put(event)

Wire `QueueProgressSink()` as `context["progress_sink"]` with `stream=True` for question-driven runs. Use `LLMProvider` via the container — do not import vendor LLM SDKs from application/gateway code.
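One way to bridge the queue to your transport (SSE write, WebSocket send) is a drain loop that runs alongside the analysis task. The sketch below is illustrative only: it uses a stub event class and a `None` sentinel in place of the real `AgentProgressEvent` model and `run_job` call, so only the shape of the producer/consumer wiring is shown.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class StubEvent:
    """Stand-in for AgentProgressEvent (copinance_os.domain.models.agent_progress)."""
    kind: str
    message: str


async def consume(queue: asyncio.Queue) -> list[str]:
    """Drain events until a sentinel; a real app would write each one to SSE/WebSocket."""
    lines: list[str] = []
    while True:
        event = await queue.get()
        if event is None:  # sentinel: the analysis task has finished
            break
        lines.append(f"{event.kind}: {event.message}")
    return lines


async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()

    async def fake_run() -> None:
        # Stands in for: await orchestrator.run_job(job, {"progress_sink": sink, "stream": True})
        await queue.put(StubEvent("tool_call", "get_quote AAPL"))
        await queue.put(StubEvent("done", "analysis complete"))
        await queue.put(None)

    run_task = asyncio.create_task(fake_run())
    lines = await consume(queue)
    await run_task
    return lines


print(asyncio.run(main()))
```

In a real gateway, replace `fake_run` with the orchestrator call (passing your `QueueProgressSink` in the context) and signal the sentinel when that task completes; cancel both tasks if the client disconnects.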
A fuller client checklist (event kinds, rollback, versioning, cancellation) is in the developer guide: Agent progress & chat integration (clients).
Using Use Cases Directly
When you need a single operation (e.g. search, quote, profile) instead of a full analysis run, use the use cases from the container. Each is a factory: call container.<use_case>() to get an instance, then await use_case.execute(request).
Available use cases
| Use case | Method | Purpose |
|---|---|---|
| Search instruments | search_instruments_use_case() | Search by name or symbol |
| Get instrument | get_instrument_use_case() | Get cached instrument by symbol |
| Get quote | get_quote_use_case() | Current market quote for a symbol |
| Get historical data | get_historical_data_use_case() | OHLCV history for symbol and date range |
| Get options chain | get_options_chain_use_case() | Options chain for an underlying symbol |
| Get fundamentals | get_stock_fundamentals_use_case() | Fundamentals for a symbol (financials, ratios) |
| Analyze instrument | analyze_instrument_use_case() | Progressive instrument analysis (deterministic or question-driven) |
| Analyze market | analyze_market_use_case() | Progressive market analysis (deterministic or question-driven) |
| Create profile | create_profile_use_case() | Create analysis profile |
| Get / list profiles | get_profile_use_case(), list_profiles_use_case() | Read profiles |
| Current profile | get_current_profile_use_case(), set_current_profile_use_case() | Get/set current profile |
| Delete profile | delete_profile_use_case() | Delete a profile |
Example: search, quote, historical data, options, fundamentals
from datetime import datetime
from copinance_os.infra.di import get_container
from copinance_os.research.workflows.market import (
SearchInstrumentsRequest,
GetQuoteRequest,
GetHistoricalDataRequest,
GetOptionsChainRequest,
)
from copinance_os.research.workflows.fundamentals import GetStockFundamentalsRequest
container = get_container(llm_config=my_llm_config)
# Search by name or symbol
search_uc = container.search_instruments_use_case()
search_res = await search_uc.execute(SearchInstrumentsRequest(query="Apple", limit=5))
for s in search_res.instruments:
print(s.symbol, s.name)
# Current quote
quote_uc = container.get_quote_use_case()
quote_res = await quote_uc.execute(GetQuoteRequest(symbol="AAPL"))
# quote_res.quote: dict with current_price, volume, etc.
# Historical OHLCV
hist_uc = container.get_historical_data_use_case()
hist_res = await hist_uc.execute(
GetHistoricalDataRequest(
symbol="AAPL",
start_date=datetime(2024, 1, 1),
end_date=datetime(2024, 12, 31),
interval="1d",
)
)
# hist_res.data: list of MarketDataPoint
# Options chain
options_uc = container.get_options_chain_use_case()
options_res = await options_uc.execute(
GetOptionsChainRequest(underlying_symbol="AAPL", expiration_date=None)
)
# options_res.chain: OptionsChain (calls, puts, underlying_price, etc.)
# Fundamentals (financial statements, ratios)
fundamentals_uc = container.get_stock_fundamentals_use_case()
fund_res = await fundamentals_uc.execute(
GetStockFundamentalsRequest(symbol="AAPL", periods=5, period_type="annual")
)
# fund_res.fundamentals: StockFundamentals (income_statements, balance_sheets, ratios, etc.)

Example: progressive analyze
from copinance_os.research.workflows.analyze import (
AnalyzeInstrumentRequest,
AnalyzeMarketRequest,
)
from copinance_os.domain.models.market import MarketType
# Deterministic equity analysis
instrument_uc = container.analyze_instrument_use_case()
equity_res = await instrument_uc.execute(
AnalyzeInstrumentRequest(symbol="AAPL")
)
# Question-driven options analysis
options_res = await instrument_uc.execute(
AnalyzeInstrumentRequest(
symbol="AAPL",
market_type=MarketType.OPTIONS,
expiration_date="2026-06-19",
question="What does skew imply about sentiment?",
)
# Multiple expiries: use expiration_dates (optional expiration_date is merged)
# AnalyzeInstrumentRequest(
# symbol="AAPL",
# market_type=MarketType.OPTIONS,
# expiration_dates=["2026-06-19", "2026-09-18"],
# question="How does IV compare across these expiries?",
# )
)
# Deterministic macro / market regime analysis
market_uc = container.analyze_market_use_case()
macro_res = await market_uc.execute(
AnalyzeMarketRequest(market_index="SPY", lookback_days=90)
)
# Question-driven market analysis
market_question_res = await market_uc.execute(
AnalyzeMarketRequest(
market_index="SPY",
question="Is this a risk-on or risk-off regime?",
)
)

Multi-turn question-driven analysis
This section is Python-only; the CLI cannot supply conversation_history (see the note at the top of this page). Use this when the user’s new question should see prior user/assistant messages from an earlier run (same symbol / market scope). Each LLM provider maps history to its native chat API (Gemini Content list + systemInstruction; OpenAI Chat Completions messages; Ollama /api/chat messages).
- Set `conversation_history` to a list of `LLMConversationTurn` that alternates user → assistant → … and ends with assistant.
- Set `question` to the new user message only (do not duplicate the latest user turn inside `conversation_history`).
- After a successful question-driven run, `results["conversation_turns"]` may contain the full transcript (including the latest assistant reply) as serializable dicts — reuse them to build the next request's `conversation_history`.
`conversation_history` is invalid in `mode=deterministic`.
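The alternation rule can be checked before building a request. This helper is not part of Copinance OS; it is a sketch of the documented constraint (roles alternate user/assistant, the history ends with an assistant turn, and an empty list is treated as "no history"):

```python
def is_valid_history(turns: list[dict]) -> bool:
    """Sketch of the documented conversation_history shape:
    user -> assistant -> ... ending with assistant (so an even count).
    Treats an empty list as invalid; pass no history instead."""
    if not turns or len(turns) % 2 != 0:
        return False
    for i, turn in enumerate(turns):
        expected = "user" if i % 2 == 0 else "assistant"
        if turn.get("role") != expected:
            return False
    return True


history = [
    {"role": "user", "content": "What was the most recent closing price?"},
    {"role": "assistant", "content": "Summarized answer from the first run."},
]
print(is_valid_history(history))                 # True
print(is_valid_history(history + [history[0]]))  # False — ends with a user turn
```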
import asyncio
from copinance_os.domain.models.llm_conversation import LLMConversationTurn
from copinance_os.infra.di import get_container
from copinance_os.research.workflows.analyze import (
AnalyzeInstrumentRequest,
AnalyzeMode,
)
async def follow_up_example() -> None:
container = get_container(load_from_env=True)
uc = container.analyze_instrument_use_case()
first = await uc.execute(
AnalyzeInstrumentRequest(
symbol="AAPL",
question="What was the most recent closing price?",
mode=AnalyzeMode.QUESTION_DRIVEN,
)
)
if not first.success or not first.results:
return
turns = first.results.get("conversation_turns") or []
history = [LLMConversationTurn.model_validate(t) for t in turns]
second = await uc.execute(
AnalyzeInstrumentRequest(
symbol="AAPL",
question="How does that compare to the 52-week high?",
mode=AnalyzeMode.QUESTION_DRIVEN,
conversation_history=history,
)
)
_ = second # use second.results, second.report, etc.
asyncio.run(follow_up_example())

Library API Reference
Use this section as a single reference for every option available when using Copinance OS as a library. All request/response types and container entry points are listed with correct module paths. Python examples on this page use the imports and types named in the tables in this section; they have been checked against the current package layout—if a snippet fails, compare your version to the cited modules (domain.models.*, research.workflows.*, infra.di, ai.llm).
Container entry points
Get the container with get_container(...) from copinance_os.infra.di. You can pass prompt_templates or prompt_manager for custom prompts (see Prompt templates), cache_enabled or cache_manager for cache control (see Cache), and storage_type or storage_path to avoid or customize disk usage (see Storage and Persistence). Then call these methods to obtain use cases, ResearchOrchestrator, analyze runners, and (advanced) analysis_executors:
| Entry point | Method | Returns | Purpose |
|---|---|---|---|
| Market | container.search_instruments_use_case() | SearchInstrumentsUseCase | Search by name or symbol |
| | container.get_instrument_use_case() | GetInstrumentUseCase | Get cached instrument by symbol |
| | container.get_quote_use_case() | GetQuoteUseCase | Current quote for a symbol |
| | container.get_historical_data_use_case() | GetHistoricalDataUseCase | OHLCV history for symbol/date range |
| | container.get_options_chain_use_case() | GetOptionsChainUseCase | Options chain for underlying |
| | container.get_stock_fundamentals_use_case() | GetStockFundamentalsUseCase | Fundamentals for a symbol |
| Analyze | container.analyze_instrument_use_case() | AnalyzeInstrumentUseCase | Progressive instrument analysis |
| | container.analyze_market_use_case() | AnalyzeMarketUseCase | Progressive market analysis |
| Runners (override these) | container.analyze_instrument_runner() | AnalyzeInstrumentRunner impl | Default: builds Job, calls ResearchOrchestrator.run_job |
| | container.analyze_market_runner() | AnalyzeMarketRunner impl | Default: builds Job, calls ResearchOrchestrator.run_job |
| Orchestration | container.research_orchestrator() | ResearchOrchestrator | Preferred: run_job(job, context) (wraps JobRunner internally) |
| | container.analysis_executors() | list[AnalysisExecutor] | Registered analysis executors (advanced: validate/execute yourself) |
| Profiles | container.create_profile_use_case() | CreateProfileUseCase | Create analysis profile |
| | container.get_profile_use_case() | GetProfileUseCase | Get profile by ID |
| | container.list_profiles_use_case() | ListProfilesUseCase | List profiles (paginated) |
| | container.get_current_profile_use_case() | GetCurrentProfileUseCase | Get current profile |
| | container.set_current_profile_use_case() | SetCurrentProfileUseCase | Set current profile |
| | container.delete_profile_use_case() | DeleteProfileUseCase | Delete profile |
| Infrastructure | container.market_data_provider() | MarketDataProvider | Market data (quote, history, search, options) |
| | container.cache_manager() | CacheManager | Cache for tool/CLI data |
The JobRunner port is not exposed as a container method; customize orchestration via ResearchOrchestrator, executor lists, or a custom Container (see Extending).
Imports: Container from copinance_os.infra.di; request/response types from the modules below.
Market use cases — copinance_os.research.workflows.market
DTOs for quote/history/options/instrument are defined in copinance_os.domain.models.market_requests and re-exported from this workflow module for convenience.
| Use case | Request | Response | Request fields (all optional except noted) |
|---|---|---|---|
| Search instruments | SearchInstrumentsRequest | SearchInstrumentsResponse | query (required), limit (default 10, 1–100), search_mode (InstrumentSearchMode: auto / symbol / general) |
| Get instrument | GetInstrumentRequest | GetInstrumentResponse | symbol (required). Response: instrument: Stock | None |
| Get quote | GetQuoteRequest | GetQuoteResponse | symbol (required). Response: quote: dict, symbol: str |
| Get historical data | GetHistoricalDataRequest | GetHistoricalDataResponse | symbol, start_date, end_date (required; datetime), interval (default "1d"). Response: data: list[MarketDataPoint], symbol |
| Get options chain | GetOptionsChainRequest | GetOptionsChainResponse | underlying_symbol (required), expiration_date: str | None (YYYY-MM-DD). Response: chain: OptionsChain, underlying_symbol. With the default container, each OptionContract may include greeks (BSM estimates via QuantLib) when spot and implied vol are available; see Options & Greeks. |
Enum: InstrumentSearchMode in the same module (auto, symbol, general).
Analyze use cases — copinance_os.research.workflows.analyze
Request and mode types are defined in copinance_os.domain.models.analysis (AnalyzeInstrumentRequest, AnalyzeMarketRequest, AnalyzeMode, routing helpers). research.workflows.analyze re-exports them next to AnalyzeInstrumentUseCase / AnalyzeMarketUseCase so a single import path still works.
| Use case | Request | Response | Request fields |
|---|---|---|---|
| Analyze instrument | AnalyzeInstrumentRequest | RunJobResult | symbol (required), market_type (MarketType.EQUITY / OPTIONS, default EQUITY), timeframe (optional; defaults by market type), question, mode (auto / deterministic / question_driven), conversation_history (optional; multi-turn question-driven only), expiration_date (optional YYYY-MM-DD), expiration_dates (optional list of YYYY-MM-DD; merged with expiration_date), option_side, profile_id, include_prompt_in_results, stream |
| Analyze market | AnalyzeMarketRequest | RunJobResult | market_index (default "SPY"), timeframe (default MID_TERM), question, mode (auto / deterministic / question_driven), conversation_history (optional; multi-turn question-driven only), lookback_days (default 252, 1–2520), include_vix, include_market_breadth, include_sector_rotation, include_rates, include_credit, include_commodities, include_labor, include_housing, include_manufacturing, include_consumer, include_global, include_advanced, profile_id, include_prompt_in_results, stream |
Ports for custom runners: AnalyzeInstrumentRunner and AnalyzeMarketRunner in copinance_os.domain.ports.analysis_execution — each: async def run(self, request: ...Request) -> RunJobResult.
Profile use cases — copinance_os.research.workflows.profile
| Use case | Request | Response | Request fields |
|---|---|---|---|
| Create profile | CreateProfileRequest | CreateProfileResponse | financial_literacy (FinancialLiteracy.BEGINNER / INTERMEDIATE / ADVANCED), display_name, preferences: dict[str, str] |
| Get profile | GetProfileRequest | GetProfileResponse | profile_id: UUID |
| List profiles | ListProfilesRequest | ListProfilesResponse | limit (default 100), offset (default 0) |
| Get current profile | GetCurrentProfileRequest | GetCurrentProfileResponse | (no fields) |
| Set current profile | SetCurrentProfileRequest | SetCurrentProfileResponse | profile_id: UUID | None (None clears current) |
| Delete profile | DeleteProfileRequest | DeleteProfileResponse | profile_id: UUID |
Type: FinancialLiteracy from copinance_os.domain.models.profile.
Fundamentals use case — copinance_os.research.workflows.fundamentals
| Use case | Request | Response | Request fields |
|---|---|---|---|
| Get fundamentals | GetStockFundamentalsRequest | GetStockFundamentalsResponse | symbol (required), periods (default 5), period_type ("annual" or "quarterly", default "annual"). Response: fundamentals: StockFundamentals |
Domain types (jobs and results)
| Type | Module | Use |
|---|---|---|
| Job | copinance_os.domain.models.job | Analysis execution context: scope, market_type, instrument_symbol, market_index, timeframe, execution_type, profile_id |
| RunJobResult | copinance_os.domain.models.job | success, results, error_message, report (AnalysisReport | None), report_exclusion_reason (ReportExclusionReason | None) |
| AnalysisReport | copinance_os.domain.models.analysis_report | summary, key_metrics, methodology, assumptions, limitations |
| ReportExclusionReason | copinance_os.domain.models.job | e.g. UNKNOWN_EXECUTOR_TYPE when results exist but no report builder is registered |
| JobScope | copinance_os.domain.models.job | INSTRUMENT, MARKET |
| JobTimeframe | copinance_os.domain.models.job | SHORT_TERM, MID_TERM, LONG_TERM |
| MarketType | copinance_os.domain.models.market | EQUITY, OPTIONS |
| OptionSide | copinance_os.domain.models.market | CALL, PUT, ALL |
| LLMConversationTurn | copinance_os.domain.models.llm_conversation | role (user / assistant), content; used for conversation_history on analyze requests |
| MarketDataPoint | copinance_os.domain.models.market | OHLCV + symbol, timestamp |
| OptionsChain | copinance_os.domain.models.market | underlying_symbol, expiration_date, calls, puts, underlying_price, etc. |
| Stock | copinance_os.domain.models.stock | Instrument entity (symbol, name, exchange, sector, …) |
Override points
- Custom analyze execution: Implement `AnalyzeInstrumentRunner` or `AnalyzeMarketRunner` and override `container.analyze_instrument_runner` or `container.analyze_market_runner` (see Custom executors and runners).
- Custom job orchestration: Implement the `JobRunner` port (`copinance_os.domain.ports.analysis_execution`) and wire it when building a `ResearchOrchestrator`, or use a custom `Container`/`set_container()` so your app owns the wiring. The stock container does not expose `job_runner()` as a getter.
Using analyze without jobs
Use analyze through the use cases above when you want a clean library API; you typically do not build `Job` objects yourself. The framework builds jobs internally (via the default runners) when needed. If you do run jobs manually, prefer `ResearchOrchestrator.run_job` over calling `JobRunner` directly unless you are replacing the port.
Custom executors and runners
By default, analyze uses runners that build a Job and call ResearchOrchestrator (which delegates to JobRunner and the built-in analysis executors). You can replace this with your own runner implementation or run analysis without the job abstraction:
- Implement the runner port for the flow you want to customize:
  - Instrument analysis: `AnalyzeInstrumentRunner` in `copinance_os.domain.ports.analysis_execution` — implement `async def run(self, request: AnalyzeInstrumentRequest) -> RunJobResult`.
  - Market analysis: `AnalyzeMarketRunner` in the same module — implement `async def run(self, request: AnalyzeMarketRequest) -> RunJobResult`.
- Override the runner in the container so the use case uses your implementation:

  ```python
  from dependency_injector import providers

  from copinance_os.domain.ports.analysis_execution import AnalyzeInstrumentRunner
  from copinance_os.domain.models.analysis import AnalyzeInstrumentRequest
  from copinance_os.domain.models.job import RunJobResult
  from copinance_os.infra.di import get_container


  class MyAnalyzeInstrumentRunner(AnalyzeInstrumentRunner):
      async def run(self, request: AnalyzeInstrumentRequest) -> RunJobResult:
          # Your executor or direct implementation; no Job involved
          return RunJobResult(success=True, results={"analysis": "..."}, error_message=None)


  container = get_container(llm_config=my_llm_config)
  container.analyze_instrument_runner.override(providers.Factory(MyAnalyzeInstrumentRunner))
  # analyze_instrument_use_case() will now use MyAnalyzeInstrumentRunner
  ```

- Job-based execution: For queues or batch work, implement `JobRunner` and inject it into a `ResearchOrchestrator` you construct yourself, or replace the orchestrator/runners on a custom container.
Imports: Market request/response DTOs also live in copinance_os.domain.models.market_requests (re-exported from research.workflows.market). Analyze requests: domain.models.analysis (re-exported from research.workflows.analyze).
Analysis Profiles
Profiles store financial literacy level and preferences; they are not user accounts. Your app owns users and auth; you map users to profile IDs if you want.
- Create a profile: use `create_profile_use_case()` with a literacy level (e.g. `beginner`, `intermediate`, `advanced`) and an optional display name.
- Attach to a job: set `job.profile_id` to the profile UUID. The runner will pass literacy and preferences into the analysis context.
- Current profile: the container has a "current profile" (used by the CLI). In a library you can ignore it and always set `profile_id` on the job, or use `set_current_profile_use_case()` and leave `profile_id` unset to use the current profile.
Storage and Persistence
The `.copinance` directory (e.g. `data/`, `equities.json`) is created by the storage layer used by repositories (instrument lists, profiles, etc.), not by the cache. The cache (when enabled) also uses the same storage path for tool and prompt cache files.
Default behavior: Storage is file-based (settings `COPINANCEOS_STORAGE_TYPE=file`, `COPINANCEOS_STORAGE_PATH=.copinance`), so the CLI and any code using the default container will create `.copinance` on disk unless overridden.
To avoid creating .copinance when integrating the library (e.g. in a web backend where you want no on-disk usage):
- In code: Pass `storage_type="memory"` to `get_container()`. Repositories then use in-memory storage and no directory is created. You can also pass `cache_enabled=False` if you want every request to use fresh data.

  ```python
  container = get_container(
      llm_config=llm_config,
      load_from_env=False,
      cache_enabled=False,
      storage_type="memory",
  )
  ```

- Via environment: Set `COPINANCEOS_STORAGE_TYPE=memory` so the container uses in-memory storage without changing code.
For a long-running app you may want file persistence: keep the default (or set storage_type="file" and optionally storage_path). In-memory storage loses data when the process exits.
The library does not impose a database. You can implement the repository and storage ports (see Developer Guide) and wire them into the container if you need PostgreSQL, MongoDB, etc.
Custom Orchestration
The default JobRunner (used inside ResearchOrchestrator.run_job) finds the executor for the job’s execution_type, builds context (e.g. profile), and runs the executor once. You can:
- Replace job orchestration: implement the `JobRunner` port (`async def run(job, context) -> RunJobResult`) and construct a `ResearchOrchestrator` (or custom façade) with it—there is no `container.job_runner()` accessor on the stock `Container`.
- Use analysis executors yourself: resolve `container.analysis_executors()` and call `validate(job)` / `execute(job, context)` on the executor that validates.
See Architecture and Extending for ports and extension points.
Error Handling
- `RunJobResult.success == True`: Inspect `result.results` and the optional `result.report` (`AnalysisReport`) for the structured envelope. If `result.report` is `None` but you expected one, check `result.report_exclusion_reason` (e.g. unknown executor type without a registered report builder).
- `RunJobResult.success == False`: Check `result.error_message`. The analysis or an underlying service failed.
- `ExecutorNotFoundError`: No executor is registered for the job's execution type. Ensure the container includes the right executors (e.g. LLM config for question-driven analysis).
- `DomainError`: Base type for domain-level errors (e.g. validation, business-rule violations).
Catch these in your app and map them to your HTTP codes or user messages as needed.
Complete Example
```python
import asyncio

from copinance_os.ai.llm.config import LLMConfig
from copinance_os.infra.di import get_container
from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType


async def main():
    llm_config = LLMConfig(
        provider="gemini",
        api_key="your-api-key",
        model="gemini-1.5-pro",
    )
    container = get_container(
        llm_config=llm_config,
        fred_api_key="your-fred-key",  # optional
    )
    orchestrator = container.research_orchestrator()

    # Equity analysis
    job = Job(
        scope=JobScope.INSTRUMENT,
        market_type=MarketType.EQUITY,
        instrument_symbol="AAPL",
        timeframe=JobTimeframe.MID_TERM,
        execution_type="deterministic_instrument_analysis",
    )
    result = await orchestrator.run_job(job, {})
    if result.success:
        print("Equity result keys:", result.results.keys())
    else:
        print("Error:", result.error_message)

    # Optional: use cases for finer control
    from copinance_os.research.workflows.market import SearchInstrumentsRequest

    search_uc = container.search_instruments_use_case()
    search_res = await search_uc.execute(SearchInstrumentsRequest(query="Tesla", limit=5))
    # Use search_res.instruments...


asyncio.run(main())
```

Next Steps
- Configuration — LLM and FRED setup, env vars, security.
- User Guide — CLI — Full Typer reference, including `--json` for machine-readable output.
- API Reference — Data provider interfaces and market point coercion helpers.
- Architecture — Hexagonal layout, ports, container, and annotated package tree.
- Extending — Custom data providers, executors, and tests.