
Using Copinance OS as a Library

This guide is for developers who want to integrate Copinance OS into their own Python application (web app, script, or service). The project is a pure Python library: no built-in HTTP API or frontend. You get analysis execution, use cases, and data providers; you choose how to expose them.

Multi-turn, question-driven conversations (prior user/assistant turns passed via conversation_history) are library-only: the copinance CLI never loads a transcript, neither for copinance analyze … nor for the natural-language root entry copinance "…" (each of those accepts one question per run).

Requirements

  • Python 3.11+
  • For question-driven (AI) analysis: an LLM provider (Gemini, OpenAI Chat Completions, or local Ollama)
  • For macro analysis (optional): FRED API key for higher-quality economic data
  • For SEC filing tools in question-driven analysis (optional): set EDGAR_IDENTITY or COPINANCEOS_EDGAR_IDENTITY (name + email, SEC requirement); see Configuration — SEC EDGAR

Installation

Install Copinance OS as a dependency in your project.

From source:

pip install -e /path/to/copinance-os
# or from the repo root:
pip install -e .

Optional: local LLM (Ollama):

pip install -e ".[ollama]"

Depend on a path or git URL in your requirements.txt or pyproject.toml:

# requirements.txt (git URL)
git+https://github.com/copinance/copinance-os.git

# pyproject.toml (editable local path)
[tool.uv.sources]
copinance-os = { path = "../copinance-os", editable = true }

Core Concepts

| Concept | Description |
| --- | --- |
| `Container` | Dependency-injection container. You get it via `get_container(...)` and use it to resolve use cases, `ResearchOrchestrator`, and data providers. |
| `Job` | A single run request: scope (instrument vs market), symbol/index, timeframe, and `execution_type`. Not persisted. |
| `ResearchOrchestrator` | Preferred entry for running a `Job`: wraps `JobRunner` and keeps orchestration consistent with analyze use cases. |
| `JobRunner` | Port used under the hood to dispatch jobs to executors; override for queues, retries, or custom routing. |
| Analysis modes | Deterministic (instrument or market) and question-driven. The job’s internal type determines which pipeline runs. |
| Use cases | Fine-grained operations (search instruments, get quote, create profile, etc.). Use them when you need more control than a full analysis run. |

Quick Start: Run an Analysis

  1. Configure the container — pass LLMConfig only if you need question-driven analysis; for deterministic instrument/market runs you can use get_container(llm_config=None, load_from_env=False) so no LLM is wired.
  2. Get ResearchOrchestrator from the container (recommended entry point for jobs).
  3. Build a Job and run it with await orchestrator.run_job(job, {}).
import asyncio
from copinance_os.ai.llm.config import LLMConfig
from copinance_os.infra.di import get_container
from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType
 
async def main():
    # Deterministic-only: no LLM (question-driven routes will not be available)
    # container = get_container(llm_config=None, load_from_env=False)
 
    # Question-driven-capable: supply Gemini, OpenAI, or Ollama (see LLMConfig section)
    llm_config = LLMConfig(
        provider="gemini",
        api_key="your-gemini-api-key",
        model="gemini-1.5-pro",
    )
    container = get_container(llm_config=llm_config)
 
    orchestrator = container.research_orchestrator()
 
    job = Job(
        scope=JobScope.INSTRUMENT,
        market_type=MarketType.EQUITY,
        instrument_symbol="AAPL",
        timeframe=JobTimeframe.MID_TERM,
        execution_type="deterministic_instrument_analysis",
    )
    result = await orchestrator.run_job(job, {})
 
    if result.success:
        print(result.results)
        if result.report:
            print(result.report.summary)
    else:
        print("Error:", result.error_message)
 
asyncio.run(main())

Configuration

Container: get_container()

Get the global container with optional overrides:

from copinance_os.infra.di import get_container
from copinance_os.ai.llm.config import LLMConfig
 
container = get_container(
    llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro"),
    fred_api_key="your-fred-api-key",  # optional; for macro data
    load_from_env=True,                # default: try env for LLM if llm_config is None
    prompt_templates=None,             # optional; see Prompt templates below
    prompt_manager=None,               # optional; custom PromptManager instance
    cache_enabled=None,                # optional; True/False or None (use settings)
    cache_manager=None,                # optional; use your own CacheManager
    storage_type=None,                 # optional; "file" | "memory" to override settings
    storage_path=None,                 # optional; root path for file storage
)
  • Library usage: Pass llm_config explicitly. Environment variables are used by the CLI; for your app you should provide config in code (or your own config layer).
  • fred_api_key: Optional. If provided, macro analysis can use FRED for better economic data.
  • load_from_env: If True and llm_config is None, the container will try to load LLM settings from env (e.g. COPINANCEOS_GEMINI_API_KEY). Prefer passing llm_config in library code.
  • prompt_templates / prompt_manager: Optional. See Prompt templates below.
  • cache_enabled / cache_manager: Optional. See Cache below.
  • storage_type / storage_path: Optional. See Storage and Persistence below. Use storage_type="memory" to avoid creating a .copinance directory on disk.

Performance: The container is created only on first use (lazy proxy), and use cases and providers are singletons created when first requested. See Architecture — Container and performance for details.

LLMConfig (required for question-driven analysis)

For question-driven analysis you must supply LLMConfig when creating the container.

Implemented providers: LLMProviderFactory supports gemini, ollama, and openai (the OpenAI adapter uses the openai Python package, which is a core dependency of this project). Unknown provider names raise ValueError (there is no anthropic adapter in-tree yet).

from copinance_os.ai.llm.config import LLMConfig
 
llm_config = LLMConfig(
    provider="gemini",            # "gemini" | "ollama" | "openai"
    api_key="your-api-key",       # required for Gemini and OpenAI; optional for Ollama
    model="gemini-1.5-pro",       # optional; provider default if omitted
    temperature=0.7,
    max_tokens=4096,
    base_url=None,                # Ollama base URL, or optional OpenAI-compatible API base
    text_streaming_mode="auto",   # "auto" | "native" | "buffered" — see LLM text streaming below
    execution_type_providers={},  # e.g. {"question_driven_analysis": "gemini"}
    provider_config={},           # e.g. {"disable_native_text_stream": True}
)

See Configuration for provider-specific details and environment variables used by the CLI.

LLM text streaming (library / LLMProvider)

For plain text generation (no tool-calling loop), each backend implements LLMProvider.generate_text_stream, which yields structured LLMTextStreamEvent objects (kind: text_delta, done, or error). Import from copinance_os.ai.llm or copinance_os.ai.llm.streaming.

| `stream_mode` / `text_streaming_mode` | Behavior |
| --- | --- |
| `auto` (default) | Use native HTTP/API streaming when the provider supports it (Gemini, OpenAI, and Ollama do); if native streaming fails, fall back to a single `generate_text` call and emit one delta. |
| `native` | Require native streaming; raises if disabled or unsupported, or emits an `error` event if the stream fails after starting. |
| `buffered` | Always call `generate_text` once, then emit one `text_delta` and `done` (works for any model; `native_streaming=False` on events). |
  • Default mode on the provider comes from LLMConfig.text_streaming_mode. You can override per call: await provider.generate_text_stream("…", stream_mode="buffered").
  • Disable native streaming (e.g. model or proxy quirks): set provider_config={"disable_native_text_stream": True} in LLMConfig, or construct GeminiProvider / OpenAIProvider / OllamaProvider with disable_native_text_stream=True. In auto mode, the provider then uses the buffered path only.
  • Question-driven analysis uses generate_with_tools with optional stream=True and on_stream_event (the CLI sets stream via job context and uses a stdout handler). Results include analysis_streamed when streaming was enabled. You can still call generate_text_stream on the LLMProvider directly for plain-text-only use cases.
from copinance_os.ai.llm.providers.factory import LLMProviderFactory
from copinance_os.ai.llm.config import LLMConfig
 
provider = LLMProviderFactory.create_provider(
    "gemini",
    LLMConfig(provider="gemini", api_key="...", text_streaming_mode="auto"),
)
async for event in provider.generate_text_stream("Briefly explain beta."):
    if event.kind == "text_delta":
        print(event.text_delta, end="", flush=True)
    elif event.kind == "error":
        print("Error:", event.error_message)

Cache

The built-in cache stores tool results (e.g. quotes, fundamentals), EDGAR/edgartools responses (filing lists and filing content), and rendered agent prompts under the configured storage path. Library users can disable it or supply their own cache.

  • Default: Cache is enabled (see also COPINANCEOS_CACHE_ENABLED in Configuration).
  • Disable cache: Pass cache_enabled=False to get_container(). No tool or prompt caching is used; every request hits the providers.
  • Use your own cache: Pass cache_manager=my_cache_manager to get_container(). Your instance must implement the same interface as CacheManager (from copinance_os.data.cache). When provided, cache_enabled is ignored.
from copinance_os.infra.di import get_container
 
# Disable cache (e.g. for real-time data only)
container = get_container(llm_config=llm_config, cache_enabled=False)
 
# Or inject your own CacheManager (e.g. Redis-backed)
container = get_container(llm_config=llm_config, cache_manager=my_cache_manager)

Prompt templates

Question-driven (and other LLM) analysis uses prompt templates to build system and user prompts. By default, Copinance OS uses built-in templates from the package. As a library user you can inject your own so that your app controls wording, tone, and placeholders.

  • If you pass nothing: The default PromptManager is used and all prompts come from package defaults.
  • If you pass prompt_templates: A dict overlay is used. Keys are prompt names (e.g. analyze_question_driven), values follow the template format below. Only the names you provide are overridden; any other name falls back to the built-in default.
  • If you pass prompt_manager: Your own PromptManager instance is used for all prompt resolution (e.g. PromptManager(templates=...) or PromptManager(resources_dir=Path("my_prompts"))).

Prompt template format

Each template is an object with exactly two string fields:

| Field | Type | Description |
| --- | --- | --- |
| `system_prompt` | string | Instructions and context for the LLM (role, style, constraints). |
| `user_prompt` | string | The task or query sent as the user message; can include tool descriptions and format rules. |

Variable substitution: Use Python-style placeholders {variable_name} in either string. When the template is rendered, every placeholder is replaced by the value passed for that name. To include a literal { or } in the output (e.g. JSON examples in the prompt), escape as {{ and }}.
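
The escaping rule is standard Python str.format behavior; a quick pure-Python illustration (independent of any copinance-os API):

```python
# Template strings use str.format-style placeholders;
# literal braces (e.g. JSON examples) must be doubled.
template = 'Respond as JSON, e.g. {{"symbol": "AAPL"}}. Question: {question}'
rendered = template.format(question="What are the key risks?")
print(rendered)
# → Respond as JSON, e.g. {"symbol": "AAPL"}. Question: What are the key risks?
```

An unescaped `{` in a template would instead be treated as the start of a placeholder and raise a KeyError or ValueError at render time.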

In code (dict overlay):

{
    "system_prompt": "You are a financial assistant. User level: {financial_literacy}.",
    "user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}",
}

In a file (resources_dir): One JSON file per prompt name, e.g. analyze_question_driven.json:

{
  "system_prompt": "You are a financial assistant. User level: {financial_literacy}.",
  "user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}"
}

Question-driven prompt variables: The built-in question-driven analyze flow uses the prompt name analyze_question_driven and supplies these variables when rendering; your custom template must use the same names if you override it:

| Variable | Description |
| --- | --- |
| `question` | The user’s question for this call (may be prefixed with symbol context by the executor). Prior Q&A for multi-turn sessions are not pasted into this template; they are supplied separately via `prior_conversation` to each LLM provider. |
| `tools_description` | Text describing available tools. |
| `tool_examples` | Example tool-call snippets. |
| `financial_literacy` | `"beginner"`, `"intermediate"`, or `"advanced"`. |
| `current_date` | UTC calendar date as `YYYY-MM-DD` (for relative ranges in prompts). |
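
If you render these templates outside the built-in flow, a value matching the current_date format above can be produced with the stdlib (a sketch; the library computes this internally):

```python
from datetime import datetime, timezone

# UTC calendar date as YYYY-MM-DD, matching the current_date variable above
current_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
print(current_date)
```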

You can import the constant for the prompt name: ANALYZE_QUESTION_DRIVEN_PROMPT_NAME from copinance_os.ai.llm.resources.

Example: overlay a single template (question-driven analysis)

from copinance_os.infra.di import get_container
from copinance_os.ai.llm.config import LLMConfig
from copinance_os.ai.llm.resources import ANALYZE_QUESTION_DRIVEN_PROMPT_NAME
 
prompt_templates = {
    ANALYZE_QUESTION_DRIVEN_PROMPT_NAME: {
        "system_prompt": "You are a concise financial assistant. User level: {financial_literacy}.",
        "user_prompt": "Task: {question}\n\nTools:\n{tools_description}\n\nExamples:\n{tool_examples}\n\nRespond with a JSON tool call only.",
    },
}
 
container = get_container(
    llm_config=LLMConfig(provider="gemini", api_key="...", model="gemini-1.5-pro"),
    prompt_templates=prompt_templates,
)

Example: full control with PromptManager

from pathlib import Path
from copinance_os.infra.di import get_container
from copinance_os.ai.llm.resources import PromptManager
 
# Load custom prompts from a directory (JSON files: analyze_question_driven.json, etc.)
pm = PromptManager(resources_dir=Path("config/prompts"), use_package_data=True)
 
container = get_container(
    llm_config=my_llm_config,
    prompt_manager=pm,
)

Running analysis (ResearchOrchestrator)

Job model

Build a Job with:

| Field | Description |
| --- | --- |
| `scope` | `JobScope.INSTRUMENT` or `JobScope.MARKET` |
| `market_type` | `MarketType.EQUITY` or `MarketType.OPTIONS` (for instrument scope) |
| `instrument_symbol` | e.g. `"AAPL"` (required when `scope=INSTRUMENT`) |
| `market_index` | e.g. `"SPY"` (for `scope=MARKET`; default `"SPY"`) |
| `timeframe` | `JobTimeframe.SHORT_TERM`, `MID_TERM`, or `LONG_TERM` |
| `execution_type` | `"deterministic_instrument_analysis"`, `"deterministic_market_analysis"`, `"question_driven_instrument_analysis"`, or `"question_driven_market_analysis"` |
| `profile_id` | Optional UUID; sets financial literacy and preferences for the run |
| `parameters` | Optional dict (reserved; built-in executors use the context dict passed to `run()` instead) |

Analysis modes and context

Built-in executors read mode-specific options from the context dict (second argument to research_orchestrator.run_job(job, context)), not from job.parameters:

  • deterministic_instrument_analysis — Instrument scope, deterministic equity/options analysis. Context may include expiration_date (single expiry), expiration_dates (list of YYYY-MM-DD strings, merged with expiration_date when both are set), and option_side for options.
  • deterministic_market_analysis — Market scope, deterministic macro regime dashboard. Context includes market_index, lookback_days, and include_* booleans.
  • question_driven_instrument_analysis — Instrument scope, tool-using analysis; requires LLMConfig. Context includes question, optional conversation_history (list of {"role","content"} dicts: alternating user/assistant, ending with assistant), optional options context, stream, optional run_id (correlation for logs and UI), optional progress_sink (see Gateway, chat UI, and agent progress), optional include_agent_progress_timeline (default: true — adds results["agent_progress_timeline"] for REST/SSE summaries; see developer guide), and include_prompt.
  • question_driven_market_analysis — Market scope, tool-using analysis; requires LLMConfig. Context includes question, optional conversation_history (same shape), market_index, stream, optional run_id, optional progress_sink, the same optional include_agent_progress_timeline, and include_prompt.

Example: deterministic and question-driven jobs

from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType
 
# Equity (no extra context)
job_equity = Job(
    scope=JobScope.INSTRUMENT,
    market_type=MarketType.EQUITY,
    instrument_symbol="AAPL",
    timeframe=JobTimeframe.MID_TERM,
    execution_type="deterministic_instrument_analysis",
)
 
# Options — pass expiration and side in context
job_options = Job(
    scope=JobScope.INSTRUMENT,
    market_type=MarketType.OPTIONS,
    instrument_symbol="AAPL",
    timeframe=JobTimeframe.MID_TERM,
    execution_type="deterministic_instrument_analysis",
)
context_options = {"expiration_date": "2026-06-19", "option_side": "all"}
# Or multiple expiries (deterministic options analysis fetches each; results may include `multi_expiration`)
# context_options = {"expiration_dates": ["2026-06-19", "2026-09-18"], "option_side": "all"}
 
# Macro — pass market_index and lookback in context
job_macro = Job(
    scope=JobScope.MARKET,
    market_index="SPY",
    timeframe=JobTimeframe.MID_TERM,
    execution_type="deterministic_market_analysis",
)
context_macro = {"market_index": "SPY", "lookback_days": 180}
 
# Question-driven analyze — pass question in context (required)
job_agent = Job(
    scope=JobScope.INSTRUMENT,
    market_type=MarketType.EQUITY,
    instrument_symbol="AAPL",
    timeframe=JobTimeframe.MID_TERM,
    execution_type="question_driven_instrument_analysis",
)
context_agent = {"question": "What are the key risks?"}
 
# Optional follow-up: pass prior turns as dicts (see multi-turn section below)
# context_followup = {"question": "What mitigations does the 10-K mention?", "conversation_history": [...]}

Run and read result

orchestrator = container.research_orchestrator()
# Pass context for options, macro, or question-driven analysis; use {} for equity
result = await orchestrator.run_job(job_equity, {})
# result = await orchestrator.run_job(job_options, context_options)
# result = await orchestrator.run_job(job_agent, context_agent)
 
# result is RunJobResult
# result.success: bool
# result.results: dict | None  — analysis output (question-driven success may include "conversation_turns" for multi-turn chaining)
# result.error_message: str | None
# result.report: AnalysisReport | None  — summary, key_metrics, methodology, assumptions, limitations
# result.report_exclusion_reason: ReportExclusionReason | None  — set if no report envelope exists for the executor type

All analysis execution is async; use asyncio.run() or run inside an async framework.


Gateway, chat UI, and agent progress

Copinance OS is a library, not an HTTP server. If you expose analysis to multiple users (e.g. a chat UI), put authentication, tenancy, rate limits, and LLM API keys in your own gateway or BFF—not inside domain/.

| Concern | Recommended approach |
| --- | --- |
| Identity | Validate JWT/OAuth/session in your gateway; map users to `profile_id` (or pass `None`). |
| Isolation | Scope storage/cache keys per tenant or user; do not share mutable request state across users. |
| LLM keys | Configure `LLMConfig` / env per deployment or tenant; never embed keys in client payloads. |
| Rate limiting | Enforce before `ResearchOrchestrator.run_job`. |
| Transport | Use your framework’s HTTP/SSE/WebSocket; serialize progress from a `ProgressSink` (see `copinance_os.domain.models.agent_progress`). |
| Cancellation | Cancel the asyncio task when the client disconnects. |
| Secrets | Do not log full tool args, prompts, or PII; progress events are summarised but still sensitive. |

Correlation: pass optional run_id on AnalyzeInstrumentRequest / AnalyzeMarketRequest, or in run_job(..., context), so logs and UI match your chat session. When run_id is set, DefaultJobRunner binds run_id and job_execution_type into structlog context for the run.

Structured progress: implement ProgressSink (copinance_os.domain.ports.progress) and pass progress_sink in the job context (same place as stream for token streaming). Example queue-backed sink for bridging to your transport:

import asyncio
 
from copinance_os.domain.models.agent_progress import AgentProgressEvent
 
 
class QueueProgressSink:
    def __init__(self, queue: asyncio.Queue[AgentProgressEvent]) -> None:
        self._queue = queue
 
    async def emit(self, event: AgentProgressEvent) -> None:
        await self._queue.put(event)

Wire QueueProgressSink() as context["progress_sink"] with stream=True for question-driven runs. Use LLMProvider via the container—do not import vendor LLM SDKs from application/gateway code.
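
Bridging that sink to a response stream is plain asyncio. In this runnable sketch, copinance-os types are replaced by stand-ins (StubEvent for AgentProgressEvent, fake_run for the orchestrator call) so only the queue-draining pattern is shown:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class StubEvent:  # stand-in for AgentProgressEvent
    kind: str


class QueueProgressSink:
    def __init__(self, queue: asyncio.Queue) -> None:
        self._queue = queue

    async def emit(self, event: StubEvent) -> None:
        await self._queue.put(event)


async def fake_run(sink: QueueProgressSink) -> str:
    # Stand-in for the real call, roughly:
    # await orchestrator.run_job(job, {"question": ..., "stream": True, "progress_sink": sink})
    for kind in ("tool_call", "tool_result", "answer"):
        await sink.emit(StubEvent(kind))
    return "done"


async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    sink = QueueProgressSink(queue)
    job = asyncio.create_task(fake_run(sink))
    seen: list[str] = []
    while True:
        # Stop once the run has finished and the queue is drained
        if job.done() and queue.empty():
            break
        try:
            event = await asyncio.wait_for(queue.get(), timeout=0.1)
        except asyncio.TimeoutError:
            continue
        seen.append(event.kind)  # forward to your SSE/WebSocket here
    await job
    return seen


events = asyncio.run(main())
print(events)  # ['tool_call', 'tool_result', 'answer']
```

In a real gateway, the consumer loop would live in your SSE/WebSocket handler and the task cancellation path (client disconnect) would cancel `job`.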

A fuller client checklist (event kinds, rollback, versioning, cancellation) is in the developer guide: Agent progress & chat integration (clients).


Using Use Cases Directly

When you need a single operation (e.g. search, quote, profile) instead of a full analysis run, use the use cases from the container. Each is a factory: call container.<use_case>() to get an instance, then await use_case.execute(request).

Available use cases

| Use case | Method | Purpose |
| --- | --- | --- |
| Search instruments | `search_instruments_use_case()` | Search by name or symbol |
| Get instrument | `get_instrument_use_case()` | Get cached instrument by symbol |
| Get quote | `get_quote_use_case()` | Current market quote for a symbol |
| Get historical data | `get_historical_data_use_case()` | OHLCV history for symbol and date range |
| Get options chain | `get_options_chain_use_case()` | Options chain for an underlying symbol |
| Get fundamentals | `get_stock_fundamentals_use_case()` | Fundamentals for a symbol (financials, ratios) |
| Analyze instrument | `analyze_instrument_use_case()` | Progressive instrument analysis (deterministic or question-driven) |
| Analyze market | `analyze_market_use_case()` | Progressive market analysis (deterministic or question-driven) |
| Create profile | `create_profile_use_case()` | Create analysis profile |
| Get / list profiles | `get_profile_use_case()`, `list_profiles_use_case()` | Read profiles |
| Current profile | `get_current_profile_use_case()`, `set_current_profile_use_case()` | Get/set current profile |
| Delete profile | `delete_profile_use_case()` | Delete a profile |

Example: search, quote, historical data, options, fundamentals

from datetime import datetime
 
from copinance_os.infra.di import get_container
from copinance_os.research.workflows.market import (
    SearchInstrumentsRequest,
    GetQuoteRequest,
    GetHistoricalDataRequest,
    GetOptionsChainRequest,
)
from copinance_os.research.workflows.fundamentals import GetStockFundamentalsRequest
 
container = get_container(llm_config=my_llm_config)
 
# Search by name or symbol
search_uc = container.search_instruments_use_case()
search_res = await search_uc.execute(SearchInstrumentsRequest(query="Apple", limit=5))
for s in search_res.instruments:
    print(s.symbol, s.name)
 
# Current quote
quote_uc = container.get_quote_use_case()
quote_res = await quote_uc.execute(GetQuoteRequest(symbol="AAPL"))
# quote_res.quote: dict with current_price, volume, etc.
 
# Historical OHLCV
hist_uc = container.get_historical_data_use_case()
hist_res = await hist_uc.execute(
    GetHistoricalDataRequest(
        symbol="AAPL",
        start_date=datetime(2024, 1, 1),
        end_date=datetime(2024, 12, 31),
        interval="1d",
    )
)
# hist_res.data: list of MarketDataPoint
 
# Options chain
options_uc = container.get_options_chain_use_case()
options_res = await options_uc.execute(
    GetOptionsChainRequest(underlying_symbol="AAPL", expiration_date=None)
)
# options_res.chain: OptionsChain (calls, puts, underlying_price, etc.)
 
# Fundamentals (financial statements, ratios)
fundamentals_uc = container.get_stock_fundamentals_use_case()
fund_res = await fundamentals_uc.execute(
    GetStockFundamentalsRequest(symbol="AAPL", periods=5, period_type="annual")
)
# fund_res.fundamentals: StockFundamentals (income_statements, balance_sheets, ratios, etc.)

Example: progressive analyze

from copinance_os.research.workflows.analyze import (
    AnalyzeInstrumentRequest,
    AnalyzeMarketRequest,
)
from copinance_os.domain.models.market import MarketType
 
# Deterministic equity analysis
instrument_uc = container.analyze_instrument_use_case()
equity_res = await instrument_uc.execute(
    AnalyzeInstrumentRequest(symbol="AAPL")
)
 
# Question-driven options analysis
options_res = await instrument_uc.execute(
    AnalyzeInstrumentRequest(
        symbol="AAPL",
        market_type=MarketType.OPTIONS,
        expiration_date="2026-06-19",
        question="What does skew imply about sentiment?",
    )
    # Multiple expiries: use expiration_dates (optional expiration_date is merged)
    # AnalyzeInstrumentRequest(
    #     symbol="AAPL",
    #     market_type=MarketType.OPTIONS,
    #     expiration_dates=["2026-06-19", "2026-09-18"],
    #     question="How does IV compare across these expiries?",
    # )
)
 
# Deterministic macro / market regime analysis
market_uc = container.analyze_market_use_case()
macro_res = await market_uc.execute(
    AnalyzeMarketRequest(market_index="SPY", lookback_days=90)
)
 
# Question-driven market analysis
market_question_res = await market_uc.execute(
    AnalyzeMarketRequest(
        market_index="SPY",
        question="Is this a risk-on or risk-off regime?",
    )
)

Multi-turn question-driven analysis

This section is Python-only; the CLI cannot supply conversation_history (see the note at the top of this page). Use this when the user’s new question should see prior user/assistant messages from an earlier run (same symbol / market scope). Each LLM provider maps history to its native chat API (Gemini Content list + systemInstruction; OpenAI Chat Completions messages; Ollama /api/chat messages).

  • Set conversation_history to a list of LLMConversationTurn that alternates user → assistant → … and ends with assistant.
  • Set question to the new user message only (do not duplicate the latest user turn inside conversation_history).
  • After a successful question-driven run, results["conversation_turns"] may contain the full transcript (including the latest assistant reply) as serializable dicts — reuse them to build the next request’s conversation_history.

conversation_history is invalid in mode=deterministic.
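
The alternation rule can be checked before building a request. This validator is a hypothetical helper (not part of copinance-os), operating on the {"role": ..., "content": ...} dict shape that conversation_history accepts:

```python
def is_valid_history(turns: list[dict]) -> bool:
    """True if turns alternate user -> assistant -> ... and end with assistant.

    Hypothetical helper for illustration; assumes the history starts
    with a user turn, as described above.
    """
    if not turns or len(turns) % 2 != 0:
        return False
    for i, turn in enumerate(turns):
        expected = "user" if i % 2 == 0 else "assistant"
        if turn.get("role") != expected:
            return False
    return True


history = [
    {"role": "user", "content": "What was the most recent closing price?"},
    {"role": "assistant", "content": "AAPL closed at ..."},
]
print(is_valid_history(history))      # True
print(is_valid_history(history[:1]))  # False: ends with a user turn
```

Remember that the new question goes in `question`, not appended to the history, so a valid history always ends on the assistant side.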

import asyncio
 
from copinance_os.domain.models.llm_conversation import LLMConversationTurn
from copinance_os.infra.di import get_container
from copinance_os.research.workflows.analyze import (
    AnalyzeInstrumentRequest,
    AnalyzeMode,
)
 
async def follow_up_example() -> None:
    container = get_container(load_from_env=True)
    uc = container.analyze_instrument_use_case()
 
    first = await uc.execute(
        AnalyzeInstrumentRequest(
            symbol="AAPL",
            question="What was the most recent closing price?",
            mode=AnalyzeMode.QUESTION_DRIVEN,
        )
    )
    if not first.success or not first.results:
        return
 
    turns = first.results.get("conversation_turns") or []
    history = [LLMConversationTurn.model_validate(t) for t in turns]
 
    second = await uc.execute(
        AnalyzeInstrumentRequest(
            symbol="AAPL",
            question="How does that compare to the 52-week high?",
            mode=AnalyzeMode.QUESTION_DRIVEN,
            conversation_history=history,
        )
    )
    _ = second  # use second.results, second.report, etc.
 
asyncio.run(follow_up_example())

Library API Reference

Use this section as a single reference for every option available when using Copinance OS as a library. All request/response types and container entry points are listed with correct module paths. Python examples on this page use the imports and types named in the tables in this section; they have been checked against the current package layout—if a snippet fails, compare your version to the cited modules (domain.models.*, research.workflows.*, infra.di, ai.llm).

Container entry points

Get the container with get_container(...) from copinance_os.infra.di. You can pass prompt_templates or prompt_manager for custom prompts (see Prompt templates), cache_enabled or cache_manager for cache control (see Cache), and storage_type or storage_path to avoid or customize disk usage (see Storage and Persistence). Then call these methods to obtain use cases, ResearchOrchestrator, analyze runners, and (advanced) analysis_executors:

| Entry point | Method | Returns | Purpose |
| --- | --- | --- | --- |
| Market | `container.search_instruments_use_case()` | `SearchInstrumentsUseCase` | Search by name or symbol |
| | `container.get_instrument_use_case()` | `GetInstrumentUseCase` | Get cached instrument by symbol |
| | `container.get_quote_use_case()` | `GetQuoteUseCase` | Current quote for a symbol |
| | `container.get_historical_data_use_case()` | `GetHistoricalDataUseCase` | OHLCV history for symbol/date range |
| | `container.get_options_chain_use_case()` | `GetOptionsChainUseCase` | Options chain for underlying |
| | `container.get_stock_fundamentals_use_case()` | `GetStockFundamentalsUseCase` | Fundamentals for a symbol |
| Analyze | `container.analyze_instrument_use_case()` | `AnalyzeInstrumentUseCase` | Progressive instrument analysis |
| | `container.analyze_market_use_case()` | `AnalyzeMarketUseCase` | Progressive market analysis |
| Runners (override these) | `container.analyze_instrument_runner()` | `AnalyzeInstrumentRunner` impl | Default: builds `Job`, calls `ResearchOrchestrator.run_job` |
| | `container.analyze_market_runner()` | `AnalyzeMarketRunner` impl | Default: builds `Job`, calls `ResearchOrchestrator.run_job` |
| Orchestration | `container.research_orchestrator()` | `ResearchOrchestrator` | Preferred: `run_job(job, context)` (wraps `JobRunner` internally) |
| | `container.analysis_executors()` | `list[AnalysisExecutor]` | Registered analysis executors (advanced: validate/execute yourself) |
| Profiles | `container.create_profile_use_case()` | `CreateProfileUseCase` | Create analysis profile |
| | `container.get_profile_use_case()` | `GetProfileUseCase` | Get profile by ID |
| | `container.list_profiles_use_case()` | `ListProfilesUseCase` | List profiles (paginated) |
| | `container.get_current_profile_use_case()` | `GetCurrentProfileUseCase` | Get current profile |
| | `container.set_current_profile_use_case()` | `SetCurrentProfileUseCase` | Set current profile |
| | `container.delete_profile_use_case()` | `DeleteProfileUseCase` | Delete profile |
| Infrastructure | `container.market_data_provider()` | `MarketDataProvider` | Market data (quote, history, search, options) |
| | `container.cache_manager()` | `CacheManager` | Cache for tool/CLI data |

The JobRunner port is not exposed as a container method; customize orchestration via ResearchOrchestrator, executor lists, or a custom Container (see Extending).

Imports: Container from copinance_os.infra.di; request/response types from the modules below.

Market use cases — copinance_os.research.workflows.market

DTOs for quote/history/options/instrument are defined in copinance_os.domain.models.market_requests and re-exported from this workflow module for convenience.

Use caseRequestResponseRequest fields (all optional except noted)
Search instrumentsSearchInstrumentsRequestSearchInstrumentsResponsequery (required), limit (default 10, 1–100), search_mode (InstrumentSearchMode: auto / symbol / general)
Get instrumentGetInstrumentRequestGetInstrumentResponsesymbol (required). Response: instrument: Stock | None
Get quoteGetQuoteRequestGetQuoteResponsesymbol (required). Response: quote: dict, symbol: str
Get historical dataGetHistoricalDataRequestGetHistoricalDataResponsesymbol, start_date, end_date (required; datetime), interval (default "1d"). Response: data: list[MarketDataPoint], symbol
Get options chainGetOptionsChainRequestGetOptionsChainResponseunderlying_symbol (required), expiration_date: str | None (YYYY-MM-DD). Response: chain: OptionsChain, underlying_symbol. With the default container, each OptionContract may include greeks (BSM estimates via QuantLib) when spot and implied vol are available; see Options & Greeks.

Enum: InstrumentSearchMode in the same module (auto, symbol, general).

Analyze use cases — copinance_os.research.workflows.analyze

Request and mode types are defined in copinance_os.domain.models.analysis (AnalyzeInstrumentRequest, AnalyzeMarketRequest, AnalyzeMode, routing helpers). research.workflows.analyze re-exports them next to AnalyzeInstrumentUseCase / AnalyzeMarketUseCase so a single import path still works.

| Use case | Request | Response | Request fields |
| --- | --- | --- | --- |
| Analyze instrument | `AnalyzeInstrumentRequest` | `RunJobResult` | `symbol` (required), `market_type` (`MarketType.EQUITY` / `OPTIONS`, default `EQUITY`), `timeframe` (optional; defaults by market type), `question`, `mode` (`auto` / `deterministic` / `question_driven`), `conversation_history` (optional; multi-turn question-driven only), `expiration_date` (optional YYYY-MM-DD), `expiration_dates` (optional list of YYYY-MM-DD; merged with `expiration_date`), `option_side`, `profile_id`, `include_prompt_in_results`, `stream` |
| Analyze market | `AnalyzeMarketRequest` | `RunJobResult` | `market_index` (default `"SPY"`), `timeframe` (default `MID_TERM`), `question`, `mode` (`auto` / `deterministic` / `question_driven`), `conversation_history` (optional; multi-turn question-driven only), `lookback_days` (default 252, 1–2520), `include_vix`, `include_market_breadth`, `include_sector_rotation`, `include_rates`, `include_credit`, `include_commodities`, `include_labor`, `include_housing`, `include_manufacturing`, `include_consumer`, `include_global`, `include_advanced`, `profile_id`, `include_prompt_in_results`, `stream` |

Ports for custom runners: AnalyzeInstrumentRunner and AnalyzeMarketRunner in copinance_os.domain.ports.analysis_execution — each: async def run(self, request: ...Request) -> RunJobResult.

Profile use cases — copinance_os.research.workflows.profile

| Use case | Request | Response | Request fields |
| --- | --- | --- | --- |
| Create profile | `CreateProfileRequest` | `CreateProfileResponse` | `financial_literacy` (`FinancialLiteracy.BEGINNER` / `INTERMEDIATE` / `ADVANCED`), `display_name`, `preferences: dict[str, str]` |
| Get profile | `GetProfileRequest` | `GetProfileResponse` | `profile_id: UUID` |
| List profiles | `ListProfilesRequest` | `ListProfilesResponse` | `limit` (default 100), `offset` (default 0) |
| Get current profile | `GetCurrentProfileRequest` | `GetCurrentProfileResponse` | (no fields) |
| Set current profile | `SetCurrentProfileRequest` | `SetCurrentProfileResponse` | `profile_id: UUID \| None` (`None` clears current) |
| Delete profile | `DeleteProfileRequest` | `DeleteProfileResponse` | `profile_id: UUID` |

Type: FinancialLiteracy from copinance_os.domain.models.profile.

Fundamentals use case — copinance_os.research.workflows.fundamentals

| Use case | Request | Response | Request fields |
| --- | --- | --- | --- |
| Get fundamentals | `GetStockFundamentalsRequest` | `GetStockFundamentalsResponse` | `symbol` (required), `periods` (default 5), `period_type` (`"annual"` or `"quarterly"`, default `"annual"`). Response: `fundamentals: StockFundamentals` |

Domain types (jobs and results)

| Type | Module | Use |
| --- | --- | --- |
| `Job` | `copinance_os.domain.models.job` | Analysis execution context: `scope`, `market_type`, `instrument_symbol`, `market_index`, `timeframe`, `execution_type`, `profile_id` |
| `RunJobResult` | `copinance_os.domain.models.job` | `success`, `results`, `error_message`, `report` (`AnalysisReport \| None`), `report_exclusion_reason` (`ReportExclusionReason \| None`) |
| `AnalysisReport` | `copinance_os.domain.models.analysis_report` | `summary`, `key_metrics`, `methodology`, `assumptions`, `limitations` |
| `ReportExclusionReason` | `copinance_os.domain.models.job` | e.g. `UNKNOWN_EXECUTOR_TYPE` when results exist but no report builder is registered |
| `JobScope` | `copinance_os.domain.models.job` | `INSTRUMENT`, `MARKET` |
| `JobTimeframe` | `copinance_os.domain.models.job` | `SHORT_TERM`, `MID_TERM`, `LONG_TERM` |
| `MarketType` | `copinance_os.domain.models.market` | `EQUITY`, `OPTIONS` |
| `OptionSide` | `copinance_os.domain.models.market` | `CALL`, `PUT`, `ALL` |
| `LLMConversationTurn` | `copinance_os.domain.models.llm_conversation` | `role` (`user` / `assistant`), `content`; used for `conversation_history` on analyze requests |
| `MarketDataPoint` | `copinance_os.domain.models.market` | OHLCV + `symbol`, `timestamp` |
| `OptionsChain` | `copinance_os.domain.models.market` | `underlying_symbol`, `expiration_date`, `calls`, `puts`, `underlying_price`, etc. |
| `Stock` | `copinance_os.domain.models.stock` | Instrument entity (`symbol`, `name`, `exchange`, `sector`, …) |
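To illustrate the shape of `conversation_history`, here is a minimal sketch: `Turn` is a local stand-in for `LLMConversationTurn` (role plus content), not the real class.

```python
from dataclasses import dataclass

@dataclass
class Turn:  # stand-in for LLMConversationTurn (role + content)
    role: str     # "user" or "assistant"
    content: str

# Prior turns go in conversation_history; the new question goes in the
# request's question field (library-only; the CLI never loads a transcript).
history = [
    Turn(role="user", content="Is AAPL expensive on a P/E basis?"),
    Turn(role="assistant", content="Relative to its recent range it trades rich."),
]
follow_up = "And compared to MSFT?"
print(len(history), history[-1].role)  # -> 2 assistant
```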

Override points

  • Custom analyze execution: Implement AnalyzeInstrumentRunner or AnalyzeMarketRunner and override container.analyze_instrument_runner or container.analyze_market_runner (see Custom executors and runners).
  • Custom job orchestration: Implement the JobRunner port (copinance_os.domain.ports.analysis_execution) and wire it when building a ResearchOrchestrator, or use a custom Container / set_container() so your app owns the wiring. The stock container does not expose job_runner() as a getter.

Using analyze without jobs

For a clean library API, use analyze through the use cases above; you typically do not build Job objects yourself. The framework builds jobs internally (via the default runners) when needed. If you do run jobs manually, prefer ResearchOrchestrator.run_job over calling JobRunner directly unless you are replacing the port.

Custom executors and runners

By default, analyze uses runners that build a Job and call ResearchOrchestrator (which delegates to JobRunner and the built-in analysis executors). You can replace this with your own runner implementation or run analysis without the job abstraction:

  1. Implement the runner port for the flow you want to customize:

    • Instrument analysis: AnalyzeInstrumentRunner in copinance_os.domain.ports.analysis_execution — implement async def run(self, request: AnalyzeInstrumentRequest) -> RunJobResult.
    • Market analysis: AnalyzeMarketRunner in the same module — implement async def run(self, request: AnalyzeMarketRequest) -> RunJobResult.
  2. Override the runner in the container so the use case uses your implementation:

    from dependency_injector import providers
    from copinance_os.domain.ports.analysis_execution import AnalyzeInstrumentRunner
    from copinance_os.domain.models.analysis import AnalyzeInstrumentRequest
    from copinance_os.domain.models.job import RunJobResult
    from copinance_os.infra.di import get_container
     
    class MyAnalyzeInstrumentRunner(AnalyzeInstrumentRunner):
        async def run(self, request: AnalyzeInstrumentRequest) -> RunJobResult:
            # Your executor or direct implementation; no Job
            return RunJobResult(success=True, results={"analysis": "..."}, error_message=None)
     
    container = get_container(llm_config=my_llm_config)
    container.analyze_instrument_runner.override(providers.Factory(MyAnalyzeInstrumentRunner))
    # analyze_instrument_use_case() will now use MyAnalyzeInstrumentRunner
  3. Job-based execution: For queues or batch, implement JobRunner and inject it into a ResearchOrchestrator you construct yourself, or replace the orchestrator/runners on a custom container.

Imports: Market request/response DTOs also live in copinance_os.domain.models.market_requests (re-exported from research.workflows.market). Analyze requests: domain.models.analysis (re-exported from research.workflows.analyze).


Analysis Profiles

Profiles store financial literacy level and preferences; they are not user accounts. Your app owns users and auth; you map users to profile IDs if you want.

  • Create a profile: use create_profile_use_case() with literacy (e.g. beginner, intermediate, advanced) and optional name.
  • Attach to a job: set job.profile_id to the profile UUID. The runner will pass literacy and preferences into the analysis context.
  • Current profile: the container has a “current profile” (used by CLI). In a library you can ignore it and always set profile_id on the job, or use set_current_profile_use_case() and leave profile_id unset to use current.
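Since your app owns users and auth, the user-to-profile mapping lives on your side. A minimal sketch (the dictionary and `uuid4()` are illustrative stand-ins; a real app would call `create_profile_use_case()` and persist the returned profile UUID):

```python
from uuid import UUID, uuid4

# Your app owns users and auth; the library only stores profiles.
user_to_profile: dict[str, UUID] = {}

def profile_for(user_id: str) -> UUID:
    # Lazily create on first use; a real app would call the
    # create-profile use case here and persist the returned UUID.
    if user_id not in user_to_profile:
        user_to_profile[user_id] = uuid4()
    return user_to_profile[user_id]

pid = profile_for("alice")
print(profile_for("alice") == pid)  # -> True: stable per user
```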

Storage and Persistence

The .copinance directory (e.g. data/, equities.json) is created by the storage layer that backs the repositories (instrument lists, profiles, etc.), not by the cache; the cache (when enabled) merely writes its tool and prompt cache files under the same storage path.

Default behavior: Storage is file-based (settings COPINANCEOS_STORAGE_TYPE=file, COPINANCEOS_STORAGE_PATH=.copinance), so the CLI and any code using the default container will create .copinance on disk unless overridden.

To avoid creating .copinance when integrating the library (e.g. in a web backend where you want no on-disk usage):

  • In code: Pass storage_type="memory" to get_container(). Repositories then use in-memory storage and no directory is created. You can also pass cache_enabled=False if you want every request to use fresh data.

    container = get_container(
        llm_config=llm_config,
        load_from_env=False,
        cache_enabled=False,
        storage_type="memory",
    )
  • Via environment: Set COPINANCEOS_STORAGE_TYPE=memory so the container uses in-memory storage without changing code.

For a long-running app you may want file persistence: keep the default (or set storage_type="file" and optionally storage_path). In-memory storage loses data when the process exits.

The library does not impose a database. You can implement the repository and storage ports (see Developer Guide) and wire them into the container if you need PostgreSQL, MongoDB, etc.
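The storage decision above can be centralized in a small helper. This is a sketch: the keyword names (`storage_type`, `storage_path`, `cache_enabled`) match this section, and the helper itself (`container_kwargs`) is a hypothetical convenience, not part of the library.

```python
import os

def container_kwargs(ephemeral: bool) -> dict:
    """Return keyword arguments to pass to get_container()."""
    if ephemeral:
        # In-memory storage: no .copinance directory is created on disk.
        return {"storage_type": "memory", "cache_enabled": False}
    # File storage persists profiles and instrument lists across restarts.
    return {
        "storage_type": "file",
        "storage_path": os.environ.get("COPINANCEOS_STORAGE_PATH", ".copinance"),
    }

print(container_kwargs(True))  # -> {'storage_type': 'memory', 'cache_enabled': False}
```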


Custom Orchestration

The default JobRunner (used inside ResearchOrchestrator.run_job) finds the executor for the job’s execution_type, builds context (e.g. profile), and runs the executor once. You can:

  • Replace job orchestration: implement the JobRunner port (async def run(job, context) -> RunJobResult) and construct a ResearchOrchestrator (or custom façade) with it—there is no container.job_runner() accessor on the stock Container.
  • Use analysis executors yourself: resolve container.analysis_executors() and call validate(job) / execute(job, context) on the executor that validates.

See Architecture and Extending for ports and extension points.
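The validate/execute dispatch described above can be sketched in isolation. `Job`, `EquityExecutor`, and `run_job` below are simplified local stand-ins for the real port and executors, assuming the documented shape (`validate(job)` then `execute(job, context)`):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Job:  # stand-in: the real Job carries scope, timeframe, etc.
    execution_type: str

class EquityExecutor:  # stand-in executor
    def validate(self, job: Job) -> bool:
        return job.execution_type == "deterministic_instrument_analysis"

    async def execute(self, job: Job, context: dict) -> dict:
        return {"ok": True, "type": job.execution_type}

async def run_job(job: Job, context: dict, executors: list) -> dict:
    # Find the first executor that accepts the job and run it once,
    # mirroring what the default JobRunner does.
    for executor in executors:
        if executor.validate(job):
            return await executor.execute(job, context)
    raise LookupError(f"no executor for {job.execution_type!r}")  # cf. ExecutorNotFoundError

out = asyncio.run(run_job(Job("deterministic_instrument_analysis"), {}, [EquityExecutor()]))
print(out["ok"])  # -> True
```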


Error Handling

  • RunJobResult.success == True: Inspect result.results and optional result.report (AnalysisReport) for the structured envelope. If result.report is None but you expected one, check result.report_exclusion_reason (e.g. unknown executor type without a registered report builder).
  • RunJobResult.success == False: Check result.error_message. The analysis or an underlying service failed.
  • ExecutorNotFoundError: No executor registered for the job’s execution type. Ensure the container includes the right executors (e.g. LLM config for question-driven analysis).
  • DomainError: Base type for domain-level errors (e.g. validation, business rule).

Catch these in your app and map them to your HTTP codes or user messages as needed.


Complete Example

```python
import asyncio
from copinance_os.ai.llm.config import LLMConfig
from copinance_os.infra.di import get_container
from copinance_os.domain.models.job import Job, JobScope, JobTimeframe
from copinance_os.domain.models.market import MarketType

async def main():
    llm_config = LLMConfig(
        provider="gemini",
        api_key="your-api-key",
        model="gemini-1.5-pro",
    )
    container = get_container(
        llm_config=llm_config,
        fred_api_key="your-fred-key",  # optional
    )
    orchestrator = container.research_orchestrator()

    # Equity analysis
    job = Job(
        scope=JobScope.INSTRUMENT,
        market_type=MarketType.EQUITY,
        instrument_symbol="AAPL",
        timeframe=JobTimeframe.MID_TERM,
        execution_type="deterministic_instrument_analysis",
    )
    result = await orchestrator.run_job(job, {})
    if result.success:
        print("Equity result keys:", result.results.keys())
    else:
        print("Error:", result.error_message)

    # Optional: use cases for finer control
    from copinance_os.research.workflows.market import SearchInstrumentsRequest
    search_uc = container.search_instruments_use_case()
    search_res = await search_uc.execute(SearchInstrumentsRequest(query="Tesla", limit=5))
    # Use search_res.instruments...

asyncio.run(main())
```

Next Steps