OpenAI

Introduction

The OpenAIModel class is a wrapper around the OpenAI API. It implements the BaseAIModel interface.

Chat Completions API vs Responses API

The OpenAI API has two different ways to interact with the model:

  1. The Chat Completions API, introduced in 2023
  2. The Responses API, introduced in 2025

By default, OpenAIModel uses the Responses API, but we support the Chat Completions API as well. You can choose to use the Chat Completions API by setting the uses_chat_completion_api parameter to True when initializing the model.
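
For example, the following sketch (assuming a valid OPENAI_API_KEY is available, as described in the examples below) switches to the Chat Completions API:

from conatus.models import OpenAIModel

# Use the Chat Completions API instead of the default Responses API
model = OpenAIModel(uses_chat_completion_api=True)
assert model.uses_chat_completion_api
response = model.simple_call("Say hello in one word.")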

Note that some features are not supported by the Chat Completions API, such as computer use.

Examples

Don't forget the API key!

For the following examples to work, you need to set the OPENAI_API_KEY environment variable. We also support .env files, so you can use that instead of the environment variable. You can also pass the API key as an argument to the constructor.
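
As a minimal sketch (the key below is a placeholder, not a real credential):

from conatus.models import OpenAIModel

# Option 1: rely on the OPENAI_API_KEY environment variable (or a .env file)
model = OpenAIModel()

# Option 2: pass the key explicitly to the constructor
model = OpenAIModel(api_key="sk-placeholder")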

Simple call

from conatus.models import OpenAIModel

model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.

Call with actions

from conatus import action, AIPrompt
from conatus.models import OpenAIModel

@action
def multiply_two_numbers(a: int, b: int) -> int:
    return a * b

model = OpenAIModel()
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    actions=[multiply_two_numbers],
)
response = model.call(prompt)
# > AIResponse(...)

Switch between models

from conatus.models import OpenAIModel

model = OpenAIModel(model_name="gpt-4o-mini")
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067

model = OpenAIModel(model_name="o3-mini")
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067

model = OpenAIModel(model_type="reasoning")
assert model.model_config.model_name == "o3"
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067

Configuration

You can configure the model using the model_config argument either at initialization or later on:

from conatus.models import OpenAIModel
from conatus.models.open_ai import OpenAIModelConfig

# This works
model = OpenAIModel(model_config={"temperature": 0.5})
assert model.model_config.temperature == 0.5

# This is essentially equivalent
model = OpenAIModel(model_config=OpenAIModelConfig(temperature=0.7))
assert model.model_config.temperature == 0.7

# You can also define the configuration as you call the model
# We recommend passing a dictionary here, so that users don't unintentionally
# re-establish default values.
response = model.simple_call(
    "What is the world's oldest newspaper still in circulation?",
    model_config={"temperature": 0.9},
)
# > The world's oldest newspaper still in circulation is the public
# > record from the government of Sweden.

Message conversion methods are omitted on this page

The OpenAIModel class contains a number of private methods related to the conversion of messages from their OpenAI specification to AIMessage objects, and vice-versa.

These methods are not documented on this website, but you can look at the source code if you're curious. This might be of particular interest if you want to implement your own BaseAIModel subclass.

For more information on how to implement your own model, see the "How-to: Add a new AI provider" guide.

Model

conatus.models.open_ai.open_ai.OpenAIModel dataclass

OpenAIModel(
    model_config: (
        OpenAIModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    client: AsyncOpenAI | None = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None,
    uses_chat_completion_api: bool = False
)

Bases: BaseAIModel

OpenAI model.

PARAMETER DESCRIPTION
model_config

The configuration for the OpenAI model. This can be an OpenAIModelConfig object, a ModelConfig object, or a dictionary.

TYPE: OpenAIModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

client

The client to use. If not provided, a new client will be created.

TYPE: AsyncOpenAI | None DEFAULT: None

get_default_client_if_not_given

Whether to get the default client if not provided.

TYPE: bool DEFAULT: True

api_key

The API key to use. If not provided, it will be read from the OPENAI_API_KEY environment variable. If client is provided, api_key will be ignored.

TYPE: str | None DEFAULT: None

model_name

The name of the model to use. If not provided, it will be set to the default model name.

TYPE: str | None DEFAULT: None

model_type

The type of model to use. This is used to determine the model name only if model_name is not provided. If provided, overrides any model type in the config.

TYPE: ModelType | None DEFAULT: None

uses_chat_completion_api

Whether the model uses the Chat Completions API. If not provided, it will be set to False.

TYPE: bool DEFAULT: False

Source code in conatus/models/open_ai/open_ai.py
def __init__(
    self,
    model_config: OpenAIModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    client: AsyncOpenAI | None = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None,
    uses_chat_completion_api: bool = False,
) -> None:
    """Initialize the OpenAI model.

    Args:
        model_config: The configuration for the OpenAI model. This can be a
            [`OpenAIModelConfig`][conatus.models.open_ai.OpenAIModelConfig]
            object, a [`ModelConfig`][conatus.models.base.ModelConfig]
            object, or a dictionary.
        client: The client to use. If not provided, a new client will be
            created.
        get_default_client_if_not_given: Whether to get the default client
            if not provided.
        api_key: The API key to use. If not provided, it will be read from
            the `OPENAI_API_KEY` environment variable. If `client` is
            provided, `api_key` will be ignored.
        model_name: The name of the model to use. If not provided, it will
            be set to the default model name.
        model_type: The type of model to use. This is used to determine the
            model name only if `model_name` is not provided. If provided,
            overrides any model type in the config.
        uses_chat_completion_api: Whether the model uses the Chat
            Completions API. If not provided, it will be set to `False`.
    """
    super().__init__(
        model_config=model_config,
        client=client,
        api_key=api_key,
        model_name=model_name,
        model_type=model_type,
        get_default_client_if_not_given=get_default_client_if_not_given,
    )
    self.uses_chat_completion_api = uses_chat_completion_api
    logger.info(
        "Initializing OpenAI model: %s", self.model_config.model_name
    )
    if self.model_config.use_mock:
        self._load_mock_functions()

model_config instance-attribute

model_config: OpenAIModelConfig

The configuration for the OpenAI model.

client instance-attribute

client: AsyncOpenAI

The OpenAI client.

provider class-attribute instance-attribute

provider: ProviderName = 'openai'

The provider name.

api_key_env_variable class-attribute instance-attribute

api_key_env_variable: str = 'OPENAI_API_KEY'

The environment variable that contains the API key.

uses_chat_completion_api instance-attribute

uses_chat_completion_api: bool = uses_chat_completion_api

Whether the model uses the chat completions API.

False by default, meaning that the model uses the Responses API.

model_config_cls instance-attribute

model_config_cls: type[ModelConfig]

The class of the model configuration.

config property

config: ModelConfig

The configuration for the model.

This is a convenience property for the model_config attribute.
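
A minimal sketch (assuming the property simply returns the model_config attribute, and that OPENAI_API_KEY is set as in the examples above):

from conatus.models import OpenAIModel

model = OpenAIModel()
# `config` is a shorthand for `model_config`
assert model.config is model.model_config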

default_model_name classmethod

default_model_name(
    model_type: ModelType | None,
) -> ModelName | None

Get the default model name for the OpenAI model.

PARAMETER DESCRIPTION
model_type

The type of model to use.

TYPE: ModelType | None

RETURNS DESCRIPTION
ModelName | None

The default model name for the OpenAI model.
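
A quick sketch of the expected behaviour, based on the mapping in the source below:

from conatus.models import OpenAIModel

assert OpenAIModel.default_model_name("reasoning") == "openai:o3"
assert OpenAIModel.default_model_name("chat") == "openai:gpt-4.1"
assert OpenAIModel.default_model_name(None) is None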

Source code in conatus/models/open_ai/open_ai.py
@classmethod
@override
def default_model_name(
    cls, model_type: ModelType | None
) -> ModelName | None:
    """Get the default model name for the OpenAI model.

    Args:
        model_type: The type of model to use.

    Returns:
        The default model name for the OpenAI model.
    """
    if model_type is None:
        return None
    match model_type:
        case "chat":
            return "openai:gpt-4.1"
        case "execution":
            return "openai:o4-mini"
        case "computer_use":
            return "openai:computer-use-preview"
        case "reasoning":
            return "openai:o3"

default_client

default_client(
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType
) -> AsyncOpenAI

Return the default client for the OpenAI model.

PARAMETER DESCRIPTION
model_config

The configuration for the OpenAI model.

TYPE: ModelConfig

api_key

The API key for the OpenAI model. Takes precedence over the API key in the model config.

TYPE: str | None

**kwargs

Additional arguments to pass to the OpenAI client.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AsyncOpenAI

The OpenAI client.

Source code in conatus/models/open_ai/open_ai.py
@override
def default_client(
    self,
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType,
) -> AsyncOpenAI:
    """Return the default client for the OpenAI model.

    Args:
        model_config: The configuration for the OpenAI model.
        api_key: The API key for the OpenAI model. Takes precedence over
            the API key in the model config.
        **kwargs: Additional arguments to pass to the OpenAI client.

    Returns:
        The OpenAI client.
    """
    api_key = api_key or model_config.api_key or self.get_api_key()
    return AsyncOpenAI(api_key="fake" if model_config.use_mock else api_key)

default_config

default_config() -> OpenAIModelConfig

Return the default configuration for the model.

Source code in conatus/models/open_ai/open_ai.py
@override
def default_config(self) -> OpenAIModelConfig:
    """Return the default configuration for the model."""
    return OpenAIModelConfig()

__del__

__del__() -> None

Delete the model.

Source code in conatus/models/open_ai/open_ai.py
def __del__(self) -> None:
    """Delete the model."""
    if getattr(self, "client", None):
        with contextlib.suppress(Exception):
            asyncio.run(self.client.close())

call

call(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        OpenAIModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the OpenAI model using the standardized prompt and response.

from conatus import AIPrompt
from conatus.models.open_ai import OpenAIModel

model = OpenAIModel()
prompt = AIPrompt("Hello, how are you?")
response = model.call(prompt)
PARAMETER DESCRIPTION
prompt

The prompt to send to the OpenAI model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the OpenAI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: OpenAIModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the OpenAI model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the OpenAI model.

Source code in conatus/models/open_ai/open_ai.py
@override
def call(  # type: ignore[override]
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: OpenAIModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the OpenAI model using the standardized prompt and response.

    ```python
    from conatus import AIPrompt
    from conatus.models.open_ai import OpenAIModel

    model = OpenAIModel()
    prompt = AIPrompt("Hello, how are you?")
    response = model.call(prompt)
    ```

    Args:
        prompt: The prompt to send to the OpenAI model.
        model_config: The configuration to use for the OpenAI model.
            Passing a dictionary is recommended, so that users don't
            unintentionally re-establish default values.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback: A callback for debugging purposes. This
            callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string.
        **kwargs: Additional arguments to pass to the OpenAI model.

    Returns:
        The response from the OpenAI model.
    """
    return run_async(
        self.acall(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback=response_log_callback,
            **kwargs,
        )
    )

acall async

acall(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        OpenAIModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the OpenAI model using the standardized prompt and response.

For its sync counterpart, see call.

PARAMETER DESCRIPTION
prompt

The prompt to send to the OpenAI model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the OpenAI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: OpenAIModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the OpenAI model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the OpenAI model.

Source code in conatus/models/open_ai/open_ai.py
@override
async def acall(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: OpenAIModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback: Callable[[str], None] | None = None,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the OpenAI model using the standardized prompt and response.

    For its sync counterpart, see [`call`
    ][conatus.models.open_ai.OpenAIModel.call].

    Args:
        prompt: The prompt to send to the OpenAI model.
        model_config: The configuration to use for the OpenAI model.
            Passing a dictionary is recommended, so that users don't
            unintentionally re-establish default values.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback: A callback for debugging purposes. This
            callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string.
        **kwargs: Additional arguments to pass to the OpenAI model.

    Returns:
        The response from the OpenAI model.
    """
    if self.uses_chat_completion_api:
        return await self._acall_cc(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback=response_log_callback,
        )
    return await self._acall_response(
        prompt,
        model_config,
        printing_mixin_cls=printing_mixin_cls,
        prompt_log_callback=prompt_log_callback,
        response_log_callback=response_log_callback,
    )

call_stream

call_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        OpenAIModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the OpenAI model using the standardized prompt and response.

For its async counterpart, see acall_stream.

PARAMETER DESCRIPTION
prompt

The prompt to send to the OpenAI model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the OpenAI model.

TYPE: OpenAIModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called for each chunk of the response, and figures it out on the backend.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the OpenAI model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the OpenAI model.
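
A minimal sketch of a streaming call (the prompt text is illustrative):

from conatus import AIPrompt
from conatus.models import OpenAIModel

model = OpenAIModel()
prompt = AIPrompt("Write a haiku about the sea.")
# The response is streamed (and, depending on stdout_mode, previewed as it
# arrives); the full AIResponse is returned once the stream completes.
response = model.call_stream(prompt)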

Source code in conatus/models/open_ai/open_ai.py
@override
def call_stream(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: OpenAIModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the OpenAI model using the standardized prompt and response.

    For its async counterpart, see [`acall_stream`
    ][conatus.models.open_ai.OpenAIModel.acall_stream].

    Args:
        prompt: The prompt to send to the OpenAI model.
        model_config: The configuration to use for the OpenAI model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information
            (e.g. the response, the model name, the usage, etc.) as a
            JSON string. Note that this callback is called for each chunk
            of the response, and figures it out on the backend.
        **kwargs: Additional arguments to pass to the OpenAI model.

    Returns:
        The response from the OpenAI model.
    """
    return run_async(
        self.acall_stream(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback_stream=response_log_callback_stream,
            **kwargs,
        )
    )

acall_stream async

acall_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        OpenAIModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the OpenAI model using the standardized prompt and response.

For its sync counterpart, see call_stream.

PARAMETER DESCRIPTION
prompt

The prompt to send to the OpenAI model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the OpenAI model.

TYPE: OpenAIModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called for each chunk of the response, and figures it out on the backend.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the OpenAI model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the OpenAI model.

Source code in conatus/models/open_ai/open_ai.py
@override
async def acall_stream(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: OpenAIModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the OpenAI model using the standardized prompt and response.

    For its sync counterpart, see [`call_stream`
    ][conatus.models.open_ai.OpenAIModel.call_stream].

    Args:
        prompt: The prompt to send to the OpenAI model.
        model_config: The configuration to use for the OpenAI model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information
            (e.g. the response, the model name, the usage, etc.) as a
            JSON string. Note that this callback is called for each chunk
            of the response, and figures it out on the backend.
        **kwargs: Additional arguments to pass to the OpenAI model.

    Returns:
        The response from the OpenAI model.
    """
    if self.uses_chat_completion_api:
        return await self._acall_stream_cc(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback_stream=response_log_callback_stream,
        )
    return await self._acall_stream_response(
        prompt,
        model_config,
        printing_mixin_cls=printing_mixin_cls,
        prompt_log_callback=prompt_log_callback,
        response_log_callback_stream=response_log_callback_stream,
        **kwargs,
    )

with_config

with_config(
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False
) -> Self

Return a new instance of the model with the given configuration.

This is useful for quickly creating a new model without having to instantiate a new client.

from conatus.models import OpenAIModel
from conatus.models.config import ModelConfig

model = OpenAIModel()

model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

# Note that this also works if you pass a dictionary.
model_with_config = model.with_config({"model_name": "gpt-4o"})

assert model_with_config.config.model_name == "gpt-4o"
assert model_with_config.client == model.client
PARAMETER DESCRIPTION
model_config

The configuration for the new model.

TYPE: ModelConfig | ModelConfigTD | None

ignore_current_config

Whether to ignore the current configuration. If True, the new configuration will replace the current configuration. If False, the new configuration will be merged with the current configuration.

TYPE: bool DEFAULT: False

inplace

Whether to modify the current instance in place. If True, the current instance will be modified in place. If False, a new instance will be returned.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Self

A new instance of the model with the given configuration.

Source code in conatus/models/base.py
def with_config(
    self,
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False,
) -> Self:
    """Return a new instance of the model with the given configuration.

    This is useful for quickly creating a new model without having to
    instantiate a new client.

    ```python
    from conatus.models import OpenAIModel
    from conatus.models.config import ModelConfig

    model = OpenAIModel()

    model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

    # Note that this also works if you pass a dictionary.
    model_with_config = model.with_config({"model_name": "gpt-4o"})

    assert model_with_config.config.model_name == "gpt-4o"
    assert model_with_config.client == model.client
    ```

    Args:
        model_config: The configuration for the new model.
        ignore_current_config: Whether to ignore the current configuration.
            If `True`, the new configuration will replace the current
            configuration. If `False`, the new configuration will be merged
            with the current configuration.
        inplace: Whether to modify the current instance in place. If `True`,
            the current instance will be modified in place. If `False`, a
            new instance will be returned.

    Returns:
        A new instance of the model with the given configuration.
    """
    new_config = (
        self.model_config.apply_config(
            new_config=model_config,
            inplace=False,
        )
        if not ignore_current_config
        else type(self.model_config).from_dict_instance_or_none(
            model_config
        )
    )
    if inplace:
        self.model_config = new_config
        return self
    return type(self)(
        model_config=new_config,
        client=self.client,
    )

get_api_key

get_api_key() -> str

Get the API key for the model.

This function should be implemented to retrieve environment variables.

RETURNS DESCRIPTION
str

The API key.

RAISES DESCRIPTION
AIModelAPIKeyMissingError

If the API key is not found in the environment variables.

ValueError

If the API key is not set in the class attribute.
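
A minimal sketch of the lookup, assuming the key is set in the environment (the value below is a placeholder):

import os

from conatus.models import OpenAIModel

os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # placeholder, not a real key
model = OpenAIModel()
assert model.get_api_key() == "sk-placeholder"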

Source code in conatus/models/base.py
def get_api_key(self) -> str:
    """Get the API key for the model.

    This function should be implemented to retrieve environment variables.

    Returns:
        The API key.

    Raises:
        AIModelAPIKeyMissingError: If the API key is not found in the
            environment variables.
        ValueError: If the API key is not set in the class attribute.
    """
    do_load_dotenv = (
        os.environ.get("TEST_DO_NOT_LOAD_DOTENV", "false").lower() != "true"
    )
    if do_load_dotenv and (
        "PYTEST_CURRENT_TEST" not in os.environ
    ):  # pragma: no branch
        _ = load_dotenv()  # pragma: no cover
    if getattr(self, "api_key_env_variable", None) is None:
        msg = (
            "You need to set the `api_key_env_variable` class attribute "
            "in the subclass.\n"
        )
        raise ValueError(msg)
    if self.api_key_env_variable not in os.environ:
        msg = (
            f"You need to set the {self.api_key_env_variable} "
            "environment variable.\n"
        )
        raise AIModelAPIKeyMissingError(msg)

    return os.environ[self.api_key_env_variable]

respawn_client

respawn_client() -> None

Respawn the client.

This method is used to respawn the client. It is mostly used so that we can refresh the client, which might be associated with an incompatible event loop.

Source code in conatus/models/base.py
def respawn_client(self) -> None:
    """Respawn the client.

    This method is used to respawn the client. It is mostly used so that
    we can refresh the client, which might be associated with an
    incompatible event loop.
    """
    with contextlib.suppress(RuntimeError):
        del self.client
    # We only cover this part in testing
    if (
        os.environ.get("ALWAYS_USE_MOCK", "false").lower() == "true"
        or self.model_config.use_mock
    ):  # pragma: no branch
        self.model_config.use_mock = True
        logger.info("Using mock client for %s", self.__class__.__name__)
        self.client = None
        return
    self.client = self.default_client(  # pragma: no cover
        model_config=self.model_config,
        api_key=self.model_config.api_key or self.get_api_key(),
    )

simple_call

simple_call(
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False
) -> str

Simple call to the AI model.

This is a convenience method for the call method.

from conatus.models import OpenAIModel

model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: str

model_config

The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

stream

Whether to stream the response. If True, the response will be streamed to the user. If False, the response will be returned as a string.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
str

The response from the AI model.

Source code in conatus/models/base.py
def simple_call(
    self,
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False,
) -> str:
    """Simple call to the AI model.

    This is a convenience method for the `call` method.

    ```python
    from conatus.models import OpenAIModel

    model = OpenAIModel()
    q = "Which US state has never recorded temperatures below 0°F?"
    response = model.simple_call(q)
    # > That would be Hawaii.
    ```

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model. Passing a
            dictionary is recommended, so that users don't unintentionally
            re-establish default values.
        stream: Whether to stream the response. If `True`, the response
            will be streamed to the user. If `False`, the response will
            be returned as a string.

    Returns:
        The response from the AI model.
    """
    ai_response = (
        self.call(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
        if not stream
        else self.call_stream(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
    )
    return ai_response.all_text or "<empty response>"

Configuration class

There are three relevant classes for the configuration of the OpenAI model:

  • OpenAIModelConfig, which handles all the configuration values for the OpenAI model.
  • OpenAIModelCCSpec, which handles the configuration values that are specific to the Chat Completions API. Note here that this is a TypedDict, because we need to pass them as keyword arguments to the OpenAI API.
  • OpenAIModelResponseSpec, which handles the configuration values that are specific to the Responses API. Note here that this is a TypedDict, because we need to pass them as keyword arguments to the OpenAI API.

conatus.models.open_ai.open_ai.OpenAIModelConfig dataclass

OpenAIModelConfig(
    not_given_sentinel: object = NOT_GIVEN,
    api_key: OptionalArg[str] = CTUS_NOT_GIVEN,
    model_name: str = DEFAULT_OPENAI_MODEL_NAME,
    max_tokens: int = DEFAULT_OPENAI_MAX_TOKENS,
    stdout_mode: Literal[
        "normal", "preview", "silent"
    ] = "preview",
    temperature: float | NotGiven = NOT_GIVEN,
    computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN,
    use_mock: bool = DEFAULT_OPENAI_USE_MOCK,
    only_pass_new_messages: OptionalArg[
        bool
    ] = CTUS_NOT_GIVEN,
    previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN,
    truncation: Literal["auto", "disabled"] = "auto",
    timeout: float | NotGiven | None = NOT_GIVEN,
    mock_stream_cc_fn: (
        IteratorAsyncFn[ChatCompletionChunk] | None
    ) = None,
    mock_stream_response_fn: (
        IteratorAsyncFn[ResponseStreamEvent] | None
    ) = None,
    mock_cc_fn: AsyncFn[ChatCompletion] | None = None,
    mock_response_fn: AsyncFn[Response] | None = None,
    reasoning_effort: (
        Literal["low", "medium", "high"] | None
    ) = None,
    reasoning_summary: (
        Literal["auto", "concise", "detailed"] | None
    ) = None,
)

Bases: ModelConfig

The configuration for an OpenAI model, with defaults.

not_given_sentinel class-attribute instance-attribute

not_given_sentinel: object = NOT_GIVEN

The sentinel value for not given.

model_name class-attribute instance-attribute

The name of the model to use.

max_tokens class-attribute instance-attribute

The maximum number of tokens to use.

temperature class-attribute instance-attribute

temperature: float | NotGiven = NOT_GIVEN

The temperature to use.

use_mock class-attribute instance-attribute

Whether to use a mock response.

If so, we will use MockChatCompletion objects to mock the response.

timeout class-attribute instance-attribute

timeout: float | NotGiven | None = NOT_GIVEN

The timeout to use.

stdout_mode class-attribute instance-attribute

stdout_mode: Literal["normal", "preview", "silent"] = (
    "preview"
)

The mode to use for the standard output.

  • 'normal': Notify the user that we're waiting for a response, and then that we're receiving the response, displaying the number of chunks received so far.
  • 'preview': Preview the response with a fancy output that updates as the response chunks are received. Only works if the response is a stream. If preview is set and the response is not a stream, it will default to 'normal'.
  • 'silent': Do not print anything to the standard output.

Note that if we detect that we are running in a non-TTY environment, we will use a special mode called 'non_tty', unless the user asked for 'silent'.
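
For example, to silence all standard-output reporting (a sketch using the dictionary form of the configuration):

from conatus.models import OpenAIModel

# No progress or preview output will be printed for this model's calls
model = OpenAIModel(model_config={"stdout_mode": "silent"})
assert model.model_config.stdout_mode == "silent"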

truncation class-attribute instance-attribute

truncation: Literal['auto', 'disabled'] = 'auto'

The truncation to use.

mock_stream_cc_fn class-attribute instance-attribute

mock_stream_cc_fn: (
    IteratorAsyncFn[ChatCompletionChunk] | None
) = None

The function to use to mock the stream.

Note that if it is passed, we will NOT call the OpenAI API.

mock_stream_response_fn class-attribute instance-attribute

mock_stream_response_fn: (
    IteratorAsyncFn[ResponseStreamEvent] | None
) = None

The function to use to mock the response.

Note that if it is passed, we will NOT call the OpenAI API.

mock_cc_fn class-attribute instance-attribute

mock_cc_fn: AsyncFn[ChatCompletion] | None = None

The function to use to mock the response.

Note that if it is passed, we will NOT call the OpenAI API.

mock_response_fn class-attribute instance-attribute

mock_response_fn: AsyncFn[Response] | None = None

The function to use to mock the response.

Note that if it is passed, we will NOT call the OpenAI API.

reasoning_effort class-attribute instance-attribute

reasoning_effort: (
    Literal["low", "medium", "high"] | None
) = None

The reasoning effort to use.

Only used if the model name is a reasoning model.

reasoning_summary class-attribute instance-attribute

reasoning_summary: (
    Literal["auto", "concise", "detailed"] | None
) = None

The reasoning summary to use.

Only used if the model name is a reasoning model.
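
A sketch of configuring a reasoning model with both options (the values shown are illustrative):

from conatus.models import OpenAIModel

model = OpenAIModel(
    model_type="reasoning",
    model_config={
        "reasoning_effort": "high",
        "reasoning_summary": "auto",
    },
)
assert model.model_config.reasoning_effort == "high"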

api_key class-attribute instance-attribute

The API key to use, if any.

If not provided, the API key will be taken from the environment variable specified in the api_key_env_variable attribute of the model.

computer_use_mode class-attribute instance-attribute

computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN

Whether to use the computer use mode.

If set to True, the model will be configured to use the computer use mode.

only_pass_new_messages class-attribute instance-attribute

only_pass_new_messages: OptionalArg[bool] = CTUS_NOT_GIVEN

Whether to only pass new messages to the model.

If set to True, only new messages will be passed to the model, rather than the entire history. This is useful for "stateful" APIs, where the history is not needed.

previous_messages_id class-attribute instance-attribute

previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN

The ID of the previous messages.

This is useful for "stateful" APIs, where the history is not needed. This should only be used if only_pass_new_messages is True.
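
A sketch of the stateful pattern (the response ID below is a hypothetical placeholder):

from conatus.models import OpenAIModel

model = OpenAIModel(
    model_config={
        "only_pass_new_messages": True,
        # Hypothetical ID of a previous Responses API response
        "previous_messages_id": "resp_abc123",
    },
)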

get_kwargs

get_kwargs(api_used: Literal['cc']) -> OpenAIModelCCSpec
get_kwargs(
    api_used: Literal["response"],
) -> OpenAIModelResponseSpec
get_kwargs(
    api_used: Literal["cc", "response"],
) -> OpenAIModelCCSpec | OpenAIModelResponseSpec

Wrapper around the to_kwargs method.

PARAMETER DESCRIPTION
api_used

The API to use.

TYPE: Literal['cc', 'response']

RETURNS DESCRIPTION
OpenAIModelCCSpec | OpenAIModelResponseSpec

The keyword arguments.
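
A sketch of what the wrapper returns for a default configuration, mirroring the to_kwargs examples further below (exact keys depend on the spec definitions):

from conatus.models.open_ai import OpenAIModelConfig

config = OpenAIModelConfig()

# Chat Completions API keyword arguments
assert config.get_kwargs("cc") == {"max_tokens": 4096}

# Responses API keyword arguments (note the renamed token limit)
assert config.get_kwargs("response") == {
    "max_output_tokens": 4096,
    "truncation": "auto",
}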

Source code in conatus/models/open_ai/open_ai.py
def get_kwargs(
    self, api_used: Literal["cc", "response"]
) -> OpenAIModelCCSpec | OpenAIModelResponseSpec:
    """Wrapper around the `to_kwargs` method.

    Args:
        api_used: The API to use.

    Returns:
        The keyword arguments.
    """
    kwargs_to_pass: OpenAIModelCCSpec | OpenAIModelResponseSpec
    if api_used == "cc":
        kwargs_to_pass = super().to_kwargs(
            specification=OpenAIModelCCSpec,
            not_given_sentinel=OAI_NOT_GIVEN,
        )
        if (
            self.model_name in _REASONING_MODEL_NAMES
            and "max_tokens" in kwargs_to_pass
        ):
            kwargs_to_pass["max_completion_tokens"] = kwargs_to_pass[
                "max_tokens"
            ]
            kwargs_to_pass["max_tokens"] = OAI_NOT_GIVEN
    else:
        kwargs_to_pass = super().to_kwargs(
            specification=OpenAIModelResponseSpec,
            not_given_sentinel=OAI_NOT_GIVEN,
            argument_mapping={
                "max_tokens": "max_output_tokens",
                "previous_messages_id": "previous_response_id",
            },
        )
        # Creating the reasoning dictionary
        reasoning_effort_not_none = self.reasoning_effort is not None
        reasoning_summary_not_none = self.reasoning_summary is not None
        if reasoning_effort_not_none or reasoning_summary_not_none:
            reasoning_dict = Reasoning()
            if reasoning_effort_not_none:
                reasoning_dict["effort"] = self.reasoning_effort
            if reasoning_summary_not_none:
                reasoning_dict["summary"] = self.reasoning_summary
            kwargs_to_pass["reasoning"] = reasoning_dict
    return kwargs_to_pass

to_kwargs

to_kwargs(
    specification: None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD
to_kwargs(
    specification: type[TDSpec],
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> TDSpec
to_kwargs(
    specification: type[TDSpec] | None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec

Return the configuration as a dictionary.

You can provide a specification, which is a dictionary that matches the arguments expected by the provider. If a specification is provided, the method will return a dictionary that matches the specification (i.e. with only the keys that are expected by the provider).

You can also provide a not_given_sentinel, which is an object that is used to represent a missing argument. If we encounter this sentinel object, we will not include it in the returned dictionary.

Example

Using a specification
from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
from openai import NOT_GIVEN

model = OpenAIModel()

args_to_pass = model.config.to_kwargs(
    specification=OpenAIModelCCSpec,
    not_given_sentinel=NOT_GIVEN,
)

assert args_to_pass == {'max_tokens': 4096}

# And now you can do something like:
# response = self.client.chat.completions.create(
#         messages=messages,
#         **args_to_pass
#  )
Using an argument mapping
from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
from openai import NOT_GIVEN

model = OpenAIModel()

args_to_pass = model.config.to_kwargs(
    specification=OpenAIModelResponseSpec,
    argument_mapping={"max_tokens": "max_output_tokens"},
    not_given_sentinel=NOT_GIVEN,
)

assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
PARAMETER DESCRIPTION
specification

The specification to use. This should be a TypedDict; if it's not, the method will throw a TypeError. If no specification is provided, the method will return all the keys of the configuration.

TYPE: type[TDSpec] | None DEFAULT: None

not_given_sentinel

The sentinel object to use.

TYPE: object DEFAULT: CTUS_NOT_GIVEN

argument_mapping

A dictionary that maps the keys of the configuration to the keys of the provider. The mapping is of the form {original_key: new_key, ...}.

TYPE: dict[str, str] | None DEFAULT: None

RETURNS DESCRIPTION
ModelConfigTD | TDSpec

The configuration as a dictionary.

RAISES DESCRIPTION
TypeError

If the specification is not a TypedDict.

Source code in conatus/models/config.py
def to_kwargs(
    self,
    specification: type[TDSpec] | None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec:
    """Return the configuration as a dictionary.

    You can provide a specification, which is a dictionary that
    matches the arguments expected by the provider. If a specification
    is provided, the method will return a dictionary that matches the
    specification (i.e. with only the keys that are expected by the
    provider).

    You can also provide a `not_given_sentinel`, which is an object
    that is used to represent a missing argument. If we encounter this
    sentinel object, we will not include it in the returned dictionary.

    # Example

    ## Using a specification

    ```python
    from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
    from openai import NOT_GIVEN

    model = OpenAIModel()

    args_to_pass = model.config.to_kwargs(
        specification=OpenAIModelCCSpec,
        not_given_sentinel=NOT_GIVEN,
    )

    assert args_to_pass == {'max_tokens': 4096}

    # And now you can do something like:
    # response = self.client.chat.completions.create(
    #         messages=messages,
    #         **args_to_pass
    #  )
    ```

    ## Using an argument mapping

    ```python
    from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
    from openai import NOT_GIVEN

    model = OpenAIModel()

    args_to_pass = model.config.to_kwargs(
        specification=OpenAIModelResponseSpec,
        argument_mapping={"max_tokens": "max_output_tokens"},
        not_given_sentinel=NOT_GIVEN,
    )

    assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
    ```

    Args:
        specification: The specification to use. This should be a
            [`TypedDict`][typing.TypedDict]; if it's not, the method will
            throw a `TypeError`. If no specification is provided, the
            method will return all the keys of the configuration.
        not_given_sentinel: The sentinel object to use.
        argument_mapping: A dictionary that maps the keys of the
            configuration to the keys of the provider. The mapping is
            of the form `{original_key: new_key, ...}`.

    Returns:
        The configuration as a dictionary.

    Raises:
        TypeError: If the specification is not a [`TypedDict`
            ][typing.TypedDict].
    """
    keys: list[str]
    not_given_sentinels = {
        CTUS_NOT_GIVEN,
        self.not_given_sentinel,
        not_given_sentinel,
    }
    # If a specification exists, we assume it's a TypedDict,
    # and extract the optional and required keys.
    if specification is not None:
        optional_keys = getattr(specification, "__optional_keys__", None)
        required_keys = getattr(specification, "__required_keys__", None)
        if optional_keys is None or required_keys is None:
            msg = "The specification must be a TypedDict."
            raise TypeError(msg)
        keys = [
            *(cast("frozenset[str]", required_keys)),
            *(cast("frozenset[str]", optional_keys)),
        ]
    # Otherwise, we just return all the keys.
    else:
        keys = list[str](self.__dict__.keys())

    config_as_dict = dict(self.__dict__.items())
    if argument_mapping is not None:
        for k, v in argument_mapping.items():
            if k in config_as_dict:
                config_as_dict[v] = config_as_dict[k]
                del config_as_dict[k]

    return cast(
        "ModelConfigTD | TDSpec",
        {
            k: v
            for k, v in config_as_dict.items()  # pyright: ignore[reportAny]
            if v not in not_given_sentinels and k in keys
        },
    )

from_dict classmethod

from_dict(config: ModelConfigTD) -> Self

Create a new instance from a dictionary.

PARAMETER DESCRIPTION
config

The configuration as a dictionary.

TYPE: ModelConfigTD

RETURNS DESCRIPTION
Self

The new instance.

Source code in conatus/models/config.py
@classmethod
def from_dict(cls, config: ModelConfigTD) -> Self:
    """Create a new instance from a dictionary.

    Args:
        config: The configuration as a dictionary.

    Returns:
        The new instance.
    """
    return cls(**config)

from_dict_instance_or_none classmethod

from_dict_instance_or_none(
    config: Self | ModelConfigTD | None,
) -> Self

Create a new instance from a dictionary or an instance.

PARAMETER DESCRIPTION
config

The configuration as a dictionary or an instance.

TYPE: Self | ModelConfigTD | None

RETURNS DESCRIPTION
Self

The new instance.

Source code in conatus/models/config.py
@classmethod
def from_dict_instance_or_none(
    cls, config: Self | ModelConfigTD | None
) -> Self:
    """Create a new instance from a dictionary or an instance.

    Args:
        config: The configuration as a dictionary or an instance.

    Returns:
        The new instance.
    """
    if config is None:
        return cls()
    if isinstance(config, Mapping):
        return cls.from_dict(config)
    return config

apply_config

apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: Literal[True]
) -> None
apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: Literal[False] = False
) -> Self
apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: bool = False
) -> Self | None

Copy the configuration and apply new values to it.

This ensures that you can create a hierarchy of configurations.

PARAMETER DESCRIPTION
new_config

The new configuration.

TYPE: Self | ModelConfigTD | None

inplace

Whether to update the instance in place, or return a new copy

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Self | None

None if the modification happens in place; otherwise, return a new instance with the modified configuration
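
A minimal sketch of the merging behaviour (field values are illustrative):

from conatus.models.open_ai import OpenAIModelConfig

base = OpenAIModelConfig(temperature=0.2)

# Merge new values on top of the existing configuration
merged = base.apply_config({"max_tokens": 1024})
assert merged.temperature == 0.2
assert merged.max_tokens == 1024

# The in-place variant returns None and mutates `base` directly
base.apply_config({"temperature": 0.9}, inplace=True)
assert base.temperature == 0.9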

Source code in conatus/models/config.py
def apply_config(
    self, new_config: Self | ModelConfigTD | None, *, inplace: bool = False
) -> Self | None:
    """Copy the configuration and apply new values to it.

    This ensures that you can create a hierarchy of configurations.

    Args:
        new_config: The new configuration.
        inplace: Whether to update the instance in place, or return a new
            copy

    Returns:
        None if the modification happens in place; otherwise, return a new
            instance with the modified configuration
    """
    if new_config is None:
        return self
    if isinstance(new_config, Mapping):
        new_config_as_dict = new_config
    else:
        new_config_as_dict = new_config.to_kwargs()
    if inplace:
        self.__dict__.update(new_config_as_dict)
        return None
    new_config_as_dict = self.to_kwargs() | new_config_as_dict
    return type(self).from_dict(new_config_as_dict)

conatus.models.open_ai.open_ai.OpenAIModelCCSpec

Bases: TypedDict

The arguments expected by the OpenAI Chat Completions API.

This is distinct from OpenAIModelConfig. These arguments are only the ones that are passed to the OpenAI client during the call method.

In other words, any other arguments in the OpenAIModelConfig, which may serve general configuration purposes (such as the API key), are not included here.

conatus.models.open_ai.open_ai.OpenAIModelResponseSpec

Bases: TypedDict

The arguments expected by the OpenAI Responses API.

This is distinct from OpenAIModelConfig. These arguments are only the ones that are passed to the OpenAI client during the call method.

In other words, any other arguments in the OpenAIModelConfig, which may serve general configuration purposes (such as the API key), are not included here.

ATTRIBUTE DESCRIPTION
previous_response_id

The ID of the previous response. The Responses API allows you to use the ID of a previous response to generate a new response. This is useful for things like follow-ups and continuations.

TYPE: str | NotGiven

max_output_tokens

The maximum number of tokens to use.

TYPE: int | NotGiven

truncation

The truncation setting to use.

TYPE: Literal['auto', 'disabled'] | NotGiven

instructions

The instructions to use. This is used to pass the system message to the API.

TYPE: str | NotGiven

reasoning

The reasoning parameters to use. This is used to pass the reasoning parameters to the API.

TYPE: Reasoning | NotGiven

Defaults

conatus.models.open_ai.open_ai.DEFAULT_OPENAI_MODEL_NAME module-attribute

DEFAULT_OPENAI_MODEL_NAME = 'gpt-4o'

The default model name.

This is not a stable API. At any given release, the default model name may change based on OpenAI's latest model releases. If you need to specify a model name, please do so in the config argument.

conatus.models.open_ai.open_ai.DEFAULT_OPENAI_MAX_TOKENS module-attribute

DEFAULT_OPENAI_MAX_TOKENS = 4096

The default maximum number of tokens.

conatus.models.open_ai.open_ai.DEFAULT_OPENAI_TEMPERATURE module-attribute

DEFAULT_OPENAI_TEMPERATURE = NOT_GIVEN

The default temperature value.

By default, we do not set a temperature value, and let the API figure out the right temperature.

conatus.models.open_ai.open_ai.DEFAULT_OPENAI_TIMEOUT module-attribute

DEFAULT_OPENAI_TIMEOUT = NOT_GIVEN

The default timeout value.

By default, we do not set a timeout value, and defer to the API's default.

conatus.models.open_ai.open_ai.DEFAULT_OPENAI_USE_MOCK module-attribute

DEFAULT_OPENAI_USE_MOCK = False

The default use mock value.

Mocks

Developer only

The following classes are mocks (i.e. simulated responses) for the OpenAI model. They are not meant to be used directly, but they are useful for testing.

conatus.models.mocks.open_ai

Mocks for OpenAIModel.

MockChatCompletion

MockChatCompletion(content: str)

Bases: ChatCompletion

Mock OpenAI chat completion that can be used to test the OpenAI model.

The structure of this mock is very simple; it represents the simplest possible message that can be retrieved from the OpenAI API.

At initialization, the user prompt is passed in and the mock is initialized with a single assistant message with the user prompt as the content.

PARAMETER DESCRIPTION
content

The content of the mock chat completion.

TYPE: str
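
A minimal sketch of constructing the mock directly:

from conatus.models.mocks.open_ai import MockChatCompletion

mock = MockChatCompletion("Hello from the mock")
assert mock.choices[0].message.content == "Hello from the mock"
assert mock.usage.total_tokens == 203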

Source code in conatus/models/mocks/open_ai.py
def __init__(self, content: str) -> None:
    """Initialize the mock chat completion.

    Args:
        content: The content of the mock chat completion.
    """
    unix_timestamp = int(time.time())
    choices = [
        Choice(
            finish_reason="stop",
            index=0,
            message=ChatCompletionMessage(
                content=content,
                role="assistant",
                tool_calls=[
                    ChatCompletionMessageToolCall(
                        id="mock_tool_call_id",
                        function=FunctionDuringModelResponse(
                            name="mock_function_name",
                            arguments="mock_arguments",
                        ),
                        type="function",
                    )
                ],
            ),
        )
    ]
    super().__init__(
        choices=choices,
        id="mock_id",
        model="mock_model",
        object="chat.completion",
        created=unix_timestamp,
        usage=OpenAICompletionUsage(
            prompt_tokens=101, completion_tokens=102, total_tokens=203
        ),
    )

MockResponse

MockResponse(content: str)

Bases: Response

Mock OpenAI response that can be used to test the OpenAI model.

The structure follows that of the payloads sent by OpenAI's Responses API.

At initialization, the user prompt is passed in and the mock is initialized with a single assistant message with the user prompt as the content.

Source code in conatus/models/mocks/open_ai.py
def __init__(self, content: str) -> None:
    """Initialize the mock response."""
    super().__init__(
        id="mock_id",
        created_at=int(time.time()),
        model="mock_model",
        object="response",
        output=[
            ResponseOutputMessage(
                id="mock_id",
                content=[
                    ResponseOutputText(
                        annotations=[],
                        text=content,
                        type="output_text",
                    )
                ],
                role="assistant",
                status="completed",
                type="message",
            )
        ],
        parallel_tool_calls=True,
        tool_choice="auto",
        tools=[],
        usage=ResponseUsage(
            input_tokens=101,
            output_tokens=102,
            total_tokens=203,
            output_tokens_details=OutputTokensDetails(
                reasoning_tokens=101,
            ),
            input_tokens_details=InputTokensDetails(
                cached_tokens=101,
            ),
        ),
    )
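
A similar sketch for the Responses API shape, based on the constructor above:

from conatus.models.mocks.open_ai import MockResponse

response = MockResponse("Hello from the mock!")
# The content ends up as the text of the single output message.
assert response.output[0].content[0].text == "Hello from the mock!"
assert response.usage.total_tokens == 203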

create_mock_chat_completion_chunk_cc

create_mock_chat_completion_chunk_cc() -> (
    list[ChatCompletionChunk]
)

Create a list of mock chat completion chunks.

RETURNS DESCRIPTION
list[ChatCompletionChunk]

List of mock chat completion chunks.

Source code in conatus/models/mocks/open_ai.py
def create_mock_chat_completion_chunk_cc() -> list[ChatCompletionChunk]:
    """Create a mock chat completion chunk.

    Returns:
        List of mock chat completion chunks.
    """
    # This is a highly simplified mock of the OpenAI response stream.
    # In practice, it's much longer.
    return [
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content="Sure thing! \n Will first ",
                        function_call=None,
                        refusal=None,
                        role="assistant",
                        tool_calls=None,
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=(
                            "write a super long message that's just long enough"
                            " for testing purposes"
                        ),
                        function_call=None,
                        refusal=None,
                        role="assistant",
                        tool_calls=None,
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=" and then call a tool",
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=None,
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=".",
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=None,
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=None,
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=[
                            ChoiceDeltaToolCall(
                                index=0,
                                id="call_rz7VkEwkTXiaVBF0dF9pGdK8",
                                function=ChoiceDeltaToolCallFunction(
                                    arguments="", name="sum_of_two_numbers"
                                ),
                                type="function",
                            )
                        ],
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=None,
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=[
                            ChoiceDeltaToolCall(
                                index=0,
                                id=None,
                                function=ChoiceDeltaToolCallFunction(
                                    arguments='{"a":2222', name=None
                                ),
                                type=None,
                            )
                        ],
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=None,
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=[
                            ChoiceDeltaToolCall(
                                index=0,
                                id=None,
                                function=ChoiceDeltaToolCallFunction(
                                    arguments=',"b":3828', name=None
                                ),
                                type=None,
                            )
                        ],
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=None,
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=[
                            ChoiceDeltaToolCall(
                                index=0,
                                id=None,
                                function=ChoiceDeltaToolCallFunction(
                                    arguments="}", name=None
                                ),
                                type=None,
                            )
                        ],
                    ),
                    finish_reason=None,
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=None,
        ),
        ChatCompletionChunk(
            id="chatcmpl-AnUzv4eD7DR6eDgXjlJ900N9wy7K9",
            choices=[
                ChoiceChunk(
                    delta=ChoiceDelta(
                        content=None,
                        function_call=None,
                        refusal=None,
                        role=None,
                        tool_calls=None,
                    ),
                    finish_reason="tool_calls",
                    index=0,
                    logprobs=None,
                )
            ],
            created=1736360591,
            model="gpt-4o-2024-08-06",
            object="chat.completion.chunk",
            service_tier=None,
            system_fingerprint="fp_d28bcae782",
            usage=OpenAICompletionUsage(
                completion_tokens=22,
                prompt_tokens=351,
                total_tokens=373,
                completion_tokens_details=CompletionTokensDetails(
                    accepted_prediction_tokens=0,
                    audio_tokens=0,
                    reasoning_tokens=0,
                    rejected_prediction_tokens=0,
                ),
                prompt_tokens_details=PromptTokensDetails(
                    audio_tokens=0, cached_tokens=0
                ),
            ),
        ),
    ]
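
The mock splits a tool call's JSON arguments across several deltas, the way a real stream would. A minimal sketch of reassembling them, relying only on the chunk structure shown above:

from conatus.models.mocks.open_ai import create_mock_chat_completion_chunk_cc

chunks = create_mock_chat_completion_chunk_cc()

# Concatenate the tool-call argument fragments spread across the chunks.
fragments = [
    tool_call.function.arguments
    for chunk in chunks
    for choice in chunk.choices
    if choice.delta.tool_calls
    for tool_call in choice.delta.tool_calls
    if tool_call.function and tool_call.function.arguments
]
assert "".join(fragments) == '{"a":2222,"b":3828}'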

create_mock_response_stream_events

create_mock_response_stream_events() -> (
    list[ResponseStreamEvent]
)

Create a list of mock response stream events.

RETURNS DESCRIPTION
list[ResponseStreamEvent]

List of mock response stream events.

Source code in conatus/models/mocks/open_ai.py
def create_mock_response_stream_events() -> list[ResponseStreamEvent]:
    """Create a list of mock response stream events.

    Returns:
        List of mock response stream events.
    """
    first_tool_call_uid = "fc_67d23xx00001sxx_xx_4d"
    first_tool_call_call_id = "call_Fcs_Erx-i"
    first_tool_call_name = "get_weather"
    first_tool_call_arguments = (
        '{"location":"<<var:location>>","unit":"c","return":"weather_msg"}'
    )
    second_tool_call_uid = "fc_67d-23x-xx8-493-02as_xx_xx+_d"
    second_tool_call_call_id = "call_Fx_x-xc-1293=sss_ss-sxi"
    second_tool_call_name = "translate_to_language"
    second_tool_call_arguments = (
        '{"text":"<<var:weather_msg>>",'
        '"from_language":"en",'
        '"to_language":"es",'
        '"return":"translated_msg"}'
    )
    third_tool_call_uid = "fc_67d23b=b92840-1bbs=ssx-x=x=x=4d"
    third_tool_call_call_id = "call_F=c=sf0-we=r=xi"
    third_tool_call_name = "terminate"
    third_tool_call_arguments = (
        '{"success": True, "result": "<<var:translated_msg>>"}'
    )
    return [
        ResponseCreatedEvent(
            sequence_number=0,
            response=Response(
                id="resp_67d233de28748190a5bd09e5",
                created_at=1741829086.0,
                error=None,
                incomplete_details=None,
                instructions=None,
                metadata={},
                model="o1-2024-12-17",
                object="response",
                output=[],
                parallel_tool_calls=True,
                temperature=1.0,
                tool_choice="auto",
                tools=[
                    FunctionTool(
                        name="get_weather",
                        parameters={
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": (
                                        "The city and state e.g."
                                        " San Francisco, CA"
                                    ),
                                },
                                "unit": {"type": "string", "enum": ["c", "f"]},
                            },
                            "additionalProperties": False,
                            "required": ["location", "unit"],
                        },
                        strict=True,
                        type="function",
                        description="Determine weather in my location",
                    )
                ],
                top_p=1.0,
                max_output_tokens=10000,
                previous_response_id=None,
                reasoning=Reasoning(effort="medium", generate_summary=None),
                status="in_progress",
                text=ResponseTextConfig(format=ResponseFormatText(type="text")),
                truncation="disabled",
                usage=None,
                user=None,
            ),
            type="response.created",
        ),
        ResponseInProgressEvent(
            sequence_number=1,
            response=Response(
                id="resp_67d233de28748190a5bd09e5",
                created_at=1741829086.0,
                error=None,
                incomplete_details=None,
                instructions=None,
                metadata={},
                model="o1-2024-12-17",
                object="response",
                output=[],
                parallel_tool_calls=True,
                temperature=1.0,
                tool_choice="auto",
                tools=[
                    FunctionTool(
                        name="get_weather",
                        parameters={
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": (
                                        "The city and state"
                                        " e.g. San Francisco, CA"
                                    ),
                                },
                                "unit": {"type": "string", "enum": ["c", "f"]},
                            },
                            "additionalProperties": False,
                            "required": ["location", "unit"],
                        },
                        strict=True,
                        type="function",
                        description="Determine weather in my location",
                    )
                ],
                top_p=1.0,
                max_output_tokens=10000,
                previous_response_id=None,
                reasoning=Reasoning(effort="medium", generate_summary=None),
                status="in_progress",
                text=ResponseTextConfig(format=ResponseFormatText(type="text")),
                truncation="disabled",
                usage=None,
                user=None,
            ),
            type="response.in_progress",
        ),
        ResponseOutputItemAddedEvent(
            sequence_number=2,
            item=ResponseReasoningItem(
                id="rs_67d233e09e588190985186bfcdd1",
                summary=[],
                type="reasoning",
                status=None,
            ),
            output_index=0,
            type="response.output_item.added",
        ),
        ResponseReasoningSummaryPartAddedEvent(
            sequence_number=3,
            item_id="rs_67d233e09e588190985186bfcdd1",
            output_index=0,
            part=PartAdded(text="", type="summary_text"),
            summary_index=0,
            type="response.reasoning_summary_part.added",
        ),
        ResponseReasoningSummaryTextDeltaEvent(
            sequence_number=4,
            delta="**Explain",
            item_id="rs_67d233e09e588190985186bfcdd1",
            output_index=0,
            summary_index=0,
            type="response.reasoning_summary_text.delta",
        ),
        ResponseReasoningSummaryTextDeltaEvent(
            sequence_number=5,
            delta="ing ",
            item_id="rs_67d233e09e588190985186bfcdd1",
            output_index=0,
            summary_index=0,
            type="response.reasoning_summary_text.delta",
        ),
        ResponseReasoningSummaryTextDoneEvent(
            sequence_number=6,
            item_id="rs_67d233e09e588190985186bfcdd1",
            output_index=0,
            summary_index=0,
            text="**Explaining ",
            type="response.reasoning_summary_text.done",
        ),
        ResponseReasoningSummaryPartDoneEvent(
            sequence_number=7,
            item_id="rs_67d233e09e588190985186bfcdd1",
            output_index=0,
            part=PartDone(
                text="**Explaining ",
                type="summary_text",
            ),
            summary_index=0,
            type="response.reasoning_summary_part.done",
        ),
        ResponseOutputItemDoneEvent(
            sequence_number=8,
            item=ResponseReasoningItem(
                id="rs_67d233e09e588190985186bfcdd1",
                summary=[],
                type="reasoning",
                status=None,
            ),
            output_index=0,
            type="response.output_item.done",
        ),
        ResponseOutputItemAddedEvent(
            sequence_number=9,
            item=ResponseOutputMessage(
                id="msg_67d24d8abb1c8190bb67d4b",
                content=[],
                role="assistant",
                status="in_progress",
                type="message",
            ),
            output_index=1,
            type="response.output_item.added",
        ),
        ResponseContentPartAddedEvent(
            sequence_number=10,
            content_index=0,
            item_id="msg_67d24d8abb1c8190bb67d4b",
            output_index=1,
            part=ResponseOutputText(
                annotations=[], text="", type="output_text"
            ),
            type="response.content_part.added",
        ),
        ResponseTextDeltaEvent(
            sequence_number=11,
            content_index=0,
            delta="I'll",
            item_id="msg_67d24d8abb1c8190bb67d4b",
            output_index=1,
            type="response.output_text.delta",
        ),
        ResponseTextDeltaEvent(
            sequence_number=12,
            content_index=0,
            delta=" check the current weather for you in Panama.",
            item_id="msg_67d24d8abb1c8190bb67d4b",
            output_index=1,
            type="response.output_text.delta",
        ),
        ResponseTextDoneEvent(
            sequence_number=13,
            content_index=0,
            item_id="msg_67d24d8abb1c8190bb67d4b",
            output_index=1,
            text="I'll check the current weather for you in Panama.",
            type="response.output_text.done",
        ),
        ResponseContentPartDoneEvent(
            sequence_number=14,
            content_index=0,
            item_id="msg_67d24d8abb1c8190bb67d4b",
            output_index=1,
            part=ResponseOutputText(
                annotations=[],
                text="I'll check the current weather for you in Panama.",
                type="output_text",
            ),
            type="response.content_part.done",
        ),
        ResponseOutputItemDoneEvent(
            sequence_number=15,
            item=ResponseOutputMessage(
                id="msg_67d24d8abb1c8190bb67d4b",
                content=[
                    ResponseOutputText(
                        annotations=[],
                        text=(
                            "I'll check the current weather for you in Panama."
                        ),
                        type="output_text",
                    )
                ],
                role="assistant",
                status="completed",
                type="message",
            ),
            output_index=1,
            type="response.output_item.done",
        ),
        ResponseOutputItemAddedEvent(
            sequence_number=16,
            item=ResponseFunctionToolCall(
                id=first_tool_call_uid,
                arguments=first_tool_call_arguments,
                call_id=first_tool_call_call_id,
                name=first_tool_call_name,
                type="function_call",
                status="in_progress",
            ),
            output_index=2,
            type="response.output_item.added",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=17,
            delta=first_tool_call_arguments[:20],
            item_id=first_tool_call_uid,
            output_index=2,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=18,
            delta=first_tool_call_arguments[20:],
            item_id=first_tool_call_uid,
            output_index=2,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDoneEvent(
            sequence_number=19,
            arguments=first_tool_call_arguments,
            item_id=first_tool_call_uid,
            output_index=2,
            type="response.function_call_arguments.done",
        ),
        ResponseOutputItemDoneEvent(
            sequence_number=20,
            item=ResponseFunctionToolCall(
                id=first_tool_call_uid,
                arguments=first_tool_call_arguments,
                call_id=first_tool_call_call_id,
                name=first_tool_call_name,
                type="function_call",
                status="completed",
            ),
            output_index=2,
            type="response.output_item.done",
        ),
        ResponseOutputItemAddedEvent(
            sequence_number=21,
            item=ResponseFunctionToolCall(
                id=second_tool_call_uid,
                arguments=second_tool_call_arguments,
                call_id=second_tool_call_call_id,
                name=second_tool_call_name,
                type="function_call",
                status="in_progress",
            ),
            output_index=3,
            type="response.output_item.added",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=22,
            delta=second_tool_call_arguments[:20],
            item_id=second_tool_call_uid,
            output_index=3,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=23,
            delta=second_tool_call_arguments[20:],
            item_id=second_tool_call_uid,
            output_index=3,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDoneEvent(
            sequence_number=24,
            arguments=second_tool_call_arguments,
            item_id=second_tool_call_uid,
            output_index=3,
            type="response.function_call_arguments.done",
        ),
        ResponseOutputItemDoneEvent(
            sequence_number=25,
            item=ResponseFunctionToolCall(
                id=second_tool_call_uid,
                arguments=second_tool_call_arguments,
                call_id=second_tool_call_call_id,
                name=second_tool_call_name,
                type="function_call",
                status="completed",
            ),
            output_index=3,
            type="response.output_item.done",
        ),
        ResponseOutputItemAddedEvent(
            sequence_number=26,
            item=ResponseFunctionToolCall(
                id=third_tool_call_uid,
                arguments=third_tool_call_arguments,
                call_id=third_tool_call_call_id,
                name=third_tool_call_name,
                type="function_call",
                status="in_progress",
            ),
            output_index=4,
            type="response.output_item.added",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=27,
            delta=third_tool_call_arguments[:20],
            item_id=third_tool_call_uid,
            output_index=4,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDeltaEvent(
            sequence_number=28,
            delta=third_tool_call_arguments[20:],
            item_id=third_tool_call_uid,
            output_index=4,
            type="response.function_call_arguments.delta",
        ),
        ResponseFunctionCallArgumentsDoneEvent(
            sequence_number=29,
            arguments=third_tool_call_arguments,
            item_id=third_tool_call_uid,
            output_index=4,
            type="response.function_call_arguments.done",
        ),
        ResponseOutputItemDoneEvent(
            sequence_number=30,
            item=ResponseFunctionToolCall(
                id=third_tool_call_uid,
                arguments=third_tool_call_arguments,
                call_id=third_tool_call_call_id,
                name=third_tool_call_name,
                type="function_call",
                status="completed",
            ),
            output_index=4,
            type="response.output_item.done",
        ),
        ResponseCompletedEvent(
            sequence_number=31,
            response=Response(
                id="resp_67d233de28748190a5bd09e5",
                created_at=1741829000.0,
                error=None,
                incomplete_details=None,
                instructions=None,
                metadata={},
                model="o1-2024-12-17",
                object="response",
                output=[
                    ResponseReasoningItem(
                        id="rs_67d233e09e588190985186bfcdd1",
                        summary=[],
                        type="reasoning",
                        status=None,
                    ),
                    ResponseOutputMessage(
                        id="msg_67d24d8abb1c8190bb67d4b",
                        content=[
                            ResponseOutputText(
                                annotations=[],
                                text=(
                                    "I'll check the current weather for you in "
                                    "Panama."
                                ),
                                type="output_text",
                            )
                        ],
                        role="assistant",
                        status="completed",
                        type="message",
                    ),
                    ResponseFunctionToolCall(
                        id=first_tool_call_uid,
                        arguments=first_tool_call_arguments,
                        call_id=first_tool_call_call_id,
                        name=first_tool_call_name,
                        type="function_call",
                        status="completed",
                    ),
                    ResponseFunctionToolCall(
                        id=second_tool_call_uid,
                        arguments=second_tool_call_arguments,
                        call_id=second_tool_call_call_id,
                        name=second_tool_call_name,
                        type="function_call",
                    ),
                    ResponseFunctionToolCall(
                        id=third_tool_call_uid,
                        arguments=third_tool_call_arguments,
                        call_id=third_tool_call_call_id,
                        name=third_tool_call_name,
                        type="function_call",
                    ),
                ],
                parallel_tool_calls=True,
                temperature=1.0,
                tool_choice="auto",
                tools=[
                    FunctionTool(
                        name="get_weather",
                        parameters={
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": (
                                        "The city and state e.g. "
                                        "San Francisco, CA"
                                    ),
                                },
                                "unit": {"type": "string", "enum": ["c", "f"]},
                            },
                            "additionalProperties": False,
                            "required": ["location", "unit"],
                        },
                        strict=True,
                        type="function",
                        description="Determine weather in my location",
                    )
                ],
                top_p=1.0,
                max_output_tokens=10000,
                previous_response_id=None,
                reasoning=Reasoning(effort="medium", generate_summary=None),
                status="completed",
                text=ResponseTextConfig(format=ResponseFormatText(type="text")),
                truncation="disabled",
                usage=ResponseUsage(
                    input_tokens=140,
                    output_tokens=215,
                    output_tokens_details=OutputTokensDetails(
                        reasoning_tokens=192
                    ),
                    total_tokens=355,
                    input_tokens_details=InputTokensDetails(
                        cached_tokens=101,
                    ),
                ),
                user=None,
            ),
            type="response.completed",
        ),
    ]
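
A short sketch of inspecting the mocked stream, using only the events constructed above: the last event is the response.completed event, and its Response carries the three function calls.

from conatus.models.mocks.open_ai import create_mock_response_stream_events

events = create_mock_response_stream_events()

# The final event carries the fully assembled Response.
completed = events[-1]
assert completed.type == "response.completed"

tool_calls = [
    item
    for item in completed.response.output
    if item.type == "function_call"
]
assert [call.name for call in tool_calls] == [
    "get_weather",
    "translate_to_language",
    "terminate",
]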

create_mock_stream_cc async

create_mock_stream_cc(
    chunks: list[ChatCompletionChunk], delay: float = 0.1
) -> AsyncIterator[ChatCompletionChunk]

Create a mock stream of chat completion chunks.

PARAMETER DESCRIPTION
chunks

List of chunks to stream (typically ChatCompletionChunk objects)

TYPE: list[ChatCompletionChunk]

delay

Delay between chunks in seconds (default: 0.1)

TYPE: float DEFAULT: 0.1

YIELDS DESCRIPTION
AsyncIterator[ChatCompletionChunk]

AsyncIterator yielding the chunks with specified delay

Source code in conatus/models/mocks/open_ai.py
async def create_mock_stream_cc(
    chunks: list[ChatCompletionChunk], delay: float = 0.1
) -> AsyncIterator[ChatCompletionChunk]:
    """Create a mock stream of chat completion chunks.

    Args:
        chunks: List of chunks to stream (typically ChatCompletionChunk objects)
        delay: Delay between chunks in seconds (default: 0.1)

    Yields:
        AsyncIterator yielding the chunks with specified delay
    """
    for chunk in chunks:
        await asyncio.sleep(delay)
        yield chunk
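
Because this is an async generator, it can be iterated directly with async for. A minimal consumption sketch:

import asyncio

from conatus.models.mocks.open_ai import (
    create_mock_chat_completion_chunk_cc,
    create_mock_stream_cc,
)

async def consume() -> None:
    chunks = create_mock_chat_completion_chunk_cc()
    async for chunk in create_mock_stream_cc(chunks, delay=0.01):
        ...  # handle each ChatCompletionChunk as it "arrives"

asyncio.run(consume())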

create_mock_stream_response async

create_mock_stream_response(
    chunks: list[ResponseStreamEvent], delay: float = 0.1
) -> AsyncIterator[ResponseStreamEvent]

Create a mock stream of response events.

PARAMETER DESCRIPTION
chunks

List of response events to stream

TYPE: list[ResponseStreamEvent]

delay

Delay between chunks in seconds (default: 0.1)

TYPE: float DEFAULT: 0.1

YIELDS DESCRIPTION
AsyncIterator[ResponseStreamEvent]

AsyncIterator yielding the chunks with specified delay

Source code in conatus/models/mocks/open_ai.py
async def create_mock_stream_response(
    chunks: list[ResponseStreamEvent], delay: float = 0.1
) -> AsyncIterator[ResponseStreamEvent]:
    """Create a mock stream of response events.

    Args:
        chunks: List of response events to stream
        delay: Delay between chunks in seconds (default: 0.1)

    Yields:
        AsyncIterator yielding the chunks with specified delay
    """
    for chunk in chunks:
        await asyncio.sleep(delay)
        yield chunk

mock_stream_cc async

mock_stream_cc(
    delay: float = 0,
    chunks: list[ChatCompletionChunk] | None = None,
) -> AsyncIterator[ChatCompletionChunk]

Fake stream of OpenAI Chat Completion chunks.

PARAMETER DESCRIPTION
delay

Delay between chunks in seconds (default: 0)

TYPE: float DEFAULT: 0

chunks

List of chunks to stream. If not provided, the default mock chat completion chunks are used.

TYPE: list[ChatCompletionChunk] | None DEFAULT: None

RETURNS DESCRIPTION
AsyncIterator[ChatCompletionChunk]

AsyncIterator yielding the chunks with specified delay

Source code in conatus/models/mocks/open_ai.py
async def mock_stream_cc(
    delay: float = 0,
    chunks: list[ChatCompletionChunk] | None = None,
) -> AsyncIterator[ChatCompletionChunk]:
    """Fake stream of OpenAI Chat Completion chunks.

    Args:
        delay: Delay between chunks in seconds (default: 0)
        chunks: List of chunks to stream. If not provided, the default mock
            chat completion chunks are used.

    Returns:
        AsyncIterator yielding the chunks with specified delay
    """
    return create_mock_stream_cc(
        chunks or create_mock_chat_completion_chunk_cc(), delay
    )
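
Unlike create_mock_stream_cc, this is a coroutine that returns the async iterator, so it is awaited first and then iterated. A sketch:

import asyncio

from conatus.models.mocks.open_ai import mock_stream_cc

async def consume() -> None:
    stream = await mock_stream_cc()  # default chunks, no delay
    async for chunk in stream:
        ...  # handle each ChatCompletionChunk

asyncio.run(consume())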

mock_stream_response async

mock_stream_response(
    delay: float = 0,
    chunks: list[ResponseStreamEvent] | None = None,
) -> AsyncIterator[ResponseStreamEvent]

Fake stream of OpenAI response events.

PARAMETER DESCRIPTION
delay

Delay between chunks in seconds (default: 0)

TYPE: float DEFAULT: 0

chunks

List of response events to stream. If not provided, the default mock response stream events are used.

TYPE: list[ResponseStreamEvent] | None DEFAULT: None

RETURNS DESCRIPTION
AsyncIterator[ResponseStreamEvent]

AsyncIterator yielding the chunks with specified delay

Source code in conatus/models/mocks/open_ai.py
async def mock_stream_response(
    delay: float = 0,
    chunks: list[ResponseStreamEvent] | None = None,
) -> AsyncIterator[ResponseStreamEvent]:
    """Fake stream of OpenAI response events.

    Args:
        delay: Delay between chunks in seconds (default: 0)
        chunks: List of response events to stream. If not provided, the default
            mock response stream events are used.

    Returns:
        AsyncIterator yielding the chunks with specified delay
    """
    return create_mock_stream_response(
        chunks or create_mock_response_stream_events(), delay
    )

mock_cc async

mock_cc(
    content: str | None = None,
    message: ChatCompletion | None = None,
) -> ChatCompletion

Mock OpenAI chat completion.

By default, you get a mock chat completion with a simple message. If you want to get a message with a specific content, you can pass a content argument. You can also pass a message argument to get a message with a specific structure.

PARAMETER DESCRIPTION
content

The content of the mock chat completion.

TYPE: str | None DEFAULT: None

message

The message to return.

TYPE: ChatCompletion | None DEFAULT: None

RETURNS DESCRIPTION
ChatCompletion

The mock chat completion.

Source code in conatus/models/mocks/open_ai.py
async def mock_cc(
    content: str | None = None, message: ChatCompletion | None = None
) -> ChatCompletion:
    """Mock OpenAI chat completion.

    By default, you get a mock chat completion with a simple message. If you
    want to get a message with a specific content, you can pass a `content`
    argument. You can also pass a `message` argument to get a message with a
    specific structure.

    Args:
        content: The content of the mock chat completion.
        message: The message to return.

    Returns:
        The mock chat completion.
    """
    if message is not None:  # pragma: no branch
        return message  # pragma: no cover
    if content is None:  # pragma: no branch
        content = "Hello, world!"  # pragma: no cover
    return MockChatCompletion(content)
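
A minimal usage sketch, assuming only the signature above:

import asyncio

from conatus.models.mocks.open_ai import mock_cc

completion = asyncio.run(mock_cc("Hi there!"))
assert completion.choices[0].message.content == "Hi there!"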

mock_response async

mock_response(
    content: str | None = None,
    message: Response | None = None,
) -> Response

Mock OpenAI response.

By default, you get a mock response with a simple message. If you want to get a message with a specific content, you can pass a content argument. You can also pass a message argument to get a message with a specific structure.

PARAMETER DESCRIPTION
content

The content of the mock response.

TYPE: str | None DEFAULT: None

message

The message to return.

TYPE: Response | None DEFAULT: None

RETURNS DESCRIPTION
Response

The mock response.

Source code in conatus/models/mocks/open_ai.py
async def mock_response(
    content: str | None = None, message: Response | None = None
) -> Response:
    """Mock OpenAI response.

    By default, you get a mock response with a simple message. If you want to
    get a message with a specific content, you can pass a `content` argument.
    You can also pass a `message` argument to get a message with a specific
    structure.

    Args:
        content: The content of the mock response.
        message: The message to return.

    Returns:
        The mock response.
    """
    if message is not None:  # pragma: no branch
        return message  # pragma: no cover
    if content is None:  # pragma: no branch
        content = "Hello, world!"  # pragma: no cover
    return MockResponse(content)
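
And a sketch of the message pass-through behaviour: a prebuilt Response is returned as-is.

import asyncio

from conatus.models.mocks.open_ai import MockResponse, mock_response

prebuilt = MockResponse("Already built")
assert asyncio.run(mock_response(message=prebuilt)) is prebuilt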

  1. No, the Assistants API does not count 😇.