Base AI Model

For developers only

This page is addressed to developers who want to extend the BaseAIModel class.

End users should use classes such as OpenAIModel.

More information in the 'How-to' section

We also have a How-to section on adding a new AI provider, which provides a step-by-step guide.

The BaseAIModel class is an abstract class that defines the interface for all AI models. It is meant to be a wrapper around an AI model's client and its call function.

Note

Here, "call function" means the method, generally provided by the AI provider's SDK, that generates a response from a prompt. The actual name of that function might vary from provider to provider ("create", "generate", etc.)

One instantiation per model configuration

There should be one instance of the class per model configuration. In multi-agent scenarios, you might have multiple instances of the class, but each one should correspond to a different model configuration.

Therefore, you should design your classes so that instantiating a new model with a given configuration is fast. For example, it's probably a bad idea to create a new client for every instance of the class, since that generally entails network calls.

We provide the convenience method with_config to help you create a new instance of the class with a given configuration, so that you don't have to set up the client again. Once again, this is useful in multi-agent systems.
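
For example, a multi-agent setup can derive several models from a single instance while reusing its client. A minimal sketch (the model names are just examples):

from conatus.models import OpenAIModel

base_model = OpenAIModel()  # the client is set up once, here

# Each agent gets its own configuration; the client is shared.
planner = base_model.with_config({"model_name": "gpt-4o"})
writer = base_model.with_config({"model_name": "gpt-4o-mini"})

assert planner.client == base_model.client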

Initialization

The __init__ method is already implemented, and offers a few features:

  • It calls ModelConfig.apply_config, which automatically reconciles the configuration passed by the user with the default configuration, obtained by calling the default_config method (see below).
  • It initializes the client, either by using the client passed by the user, or by calling the default_client method. That method, which needs to be implemented by subclasses, is responsible for creating the client. If your subclass implements the api_key_env_variable class attribute, the default __init__ method will use the get_api_key method to retrieve the API key from the environment variables.
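
Here is a minimal sketch of a subclass relying on these features (the AcmeClient SDK is hypothetical, and the four call methods are omitted):

from conatus.models.base import BaseAIModel
from conatus.models.config import ModelConfig

class AcmeAIModel(BaseAIModel):
    api_key_env_variable = "ACME_API_KEY"  # used by get_api_key()

    def default_client(
        self, model_config: ModelConfig, api_key: str | None, **kwargs
    ):
        # Hypothetical provider SDK; replace with your provider's client.
        from acme_sdk import AcmeClient

        return AcmeClient(api_key=api_key, **kwargs)

    # ... plus call, acall, call_stream and acall_stream (see below).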

Configuration

We expect the config attribute of the class to be of type ModelConfig.

Creating a custom configuration

You should create a subclass of ModelConfig to support your specific needs (e.g. adding new arguments if your AI provider supports specific arguments in its API). This is one of the reasons why BaseAIModel is a generic class: if you add configuration arguments, you can be sure you can retrieve them within the class without issues.

For instance, OpenAIModelConfig has a timeout argument, which is not present in ModelConfig.
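
As a sketch, a provider-specific configuration and the model class that uses it might look like this (the field syntax assumes ModelConfig supports dataclass-style subclassing; check the base class for the exact mechanism):

from conatus.models.base import BaseAIModel
from conatus.models.config import ModelConfig

class AcmeModelConfig(ModelConfig):
    # Hypothetical provider-specific argument, analogous to
    # OpenAIModelConfig's timeout.
    request_timeout: float | None = None

class AcmeAIModel(BaseAIModel):
    # default_config() will instantiate this class.
    model_config_cls = AcmeModelConfig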

Configuring an AI model from a user perspective

Here's how you should handle user-provided configuration values:

  • The configuration might be passed either to __init__ or to the call method. In either case, you must try to honor its values. If they differ, the values passed to the call method override the ones passed to __init__.
  • Users should be able to pass a dictionary to the call method, whose values override the ones in the configuration. In fact, users should generally pass a dictionary to the call method: if they pass a full ModelConfig instead, any keyword argument that is not explicitly set will be reset to its default value, which might not be the intended effect.

We provide a convenience method ModelConfig.apply_config to help you reconcile the default configuration with the user-provided configuration.
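
To illustrate the precedence rules (a sketch; temperature is an assumed configuration key):

from conatus.models import OpenAIModel

model = OpenAIModel(model_config={"model_name": "gpt-4o"})

# The dictionary passed at call time overrides only the keys it
# contains; the model name set at __init__ is kept for this call.
response = model.simple_call(
    "Summarize the plot of Hamlet in one sentence.",
    model_config={"temperature": 0.0},
)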

Passing the configuration to the AI model

One thing you will want to do with ModelConfig is pass its values directly to the AI model through its call function. For this, you can use the to_kwargs method, which returns the configuration as a dictionary that can then be passed as keyword arguments in your call method.

Nevertheless, some attributes in ModelConfig are not meant to be passed in the call method. For instance:

  • The api_key attribute is not meant to be passed in the call method, because it's meant to be set in the client.
  • The stdout_mode attribute is not meant to be passed in the call method, because it's meant to be set in the printing mix-in.

In order to distinguish between the two, you can define a TypedDict that matches the arguments expected by the provider in the call method, and then use it as the specification in the to_kwargs method.

One example of this is OpenAIModelCCSpec, which ensures that only the arguments expected by the OpenAI API are passed in the call method.
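
A sketch of this pattern (the exact to_kwargs signature is an assumption, and AcmeCallSpec and its fields are hypothetical):

from typing import TypedDict

from conatus.models.config import ModelConfig

class AcmeCallSpec(TypedDict, total=False):
    # Only the arguments the provider's call function accepts.
    model: str
    temperature: float
    max_tokens: int

def provider_kwargs(config: ModelConfig) -> AcmeCallSpec:
    # Attributes such as api_key or stdout_mode fall outside the spec,
    # so they are not forwarded to the provider.
    return config.to_kwargs(AcmeCallSpec)  # assumed signature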

Mocks

You can set the use_mock attribute to True to use a mock client. We always try to set it to True during testing. One mechanism for doing this is to set the ALWAYS_USE_MOCK environment variable to true. You should try to honor this.
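
For example, a test suite might do the following (a sketch; note that the environment variable must be set before the model is instantiated):

import os

from conatus.models import OpenAIModel

os.environ["ALWAYS_USE_MOCK"] = "true"

model = OpenAIModel()
# With the mock in place, no real client is created.
assert model.client is None
assert model.config.use_mock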

The call methods

Any subclass needs to implement four methods that wrap around the underlying call function of the AI model. These are:

  • call
  • acall
  • call_stream
  • acall_stream

All of these methods need to return an AIResponse object.

The simple_call method is a convenience method for the call method.

Converting messages and tools to the AI model format

The BaseAIModel class provides two methods to convert the messages and tools to the AI model format:

  • convert_messages_to_ai_model_format
  • convert_tools_to_ai_model_format

These methods are meant to be overridden by subclasses.

In turn, these methods are leveraged by the prepare_call_args method. This method can be used as is, as long as you override the methods above.
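
Putting it together, a subclass's call implementation might be sketched as follows (the client's generate method and the _convert_response helper are hypothetical):

# Inside your BaseAIModel subclass:
def call(self, prompt, model_config=None, **kwargs):
    call_args = self.prepare_call_args(prompt, model_config)
    raw = self.client.generate(  # hypothetical SDK call function
        system=call_args.system_message,
        messages=call_args.messages,
        tools=call_args.tools,
        # In practice, pass a provider spec to to_kwargs (see above).
        **call_args.model_config.to_kwargs(),
    )
    # Map the provider's response to the standardized AIResponse.
    return self._convert_response(raw)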

conatus.models.base.BaseAIModel

BaseAIModel(
    model_config: ModelConfig | ModelConfigTD | None = None,
    client: ClientType = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None,
    **kwargs: ParamType
)

Bases: ABC

Base abstract class for all AI models.

The __init__ method can be called by subclasses, but it is not mandatory.

PARAMETER DESCRIPTION
model_config

The configuration for the AI model. This can be either a ModelConfig object, or a dictionary. It will be reconciled with the default configuration, with values from the user-provided config taking precedence.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

client

The client to use. If not provided, will be created using default_client().

TYPE: ClientType DEFAULT: None

get_default_client_if_not_given

Whether to initialize the default client if not provided. If you want to implement your own client setup logic, you should always set this to False.

TYPE: bool DEFAULT: True

api_key

The API key to use. If not provided, it will be read from the self.api_key_env_variable environment variable. Note that if you provide a client, your API key will be discarded.

TYPE: str | None DEFAULT: None

model_name

The name of the model to use. If provided, overrides any model name in the config.

TYPE: str | None DEFAULT: None

model_type

The type of model to use. This is used to determine the model name only if model_name is not provided. If provided, overrides any model type in the config.

TYPE: ModelType | None DEFAULT: None

**kwargs

Additional keyword arguments passed to default_client() when creating a new client.

TYPE: ParamType DEFAULT: {}

Source code in conatus/models/base.py
def __init__(
    self,
    model_config: ModelConfig | ModelConfigTD | None = None,
    client: ClientType = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None,
    **kwargs: ParamType,
) -> None:
    """Initialize the AI model.

    This method can be called by subclasses, but it is not mandatory.

    Args:
        model_config: The configuration for the AI model. This can be
            either a [`ModelConfig`][conatus.models.base.ModelConfig]
            object, or a dictionary. It will be reconciled with the
            default configuration, with values from the user-provided
            config taking precedence.
        client: The client to use. If not provided, will be created using
            default_client().
        get_default_client_if_not_given: Whether to initialize
            the default client if not provided. If you want to implement
            your own client setup logic, you should always set this to
            `False`.
        api_key: The API key to use. If not provided, it will be read from
            the [`self.api_key_env_variable`
            ][conatus.models.base.BaseAIModel.api_key_env_variable]
            environment variable. Note that if you provide a client, your
            API key will be discarded.
        model_name: The name of the model to use. If provided, overrides
            any model name in the config.
        model_type: The type of model to use. This is used to determine the
            model name only if `model_name` is not provided. If provided,
            overrides any model type in the config.
        **kwargs: Additional keyword arguments passed to default_client()
            when creating a new client.
    """
    # In case the subclass does not define a model_config_cls, we use
    # the default ModelConfig class.
    if getattr(self, "model_config_cls", None) is None:
        self.model_config_cls = ModelConfig

    self.model_config = self.default_config().apply_config(model_config)

    if model_name is not None:
        self.model_config.model_name = model_name
    elif model_type is not None:
        maybe_default_name = self.default_model_name(model_type)
        if maybe_default_name is not None:
            self.model_config.model_name = maybe_default_name.split(":")[1]
    if api_key is not None:
        self.model_config.api_key = api_key
    if (
        os.environ.get("ALWAYS_USE_MOCK", "false").lower() == "true"
        or self.model_config.use_mock
    ):
        self.model_config.use_mock = True
        logger.info("Using mock client for %s", self.__class__.__name__)
        self.client = None
        return

    if get_default_client_if_not_given:
        self.client = client or self.default_client(
            model_config=self.model_config,
            api_key=self.model_config.api_key or self.get_api_key(),
            **kwargs,
        )
    else:
        self.client = client

model_config_cls instance-attribute

model_config_cls: type[ModelConfig]

The class of the model configuration.

client instance-attribute

client: ClientType

The client for the AI model.

Don't be fooled by ClientType: the client can be of any type.

api_key_env_variable instance-attribute

api_key_env_variable: str

The environment variable that contains the API key.

provider instance-attribute

provider: ProviderName

The provider of the AI model.

model_config instance-attribute

model_config: ModelConfig = apply_config(model_config)

The configuration for the AI model.

config property

config: ModelConfig

The configuration for the model.

This is a convenience property for the model_config attribute.

with_config

with_config(
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False
) -> Self

Return a new instance of the model with the given configuration.

This is useful for quickly creating a new model without having to instantiate a new client.

from conatus.models import OpenAIModel
from conatus.models.config import ModelConfig

model = OpenAIModel()

model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

# Note that this also works if you pass a dictionary.
model_with_config = model.with_config({"model_name": "gpt-4o"})

assert model_with_config.config.model_name == "gpt-4o"
assert model_with_config.client == model.client
PARAMETER DESCRIPTION
model_config

The configuration for the new model.

TYPE: ModelConfig | ModelConfigTD | None

ignore_current_config

Whether to ignore the current configuration. If True, the new configuration will replace the current configuration. If False, the new configuration will be merged with the current configuration.

TYPE: bool DEFAULT: False

inplace

Whether to modify the current instance in place. If True, the current instance will be modified in place. If False, a new instance will be returned.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Self

A new instance of the model with the given configuration.

Source code in conatus/models/base.py
def with_config(
    self,
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False,
) -> Self:
    """Return a new instance of the model with the given configuration.

    This is useful for quickly creating a new model without having to
    instantiate a new client.

    ```python
    from conatus.models import OpenAIModel
    from conatus.models.config import ModelConfig

    model = OpenAIModel()

    model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

    # Note that this also works if you pass a dictionary.
    model_with_config = model.with_config({"model_name": "gpt-4o"})

    assert model_with_config.config.model_name == "gpt-4o"
    assert model_with_config.client == model.client
    ```

    Args:
        model_config: The configuration for the new model.
        ignore_current_config: Whether to ignore the current configuration.
            If `True`, the new configuration will replace the current
            configuration. If `False`, the new configuration will be merged
            with the current configuration.
        inplace: Whether to modify the current instance in place. If `True`,
            the current instance will be modified in place. If `False`, a
            new instance will be returned.

    Returns:
        A new instance of the model with the given configuration.
    """
    new_config = (
        self.model_config.apply_config(
            new_config=model_config,
            inplace=False,
        )
        if not ignore_current_config
        else type(self.model_config).from_dict_instance_or_none(
            model_config
        )
    )
    if inplace:
        self.model_config = new_config
        return self
    return type(self)(
        model_config=new_config,
        client=self.client,
    )

default_config

default_config() -> ModelConfig

Return the default configuration for the model.

This method is meant to be overridden by subclasses, but it is not mandatory; this is just a useful function for __init__.

This is also an alternative to hard-coding the configuration in the class definition, which is generally frowned upon because a dictionary is a mutable object.

Source code in conatus/models/base.py
def default_config(self) -> ModelConfig:  # pragma: no cover
    """Return the default configuration for the model.

    This method is meant to be overridden by subclasses, but it is not
    mandatory; this is just a useful function for `__init__`.

    This is also an alternative to hard-coding the configuration in the
    class definition, which is generally frowned upon because a dictionary
    is a mutable object.
    """
    # We keep 'self' in this method to make sure that subclasses
    # can use it if needed.
    _ = self
    return self.model_config_cls()

default_model_name classmethod

default_model_name(
    model_type: ModelType | None,
) -> str | None

Get the default model name for the model.

By default, this method returns None, which means that the model will use the default model name for the provider. If you want to customize behavior (e.g. so that the default model name is different for different types of models), you can override this method.

PARAMETER DESCRIPTION
model_type

The type of model to use.

TYPE: ModelType | None

RETURNS DESCRIPTION
str | None

The default model name for the model.

Source code in conatus/models/base.py
@classmethod
def default_model_name(cls, model_type: ModelType | None) -> str | None:
    """Get the default model name for the model.

    By default, this method returns `None`, which means that the model
    will use the default model name for the provider. If you want to
    customize behavior (e.g. so that the default model name is different
    for different types of models), you can override this method.

    Args:
        model_type: The type of model to use.

    Returns:
        The default model name for the model.
    """
    _ = model_type  # pragma: no cover
    return None  # pragma: no cover

get_api_key

get_api_key() -> str

Get the API key for the model.

This function should be implemented to retrieve environment variables.

RETURNS DESCRIPTION
str

The API key.

RAISES DESCRIPTION
AIModelAPIKeyMissingError

If the API key is not found in the environment variables.

ValueError

If the API key is not set in the class attribute.

Source code in conatus/models/base.py
def get_api_key(self) -> str:
    """Get the API key for the model.

    This function should be implemented to retrieve environment variables.

    Returns:
        The API key.

    Raises:
        AIModelAPIKeyMissingError: If the API key is not found in the
            environment variables.
        ValueError: If the API key is not set in the class attribute.
    """
    do_load_dotenv = (
        os.environ.get("TEST_DO_NOT_LOAD_DOTENV", "false").lower() != "true"
    )
    if do_load_dotenv and (
        "PYTEST_CURRENT_TEST" not in os.environ
    ):  # pragma: no branch
        _ = load_dotenv()  # pragma: no cover
    if getattr(self, "api_key_env_variable", None) is None:
        msg = (
            "You need to set the `api_key_env_variable` class attribute "
            "in the subclass.\n"
        )
        raise ValueError(msg)
    if self.api_key_env_variable not in os.environ:
        msg = (
            f"You need to set the {self.api_key_env_variable} "
            "environment variable.\n"
        )
        raise AIModelAPIKeyMissingError(msg)

    return os.environ[self.api_key_env_variable]

respawn_client

respawn_client() -> None

Respawn the client.

This method is used to respawn the client. It is mostly used so that we can refresh the client, which might be associated with an incompatible event loop.

Source code in conatus/models/base.py
def respawn_client(self) -> None:
    """Respawn the client.

    This method is used to respawn the client. It is mostly used so that
    we can refresh the client, which might be associated with an
    incompatible event loop.
    """
    with contextlib.suppress(RuntimeError):
        del self.client
    # We only cover this part in testing
    if (
        os.environ.get("ALWAYS_USE_MOCK", "false").lower() == "true"
        or self.model_config.use_mock
    ):  # pragma: no branch
        self.model_config.use_mock = True
        logger.info("Using mock client for %s", self.__class__.__name__)
        self.client = None
        return
    self.client = self.default_client(  # pragma: no cover
        model_config=self.model_config,
        api_key=self.model_config.api_key or self.get_api_key(),
    )

default_client abstractmethod

default_client(
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType
) -> ParamType

Return the default client for the model.

This is a convenience method used by __init__ if the user does not provide a client.

PARAMETER DESCRIPTION
model_config

The configuration for the model.

TYPE: ModelConfig

api_key

The API key for the model.

TYPE: str | None

**kwargs

Additional keyword arguments.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
ParamType

The default client for the model.

Source code in conatus/models/base.py
@abstractmethod
def default_client(
    self,
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType,
) -> ParamType:  # pragma: no cover
    """Return the default client for the model.

    This is a convenience method used by `__init__` if the user does not
    provide a client.

    Args:
        model_config: The configuration for the model.
        api_key: The API key for the model.
        **kwargs: Additional keyword arguments.

    Returns:
        The default client for the model.
    """
    raise NotImplementedError

call abstractmethod

call(
    prompt: AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse
call(
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]
call(
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Standardized call to the AI model.

The input is an AIPrompt object, and the output is an AIResponse object. We use these standardized inputs and outputs to make it easier to swap between AI models.

If you want to implement this method, you should read carefully the AIPrompt and AIResponse classes, because you will have to create the mapping from the AI provider's custom data structures to the standardized data structures.

PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information ( e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information ( e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called for each chunk of the response, and figures it out on the backend.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional keyword arguments.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the AI model.

Source code in conatus/models/base.py
@abstractmethod
def call(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Standardized call to the AI model.

    The input is a [`AIPrompt`
    ][conatus.models.inputs_outputs.prompt.AIPrompt] object,
    and the output is a [`AIResponse`
    ][conatus.models.inputs_outputs.response.AIResponse] object. We use
    these standardized inputs and outputs to make it easier to swap
    between AI models.

    If you want to implement this method, you should read carefully the
    [`AIPrompt`][conatus.models.inputs_outputs.prompt.AIPrompt] and
    [`AIResponse`][conatus.models.inputs_outputs.response.AIResponse]
    classes, because you will have to create the mapping between the custom
    data structures of the AI provider to the standardized data structures.

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model. Passing a
            dictionary is recommended, so that users don't unintentionally
            re-establish default values.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string. Note that this callback is called for each chunk of the
            response, and figures it out on the backend.
        **kwargs: Additional keyword arguments.

    Returns:
        The response from the AI model.
    """
    raise NotImplementedError  # pragma: no cover

acall abstractmethod async

acall(
    prompt: AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse
acall(
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]
acall(
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType] | AIResponse

Call the AI model asynchronously.

See call for more details.

PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: AIPrompt[OutputSchemaType] | AIPrompt

model_config

The configuration for the AI model.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information ( e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback

A callback for debugging purposes. This callback will be called with the response information ( e.g. the response, the model name, the usage, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional keyword arguments.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType] | AIResponse

The response from the AI model.

Source code in conatus/models/base.py
@abstractmethod
async def acall(
    self,
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType] | AIResponse:  # pragma: no cover
    """Call the AI model asynchronously.

    See [`call`][conatus.models.base.BaseAIModel.call] for more details.

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback: A callback for debugging purposes.
            This callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string.
        **kwargs: Additional keyword arguments.

    Returns:
        The response from the AI model.
    """
    raise NotImplementedError

call_stream abstractmethod

call_stream(
    prompt: AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse
call_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]
call_stream(
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType] | AIResponse

Call the AI model and stream the response.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: AIPrompt[OutputSchemaType] | AIPrompt

model_config

The configuration for the AI model.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information ( e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information ( e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called for each chunk of the response, and figures it out on the backend.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional keyword arguments.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType] | AIResponse

The response from the AI model.

Source code in conatus/models/base.py
@abstractmethod
def call_stream(
    self,
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType] | AIResponse:
    """Call the AI model and stream the response.

    This method is meant to be overridden by subclasses.

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string. Note that this callback is called for each chunk of the
            response, and figures it out on the backend.
        **kwargs: Additional keyword arguments.

    Returns:
        The response from the AI model.
    """
    raise NotImplementedError  # pragma: no cover

acall_stream abstractmethod async

acall_stream(
    prompt: AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse
acall_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]
acall_stream(
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType] | AIResponse

Call the AI model and stream the response.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: AIPrompt[OutputSchemaType] | AIPrompt

model_config

The configuration for the AI model.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information ( e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information ( e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called for each chunk of the response, and figures it out on the backend.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional keyword arguments.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType] | AIResponse

The response from the AI model.

Source code in conatus/models/base.py
@abstractmethod
async def acall_stream(
    self,
    prompt: AIPrompt[OutputSchemaType] | AIPrompt,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType] | AIResponse:  # pragma: no cover
    """Call the AI model and stream the response.

    This method is meant to be overridden by subclasses.

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string. Note that this callback is called for each chunk of the
            response, and figures it out on the backend.
        **kwargs: Additional keyword arguments.

    Returns:
        The response from the AI model.
    """
    raise NotImplementedError  # pragma: no cover

prepare_call_args

prepare_call_args(
    prompt: AIPrompt,
    user_provided_config: (
        ModelConfig | ModelConfigTD | None
    ) = None,
    *,
    computer_use_mode: bool = False,
    previous_messages_id: str | None = None
) -> AIModelCallArgs

Prepare the arguments for the call to the AI model.

This method can be used as is, as long as you override the methods convert_messages_to_ai_model_format and convert_tools_to_ai_model_format.

If the user provides a dictionary, we ensure that the values of self.config are always honored, and that only the values provided by the user are used.

PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: AIPrompt

user_provided_config

The configuration for the AI model.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

computer_use_mode

Whether to use the computer use mode.

TYPE: bool DEFAULT: False

previous_messages_id

The ID of the last response. If None, we will not pass the previous_messages_id argument to the AI model, and we will pass both new and previous messages to the model. If not None, we will pass only the new messages to the model, and we will use the previous_messages_id to identify the previous messages.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
AIModelCallArgs

The call arguments. See AIModelCallArgs for more details about the structure of the returned object.

Source code in conatus/models/base.py
def prepare_call_args(
    self,
    prompt: AIPrompt,
    user_provided_config: ModelConfig | ModelConfigTD | None = None,
    *,
    computer_use_mode: bool = False,
    previous_messages_id: str | None = None,
) -> AIModelCallArgs:
    """Prepare the arguments for the call to the AI model.

    This method can be used as is, as long as you override the methods
    [`convert_messages_to_ai_model_format`
    ][conatus.models.base.BaseAIModel.convert_messages_to_ai_model_format]
    and [`convert_tools_to_ai_model_format`
    ][conatus.models.base.BaseAIModel.convert_tools_to_ai_model_format].

    If the user provides a dictionary, we ensure that the values of
    [`self.config`][conatus.models.base.BaseAIModel.config] are always
    honored, and that only the values provided by the user are used.

    Args:
        prompt: The prompt to send to the AI model.
        user_provided_config: The configuration for the AI model.
        computer_use_mode: Whether to use the computer use mode.
        previous_messages_id: The ID of the last response. If `None`, we
            will not pass the `previous_messages_id` argument to the AI
            model, and we will pass both new and previous messages to the
            model. If not `None`, we will pass only the new messages to the
            model, and we will use the `previous_messages_id` to identify
            the previous messages.

    Returns:
        The call arguments. See [`AIModelCallArgs`
        ][conatus.models.base.AIModelCallArgs] for more details about the
        structure of the returned object.
    """
    model_config = self.model_config.apply_config(
        new_config=user_provided_config,
        inplace=False,
    )
    model_config.computer_use_mode = computer_use_mode
    if previous_messages_id is not None:
        model_config.only_pass_new_messages = True
        model_config.previous_messages_id = previous_messages_id
    else:
        model_config.only_pass_new_messages = False
        model_config.previous_messages_id = CTUS_NOT_GIVEN

    system_message = (
        self.convert_system_message_to_ai_model_format(
            prompt.system_message, model_config
        )
        if prompt.system_message is not None
        else None
    )
    messages = self.convert_messages_to_ai_model_format(
        prompt,
        model_config,
        only_new_messages=model_config.only_pass_new_messages,
    )
    tools = self.convert_tools_to_ai_model_format(prompt, model_config)
    output_schema, conversion_was_necessary = (
        self.convert_output_schema_to_ai_model_format(prompt, model_config)
    )
    return AIModelCallArgs(
        model_config=model_config,
        system_message=system_message,
        messages=messages,
        tools=tools,
        output_schema=output_schema,
        output_schema_was_converted_to_item_object=conversion_was_necessary,
    )

convert_messages_to_ai_model_format

convert_messages_to_ai_model_format(
    prompt: AIPrompt,
    config: ModelConfig,
    *,
    only_new_messages: bool = False
) -> Iterable[MessageType]

Convert the messages to the AI model format.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
prompt

The prompt to convert. We pass the entire prompt in case processing on the prompt is needed (e.g. removing all the image content parts).

TYPE: AIPrompt

config

The configuration for the AI model.

TYPE: ModelConfig

only_new_messages

Whether to only convert the new messages. If True, we will only convert the new messages, and we will not include the previous messages. This should be only used when the previous_messages_id is not None.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Iterable[MessageType]

The converted messages. (Don't be fooled by the MessageType -- the messages can be of any type.)

Source code in conatus/models/base.py
def convert_messages_to_ai_model_format(
    self,
    prompt: AIPrompt,
    config: ModelConfig,
    *,
    only_new_messages: bool = False,
) -> Iterable[MessageType]:  # pragma: no cover
    """Convert the messages to the AI model format.

    This method is meant to be overridden by subclasses.

    Args:
        prompt: The prompt to convert. We pass the entire prompt in case
            processing on the prompt is needed (e.g. removing all the
            image content parts).
        config: The configuration for the AI model.
        only_new_messages: Whether to only convert the new messages. If
            `True`, we will only convert the new messages, and we will
            not include the previous messages. This should be only used
            when the `previous_messages_id` is not `None`.

    Returns:
        The converted messages. (Don't be fooled by the [`MessageType`
            ][conatus.models.base.MessageType] -- the messages can
            be of any type.)
    """
    raise NotImplementedError

convert_tools_to_ai_model_format

convert_tools_to_ai_model_format(
    prompt: AIPrompt, config: ModelConfig
) -> Iterable[ToolType] | None

Convert the tools to the AI model format.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
prompt

The prompt to convert. We pass the entire prompt in case processing on the tools is needed (e.g. removing certain incompatible tools).

TYPE: AIPrompt

config

The configuration for the AI model.

TYPE: ModelConfig

RETURNS DESCRIPTION
Iterable[ToolType] | None

The converted tools. (Don't be fooled by the ToolType -- the tools can be of any type.)

Source code in conatus/models/base.py
def convert_tools_to_ai_model_format(
    self, prompt: AIPrompt, config: ModelConfig
) -> Iterable[ToolType] | None:  # pragma: no cover
    """Convert the tools to the AI model format.

    This method is meant to be overridden by subclasses.

    Args:
        prompt: The prompt to convert. We pass the entire prompt in case
            processing on the tools is needed (e.g. removing certain
            incompatible tools).
        config: The configuration for the AI model.

    Returns:
        The converted tools. (Don't be fooled by the [`ToolType`
            ][conatus.models.base.ToolType] -- the tools can be of any
            type.)
    """
    raise NotImplementedError

convert_system_message_to_ai_model_format

convert_system_message_to_ai_model_format(
    system_message: SystemAIMessage, config: ModelConfig
) -> MessageType

Convert the system message to the AI model format.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
system_message

The system message to convert.

TYPE: SystemAIMessage

config

The configuration for the AI model.

TYPE: ModelConfig

RETURNS DESCRIPTION
MessageType

The converted system message.

Source code in conatus/models/base.py
def convert_system_message_to_ai_model_format(
    self,
    system_message: SystemAIMessage,
    config: ModelConfig,
) -> MessageType:  # pragma: no cover
    """Convert the system message to the AI model format.

    This method is meant to be overridden by subclasses.

    Args:
        system_message: The system message to convert.
        config: The configuration for the AI model.

    Returns:
        The converted system message.
    """
    raise NotImplementedError

convert_output_schema_to_ai_model_format

convert_output_schema_to_ai_model_format(
    prompt: AIPrompt, config: ModelConfig
) -> (
    tuple[OutputJSONSchemaType, bool]
    | tuple[None, Literal[False]]
)

Convert the output schema to the AI model format.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
prompt

The prompt to convert.

TYPE: AIPrompt

config

The configuration for the AI model.

TYPE: ModelConfig

RETURNS DESCRIPTION
tuple[OutputJSONSchemaType, bool] | tuple[None, Literal[False]]

If the AI model requires an output schema, return a tuple with the output schema and a boolean indicating whether the schema has been converted from a non-object type to an object type. This is sometimes necessary because the AI model requires an object type and the output schema is a string or a list. In this case, the boolean should be True, because it will indicate further modifications. If there's no output schema, return None and False.

Source code in conatus/models/base.py
def convert_output_schema_to_ai_model_format(
    self, prompt: AIPrompt, config: ModelConfig
) -> (
    tuple[OutputJSONSchemaType, bool] | tuple[None, Literal[False]]
):  # pragma: no cover
    """Convert the output schema to the AI model format.

    This method is meant to be overridden by subclasses.

    Args:
        prompt: The prompt to convert.
        config: The configuration for the AI model.

    Returns:
        If the AI model requires an output schema, return a tuple with the
            output schema and a boolean indicating whether the schema has
            been converted from a non-object type to an object type. This is
            sometimes necessary because the AI model requires an object type
            and the output schema is a string or a list. In this case, the
            boolean should be [`True`][True], because it will indicate
            further modifications. If there's no output schema, return
            [`None`][None] and [`False`][False].

    """
    raise NotImplementedError

simple_call

simple_call(
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False
) -> str

Simple call to the AI model.

This is a convenience method for the call method.

from conatus.models import OpenAIModel

model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: str

model_config

The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

stream

Whether to stream the response. If True, the response will be streamed to the user. If False, the response will be returned as a string.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
str

The response from the AI model.

Source code in conatus/models/base.py
def simple_call(
    self,
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False,
) -> str:
    """Simple call to the AI model.

    This is a convenience method for the `call` method.

    ```python
    from conatus.models import OpenAIModel

    model = OpenAIModel()
    q = "Which US state has never recorded temperatures below 0°F?"
    response = model.simple_call(q)
    # > That would be Hawaii.
    ```

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model. Passing a
            dictionary is recommended, so that users don't unintentionally
            re-establish default values.
        stream: Whether to stream the response. If `True`, the response
            will be streamed to the user. If `False`, the response will
            be returned as a string.

    Returns:
        The response from the AI model.
    """
    ai_response = (
        self.call(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
        if not stream
        else self.call_stream(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
    )
    return ai_response.all_text or "<empty response>"

Type utilities

These are type utilities used in BaseAIModel. Understanding them is useful when implementing a subclass.

conatus.models.base.ClientType module-attribute

ClientType: TypeAlias = ParamType

Alias for the type of the client.

This is a convenience type alias to make the code more readable.

conatus.models.base.MessageType module-attribute

MessageType: TypeAlias = object

Alias for the type of messages expected by the model.

For example, OpenAI's chat.completions.create function expects an iterable of this type for its messages argument.

This can be anything, and is a convenience type alias to make the code more readable.

conatus.models.base.ToolType module-attribute

ToolType: TypeAlias = object

Alias for the type of the tools expected by the model.

For example, OpenAI's chat.completions.create function expects an iterable of this type for its tools argument.

This can be anything, and is a convenience type alias to make the code more readable.