Anthropic

conatus.models.anthropic.anthropic

Anthropic models.

DEFAULT_ANTHROPIC_MODEL_NAME module-attribute

DEFAULT_ANTHROPIC_MODEL_NAME = 'claude-3-7-sonnet-latest'

The default name of the Anthropic model to use.

DEFAULT_ANTHROPIC_MAX_TOKENS module-attribute

DEFAULT_ANTHROPIC_MAX_TOKENS = 1024

The default maximum number of tokens to generate.

AnthropicModelSpec

Bases: TypedDict

The specification for an Anthropic model.

betas instance-attribute

betas: list[AnthropicBetaParam]

Beta features to enable.

system instance-attribute

system: str | NotGiven

The system message to use.

tool_choice instance-attribute

tool_choice: BetaToolChoiceParam | NotGiven

The tool choice to use.
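
Because AnthropicModelSpec is a TypedDict, a spec is just a plain dictionary. A minimal sketch (the NOT_GIVEN import comes from the anthropic SDK; the beta flag value is illustrative, not taken from this page):

from anthropic import NOT_GIVEN
from conatus.models.anthropic.anthropic import AnthropicModelSpec

spec: AnthropicModelSpec = {
    "betas": ["computer-use-2025-01-24"],  # illustrative beta flag
    "system": "You are a helpful assistant.",
    "tool_choice": NOT_GIVEN,
}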

AnthropicModelConfig dataclass

AnthropicModelConfig(
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    api_key: OptionalArg[str] = CTUS_NOT_GIVEN,
    model_name: str = DEFAULT_ANTHROPIC_MODEL_NAME,
    max_tokens: int = DEFAULT_ANTHROPIC_MAX_TOKENS,
    stdout_mode: OptionalArg[
        Literal["normal", "preview", "silent"]
    ] = CTUS_NOT_GIVEN,
    temperature: OptionalArg[float] = CTUS_NOT_GIVEN,
    computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN,
    use_mock: OptionalArg[bool] = CTUS_NOT_GIVEN,
    only_pass_new_messages: OptionalArg[
        bool
    ] = CTUS_NOT_GIVEN,
    previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN,
    truncation: OptionalArg[
        Literal["auto", "disabled"]
    ] = CTUS_NOT_GIVEN,
)

Bases: ModelConfig

The configuration for an Anthropic model, with defaults.

max_tokens class-attribute instance-attribute

The maximum number of tokens to generate.

model_name class-attribute instance-attribute

The name of the model to use.

not_given_sentinel class-attribute instance-attribute

not_given_sentinel: object = CTUS_NOT_GIVEN

The sentinel object to use for missing arguments.

This is used to represent a missing argument. If we encounter this sentinel object, we will not include it in the returned dictionary.

api_key class-attribute instance-attribute

The API key to use, if any.

If not provided, the API key will be taken from the environment variable specified in the api_key_env_variable attribute of the model.

stdout_mode class-attribute instance-attribute

stdout_mode: OptionalArg[
    Literal["normal", "preview", "silent"]
] = CTUS_NOT_GIVEN

The mode to use for the standard output.

  • 'normal': Notify the user that we're waiting for a response, and then that we're receiving the response, displaying the number of chunks received so far.
  • 'preview': Preview the response with a fancy output that updates as the response chunks are received. Only works if the response is a stream. If preview is set and the response is not a stream, it will default to 'normal'.
  • 'silent': Do not print anything to the standard output.

Note that if we detect that we are running in a non-TTY environment, we will use a special mode called 'non_tty', unless the user asked for 'silent'.

temperature class-attribute instance-attribute

The temperature for the model.

computer_use_mode class-attribute instance-attribute

computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN

Whether to use the computer use mode.

If set to True, the model will be configured to use the computer use mode.

use_mock class-attribute instance-attribute

Whether to use a mock client or not.

This is useful for testing purposes.

only_pass_new_messages class-attribute instance-attribute

only_pass_new_messages: OptionalArg[bool] = CTUS_NOT_GIVEN

Whether to only pass new messages to the model.

If set to True, only new messages will be passed to the model, rather than the entire history. This is useful for "stateful" APIs, where the history is not needed.

previous_messages_id class-attribute instance-attribute

previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN

The ID of the previous messages.

This is useful for "stateful" APIs, where the history is not needed. This should only be used if only_pass_new_messages is True.

truncation class-attribute instance-attribute

truncation: OptionalArg[Literal["auto", "disabled"]] = (
    CTUS_NOT_GIVEN
)

The truncation strategy to use.
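
Taken together, a configuration might be constructed like this (a minimal sketch; the values are illustrative):

from conatus.models.anthropic import AnthropicModelConfig

config = AnthropicModelConfig(
    model_name="claude-3-7-sonnet-latest",
    max_tokens=2048,
    stdout_mode="preview",
    temperature=0.2,
)

# Omitted options stay as CTUS_NOT_GIVEN and are dropped by to_kwargs().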

to_kwargs

to_kwargs(
    specification: None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD
to_kwargs(
    specification: type[TDSpec],
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> TDSpec
to_kwargs(
    specification: type[TDSpec] | None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec

Return the configuration as a dictionary.

You can provide a specification, which is a dictionary that matches the arguments expected by the provider. If a specification is provided, the method will return a dictionary that matches the specification (i.e. with only the keys that are expected by the provider).

You can also provide a not_given_sentinel, which is an object that is used to represent a missing argument. If we encounter this sentinel object, we will not include it in the returned dictionary.

Example
Using a specification
from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
from openai import NOT_GIVEN

model = OpenAIModel()

args_to_pass = model.config.to_kwargs(
    specification=OpenAIModelCCSpec,
    not_given_sentinel=NOT_GIVEN,
)

assert args_to_pass == {'max_tokens': 4096}

# And now you can do something like:
# response = self.client.chat.completions.create(
#         messages=messages,
#         **args_to_pass
#  )
Using an argument mapping
from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
from openai import NOT_GIVEN

model = OpenAIModel()

args_to_pass = model.config.to_kwargs(
    specification=OpenAIModelResponseSpec,
    argument_mapping={"max_tokens": "max_output_tokens"},
    not_given_sentinel=NOT_GIVEN,
)

assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
PARAMETER DESCRIPTION
specification

The specification to use. This should be a TypedDict; if it's not, the method will throw a TypeError. If no specification is provided, the method will return all the keys of the configuration.

TYPE: type[TDSpec] | None DEFAULT: None

not_given_sentinel

The sentinel object to use.

TYPE: object DEFAULT: CTUS_NOT_GIVEN

argument_mapping

A dictionary that maps the keys of the configuration to the keys of the provider. The mapping is of the form {original_key: new_key, ...}.

TYPE: dict[str, str] | None DEFAULT: None

RETURNS DESCRIPTION
ModelConfigTD | TDSpec

The configuration as a dictionary.

RAISES DESCRIPTION
TypeError

If the specification is not a TypedDict.

Source code in conatus/models/config.py
def to_kwargs(
    self,
    specification: type[TDSpec] | None = None,
    not_given_sentinel: object = CTUS_NOT_GIVEN,
    argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec:
    """Return the configuration as a dictionary.

    You can provide a specification, which is a dictionary that
    matches the arguments expected by the provider. If a specification
    is provided, the method will return a dictionary that matches the
    specification (i.e. with only the keys that are expected by the
    provider).

    You can also provide a `not_given_sentinel`, which is an object
    that is used to represent a missing argument. If we encounter this
    sentinel object, we will not include it in the returned dictionary.

    # Example

    ## Using a specification

    ```python
    from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
    from openai import NOT_GIVEN

    model = OpenAIModel()

    args_to_pass = model.config.to_kwargs(
        specification=OpenAIModelCCSpec,
        not_given_sentinel=NOT_GIVEN,
    )

    assert args_to_pass == {'max_tokens': 4096}

    # And now you can do something like:
    # response = self.client.chat.completions.create(
    #         messages=messages,
    #         **args_to_pass
    #  )
    ```

    ## Using an argument mapping

    ```python
    from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
    from openai import NOT_GIVEN

    model = OpenAIModel()

    args_to_pass = model.config.to_kwargs(
        specification=OpenAIModelResponseSpec,
        argument_mapping={"max_tokens": "max_output_tokens"},
        not_given_sentinel=NOT_GIVEN,
    )

    assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
    ```

    Args:
        specification: The specification to use. This should be a
            [`TypedDict`][typing.TypedDict]; if it's not, the method will
            throw a `TypeError`. If no specification is provided, the
            method will return all the keys of the configuration.
        not_given_sentinel: The sentinel object to use.
        argument_mapping: A dictionary that maps the keys of the
            configuration to the keys of the provider. The mapping is
            of the form `{original_key: new_key, ...}`.

    Returns:
        The configuration as a dictionary.

    Raises:
        TypeError: If the specification is not a [`TypedDict`
            ][typing.TypedDict].
    """
    keys: list[str]
    not_given_sentinels = {
        CTUS_NOT_GIVEN,
        self.not_given_sentinel,
        not_given_sentinel,
    }
    # If a specification exists, we assume it's a TypedDict,
    # and extract the optional and required keys.
    if specification is not None:
        optional_keys = getattr(specification, "__optional_keys__", None)
        required_keys = getattr(specification, "__required_keys__", None)
        if optional_keys is None or required_keys is None:
            msg = "The specification must be a TypedDict."
            raise TypeError(msg)
        keys = [
            *(cast("frozenset[str]", required_keys)),
            *(cast("frozenset[str]", optional_keys)),
        ]
    # Otherwise, we just return all the keys.
    else:
        keys = list[str](self.__dict__.keys())

    config_as_dict = dict(self.__dict__.items())
    if argument_mapping is not None:
        for k, v in argument_mapping.items():
            if k in config_as_dict:
                config_as_dict[v] = config_as_dict[k]
                del config_as_dict[k]

    return cast(
        "ModelConfigTD | TDSpec",
        {
            k: v
            for k, v in config_as_dict.items()  # pyright: ignore[reportAny]
            if v not in not_given_sentinels and k in keys
        },
    )

from_dict classmethod

from_dict(config: ModelConfigTD) -> Self

Create a new instance from a dictionary.
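
For example (a sketch; the dictionary keys mirror the dataclass fields):

config = AnthropicModelConfig.from_dict(
    {"model_name": "claude-3-7-sonnet-latest", "max_tokens": 2048}
)

assert config.max_tokens == 2048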

PARAMETER DESCRIPTION
config

The configuration as a dictionary.

TYPE: ModelConfigTD

RETURNS DESCRIPTION
Self

The new instance.

Source code in conatus/models/config.py
@classmethod
def from_dict(cls, config: ModelConfigTD) -> Self:
    """Create a new instance from a dictionary.

    Args:
        config: The configuration as a dictionary.

    Returns:
        The new instance.
    """
    return cls(**config)

from_dict_instance_or_none classmethod

from_dict_instance_or_none(
    config: Self | ModelConfigTD | None,
) -> Self

Create a new instance from a dictionary or an instance.

PARAMETER DESCRIPTION
config

The configuration as a dictionary or an instance.

TYPE: Self | ModelConfigTD | None

RETURNS DESCRIPTION
Self

The new instance.

Source code in conatus/models/config.py
@classmethod
def from_dict_instance_or_none(
    cls, config: Self | ModelConfigTD | None
) -> Self:
    """Create a new instance from a dictionary or an instance.

    Args:
        config: The configuration as a dictionary or an instance.

    Returns:
        The new instance.
    """
    if config is None:
        return cls()
    if isinstance(config, Mapping):
        return cls.from_dict(config)
    return config

apply_config

apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: Literal[True]
) -> None
apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: Literal[False] = False
) -> Self
apply_config(
    new_config: Self | ModelConfigTD | None,
    *,
    inplace: bool = False
) -> Self | None

Copy the configuration and apply new values to it.

This ensures that you can create a hierarchy of configurations.
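
For instance, a base configuration can be extended without being mutated (a minimal sketch):

base = AnthropicModelConfig(max_tokens=1024)
merged = base.apply_config({"temperature": 0.2})

assert merged.max_tokens == 1024
assert merged.temperature == 0.2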

PARAMETER DESCRIPTION
new_config

The new configuration.

TYPE: Self | ModelConfigTD | None

inplace

Whether to update the instance in place, or return a new copy

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Self | None

None if the modification happens in place; otherwise, return a new instance with the modified configuration

Source code in conatus/models/config.py
def apply_config(
    self, new_config: Self | ModelConfigTD | None, *, inplace: bool = False
) -> Self | None:
    """Copy the configuration and apply new values to it.

    This ensures that you can create a hierarchy of configurations.

    Args:
        new_config: The new configuration.
        inplace: Whether to update the instance in place, or return a new
            copy

    Returns:
        None if the modification happens in place; otherwise, return a new
            instance with the modified configuration
    """
    if new_config is None:
        return self
    if isinstance(new_config, Mapping):
        new_config_as_dict = new_config
    else:
        new_config_as_dict = new_config.to_kwargs()
    if inplace:
        self.__dict__.update(new_config_as_dict)
        return None
    new_config_as_dict = self.to_kwargs() | new_config_as_dict
    return type(self).from_dict(new_config_as_dict)

AnthropicModelCallArgs dataclass

AnthropicModelCallArgs(
    model_config: AnthropicModelConfig,
    system_message: str | None,
    messages: Iterable[BetaMessageParam],
    tools: Iterable[BetaToolUnionParam] | None,
    output_schema: BetaToolParam | None,
    output_schema_was_converted_to_item_object: bool = False,
)

Bases: AIModelCallArgs

The arguments for the call to the Anthropic API.

model_config instance-attribute

model_config: AnthropicModelConfig

The configuration for the AI model.

system_message instance-attribute

system_message: str | None

The system message for the AI model.

messages instance-attribute

messages: Iterable[BetaMessageParam]

The messages for the AI model.

tools instance-attribute

tools: Iterable[BetaToolUnionParam] | None

The tools for the AI model.

output_schema instance-attribute

output_schema: BetaToolParam | None

The output schema for the AI model.

output_schema_was_converted_to_item_object class-attribute instance-attribute

output_schema_was_converted_to_item_object: bool = False

Whether the output schema was converted to an item object.

AnthropicAIModel dataclass

AnthropicAIModel(
    model_config: (
        AnthropicModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    client: AsyncAnthropic | None = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None
)

Bases: BaseAIModel

Anthropic model.
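
A typical instantiation, for reference (a minimal sketch; assumes the ANTHROPIC_API_KEY environment variable is set):

from conatus.models.anthropic import AnthropicAIModel

model = AnthropicAIModel(
    model_config={"max_tokens": 2048},
    model_type="chat",
)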

PARAMETER DESCRIPTION
model_config

The configuration for the Anthropic model. This can be an AnthropicModelConfig object, a ModelConfig object, or a dictionary.

TYPE: AnthropicModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

client

The client to use. If not provided, a new client will be created.

TYPE: AsyncAnthropic | None DEFAULT: None

get_default_client_if_not_given

Whether to get the default client if not provided.

TYPE: bool DEFAULT: True

api_key

The API key to use. If not provided, it will be read from the ANTHROPIC_API_KEY environment variable. If client is provided, api_key will be ignored.

TYPE: str | None DEFAULT: None

model_name

The name of the model to use. If not provided, it will be set to the default model name.

TYPE: str | None DEFAULT: None

model_type

The type of model to use. This is used to determine the model name only if model_name is not provided. If provided, overrides any model type in the config.

TYPE: ModelType | None DEFAULT: None

Source code in conatus/models/anthropic/anthropic.py
def __init__(
    self,
    model_config: AnthropicModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    client: AsyncAnthropic | None = None,
    *,
    get_default_client_if_not_given: bool = True,
    api_key: str | None = None,
    model_name: str | None = None,
    model_type: ModelType | None = None,
) -> None:
    """Initialize the Anthropic model.

    Args:
        model_config: The configuration for the Anthropic model. This can
            be an [`AnthropicModelConfig`
            ][conatus.models.anthropic.AnthropicModelConfig] object, a
            [`ModelConfig`][conatus.models.base.ModelConfig] object, or a
            dictionary.
        client: The client to use. If not provided, a new client will be
            created.
        get_default_client_if_not_given: Whether to get the default client
            if not provided.
        api_key: The API key to use. If not provided, it will be read from
            the `ANTHROPIC_API_KEY` environment variable. If `client` is
            provided, `api_key` will be ignored.
        model_name: The name of the model to use. If not provided, it will
            be set to the default model name.
        model_type: The type of model to use. This is used to determine the
            model name only if `model_name` is not provided. If provided,
            overrides any model type in the config.
    """
    super().__init__(
        model_config=model_config,
        client=client,
        api_key=api_key,
        model_name=model_name,
        model_type=model_type,
        get_default_client_if_not_given=get_default_client_if_not_given,
    )
    logger.info(
        "Initializing Anthropic model: %s", self.model_config.model_name
    )

model_config instance-attribute

model_config: AnthropicModelConfig

The configuration for the Anthropic model.

client instance-attribute

client: AsyncAnthropic

The Anthropic client.

provider class-attribute instance-attribute

provider: ProviderName = 'anthropic'

The provider name.

api_key_env_variable class-attribute instance-attribute

api_key_env_variable: str = 'ANTHROPIC_API_KEY'

The environment variable that contains the API key.

model_config_cls instance-attribute

model_config_cls: type[ModelConfig]

The class of the model configuration.

config property

config: ModelConfig

The configuration for the model.

This is a convenience property for the model_config attribute.

default_client

default_client(
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType
) -> AsyncAnthropic

Return the default client for the Anthropic model.

PARAMETER DESCRIPTION
model_config

The configuration for the Anthropic model.

TYPE: ModelConfig

api_key

The API key for the Anthropic model.

TYPE: str | None

**kwargs

Additional arguments to pass to the Anthropic client.

TYPE: ParamType DEFAULT: {}

Source code in conatus/models/anthropic/anthropic.py
@override
def default_client(
    self,
    model_config: ModelConfig,
    api_key: str | None,
    **kwargs: ParamType,
) -> AsyncAnthropic:
    """Return the default client for the Anthropic model.

    Args:
        model_config: The configuration for the Anthropic model.
        api_key: The API key for the Anthropic model.
        **kwargs: Additional arguments to pass to the Anthropic client.
    """
    api_key = api_key or model_config.api_key or self.get_api_key()
    return AsyncAnthropic(api_key=api_key)

default_config

default_config() -> AnthropicModelConfig

Return the default configuration for the model.

Source code in conatus/models/anthropic/anthropic.py
@override
def default_config(self) -> AnthropicModelConfig:
    """Return the default configuration for the model."""
    return AnthropicModelConfig()

default_model_name classmethod

default_model_name(
    model_type: ModelType | None,
) -> ModelName | None

Get the default model name for the Anthropic model.

PARAMETER DESCRIPTION
model_type

The type of model to use.

TYPE: ModelType | None

RETURNS DESCRIPTION
ModelName | None

The default model name for the Anthropic model.

Source code in conatus/models/anthropic/anthropic.py
@classmethod
@override
def default_model_name(
    cls, model_type: ModelType | None
) -> ModelName | None:
    """Get the default model name for the Anthropic model.

    Args:
        model_type: The type of model to use.

    Returns:
        The default model name for the Anthropic model.
    """
    if model_type is None:
        return None
    match model_type:
        case "chat":
            return "anthropic:claude-3-7-sonnet-latest"
        case "execution":
            return "anthropic:claude-3-7-sonnet-latest"
        case "computer_use":
            return "anthropic:claude-3-7-sonnet-20250219"
        case "reasoning":
            return "anthropic:claude-3-7-sonnet-latest"

__del__

__del__() -> None

Delete the model.

Source code in conatus/models/anthropic/anthropic.py
def __del__(self) -> None:
    """Delete the model."""
    if getattr(self, "client", None):
        asyncio.run(self.client.close())

call

call(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        AnthropicModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the Anthropic model using the standardized prompt and response.

from conatus import AIPrompt
from conatus.models.anthropic import AnthropicAIModel

model = AnthropicAIModel()
prompt = AIPrompt("Hello, how are you?")
response = model.call(prompt)
PARAMETER DESCRIPTION
prompt

The prompt to send to the Anthropic model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the Anthropic model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: AnthropicModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the Anthropic model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the Anthropic model.

Source code in conatus/models/anthropic/anthropic.py
@override
def call(  # type: ignore[override]
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: AnthropicModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the Anthropic model using the standardized prompt and response.

    ```python
    from conatus import AIPrompt
    from conatus.models.anthropic import AnthropicAIModel

    model = AnthropicAIModel()
    prompt = AIPrompt("Hello, how are you?")
    response = model.call(prompt)
    ```

    Args:
        prompt: The prompt to send to the Anthropic model.
        model_config: The configuration to use for the Anthropic model.
            Passing a dictionary is recommended, so that users don't
            unintentionally re-establish default values.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback: A callback for debugging purposes. This
            callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string.
        **kwargs: Additional arguments to pass to the Anthropic model.

    Returns:
        The response from the Anthropic model.
    """
    return run_async(
        self.acall(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback=response_log_callback,
            **kwargs,
        )
    )

acall async

acall(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        AnthropicModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback: (
        Callable[[str], None] | None
    ) = None,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the Anthropic model using the standardized prompt and response.

For its sync counterpart, see call.
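
Usage mirrors call, awaited inside an event loop (a minimal sketch):

import asyncio

from conatus import AIPrompt
from conatus.models.anthropic import AnthropicAIModel

async def main() -> None:
    model = AnthropicAIModel()
    prompt = AIPrompt("Hello, how are you?")
    response = await model.acall(prompt)

asyncio.run(main())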

PARAMETER DESCRIPTION
prompt

The prompt to send to the Anthropic model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the Anthropic model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: AnthropicModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the Anthropic model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the Anthropic model.

Source code in conatus/models/anthropic/anthropic.py
@override
async def acall(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: AnthropicModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback: Callable[[str], None] | None = None,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the Anthropic model using the standardized prompt and response.

    For its sync counterpart, see [`call`
    ][conatus.models.anthropic.AnthropicAIModel.call].

    Args:
        prompt: The prompt to send to the Anthropic model.
        model_config: The configuration to use for the Anthropic model.
            Passing a dictionary is recommended, so that users don't
            unintentionally re-establish default values.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback: A callback for debugging purposes. This
            callback will be called with the response information (
            e.g. the response, the model name, the usage, etc.) as a JSON
            string.
        **kwargs: Additional arguments to pass to the Anthropic model.

    Returns:
        The response from the Anthropic model.
    """
    args, printing_mixin, config_kwargs = self._prepare_anthropic(
        prompt, model_config, printing_mixin_cls, prompt_log_callback
    )
    response = await self.client.beta.messages.create(
        model=args.model_config.model_name,
        messages=args.messages,
        stream=False,
        tools=args.tools or ANTHROPIC_NOT_GIVEN,
        max_tokens=args.model_config.max_tokens,
        **config_kwargs,
    )
    if response_log_callback is not None:
        response_log_callback(response.model_dump_json())
    ai_response = AnthropicConverters.anthropic_response_to_ai_response(
        response, prompt
    )
    printing_mixin.clean_after_receiving()
    return ai_response.complete(
        output_schema_was_converted_to_item_object=args.output_schema_was_converted_to_item_object
    )

call_stream

call_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        AnthropicModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the Anthropic model using the standardized prompt and response.

For its async counterpart, see acall_stream.
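
For example, streaming with a live terminal preview (a sketch; stdout_mode is the configuration option documented above):

from conatus import AIPrompt
from conatus.models.anthropic import AnthropicAIModel

model = AnthropicAIModel(model_config={"stdout_mode": "preview"})
prompt = AIPrompt("Write a haiku about the sea.")
response = model.call_stream(prompt)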

PARAMETER DESCRIPTION
prompt

The prompt to send to the Anthropic model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the Anthropic model.

TYPE: AnthropicModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once for each chunk of the response.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the Anthropic model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the Anthropic model.

Source code in conatus/models/anthropic/anthropic.py
@override
def call_stream(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: AnthropicModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the Anthropic model using the standardized prompt and response.

    For its async counterpart, see [`acall_stream`
    ][conatus.models.anthropic.AnthropicAIModel.acall_stream].

    Args:
        prompt: The prompt to send to the Anthropic model.
        model_config: The configuration to use for the Anthropic model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information
            (e.g. the response, the model name, the usage, etc.) as a
            JSON string. Note that this callback is called once for
            each chunk of the response.
        **kwargs: Additional arguments to pass to the Anthropic model.

    Returns:
        The response from the Anthropic model.
    """
    return run_async(
        self.acall_stream(
            prompt,
            model_config,
            printing_mixin_cls=printing_mixin_cls,
            prompt_log_callback=prompt_log_callback,
            response_log_callback_stream=response_log_callback_stream,
            **kwargs,
        )
    )

acall_stream async

acall_stream(
    prompt: AIPrompt[OutputSchemaType],
    model_config: (
        AnthropicModelConfig
        | ModelConfig
        | ModelConfigTD
        | None
    ) = None,
    *,
    printing_mixin_cls: type[
        AIModelPrintingMixin
    ] = AIModelPrintingMixin,
    prompt_log_callback: (
        Callable[[str], None] | None
    ) = None,
    response_log_callback_stream: (
        Callable[[str], None] | None
    ) = None,
    **kwargs: ParamType
) -> AIResponse[OutputSchemaType]

Call the Anthropic model using the standardized prompt and response.

For its sync counterpart, see call_stream.

PARAMETER DESCRIPTION
prompt

The prompt to send to the Anthropic model.

TYPE: AIPrompt[OutputSchemaType]

model_config

The configuration to use for the Anthropic model.

TYPE: AnthropicModelConfig | ModelConfig | ModelConfigTD | None DEFAULT: None

printing_mixin_cls

The class to use for printing.

TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin

prompt_log_callback

A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string.

TYPE: Callable[[str], None] | None DEFAULT: None

response_log_callback_stream

A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once for each chunk of the response.

TYPE: Callable[[str], None] | None DEFAULT: None

**kwargs

Additional arguments to pass to the Anthropic model.

TYPE: ParamType DEFAULT: {}

RETURNS DESCRIPTION
AIResponse[OutputSchemaType]

The response from the Anthropic model.

Source code in conatus/models/anthropic/anthropic.py
@override
async def acall_stream(
    self,
    prompt: AIPrompt[OutputSchemaType],
    model_config: AnthropicModelConfig
    | ModelConfig
    | ModelConfigTD
    | None = None,
    *,
    printing_mixin_cls: type[AIModelPrintingMixin] = AIModelPrintingMixin,
    prompt_log_callback: Callable[[str], None] | None = None,
    response_log_callback_stream: Callable[[str], None] | None = None,
    **kwargs: ParamType,
) -> AIResponse[OutputSchemaType]:
    """Call the Anthropic model using the standardized prompt and response.

    For its sync counterpart, see [`call_stream`
    ][conatus.models.anthropic.AnthropicAIModel.call_stream].

    Args:
        prompt: The prompt to send to the Anthropic model.
        model_config: The configuration to use for the Anthropic model.
        printing_mixin_cls: The class to use for printing.
        prompt_log_callback: A callback for debugging purposes. This
            callback will be called with the prompt information (
            e.g. the messages, the model name, the tools, etc.) as a JSON
            string.
        response_log_callback_stream: A callback for debugging purposes.
            This callback will be called with the response information
            (e.g. the response, the model name, the usage, etc.) as a
            JSON string. Note that this callback is called once for
            each chunk of the response.
        **kwargs: Additional arguments to pass to the Anthropic model.

    Returns:
        The response from the Anthropic model.
    """
    args, printing_mixin, config_kwargs = self._prepare_anthropic(
        prompt, model_config, printing_mixin_cls, prompt_log_callback
    )
    response_stream: AsyncStream[
        BetaRawMessageStreamEvent
    ] = await self.client.beta.messages.create(
        model=args.model_config.model_name,
        messages=args.messages,
        stream=True,
        max_tokens=args.model_config.max_tokens,
        tools=args.tools or ANTHROPIC_NOT_GIVEN,
        **config_kwargs,
    )
    ai_response = IncompleteAIResponse(prompt=prompt)
    async for chunk in response_stream:
        if response_log_callback_stream is not None:
            response_log_callback_stream(chunk.model_dump_json())
        ai_response += (
            AnthropicConverters.anthropic_chunk_to_incomplete_ai_response(
                chunk, prompt, args.model_config.model_name
            )
        )
        printing_mixin.write_preview_response(ai_response)
    printing_mixin.clean_after_receiving()
    return ai_response.complete(
        output_schema_was_converted_to_item_object=args.output_schema_was_converted_to_item_object
    )

convert_system_message_to_ai_model_format

convert_system_message_to_ai_model_format(
    system_message: SystemAIMessage, config: ModelConfig
) -> str

Convert the system message to the AI model format.

This method is meant to be overridden by subclasses.

PARAMETER DESCRIPTION
system_message

The system message to convert.

TYPE: SystemAIMessage

config

The configuration for the AI model.

TYPE: ModelConfig

RETURNS DESCRIPTION
str

The converted system message.

Source code in conatus/models/anthropic/anthropic.py
@override
def convert_system_message_to_ai_model_format(
    self,
    system_message: SystemAIMessage,
    config: ModelConfig,
) -> str:
    """Convert the system message to the AI model format.

    This method is meant to be overridden by subclasses.

    Args:
        system_message: The system message to convert.
        config: The configuration for the AI model.

    Returns:
        The converted system message.
    """
    return system_message.content

with_config

with_config(
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False
) -> Self

Return a new instance of the model with the given configuration.

This is useful for quickly creating a new model without having to instantiate a new client.

from conatus.models import OpenAIModel
from conatus.models.config import ModelConfig

model = OpenAIModel()

model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

# Note that this also works if you pass a dictionary.
model_with_config = model.with_config({"model_name": "gpt-4o"})

assert model_with_config.config.model_name == "gpt-4o"
assert model_with_config.client == model.client
PARAMETER DESCRIPTION
model_config

The configuration for the new model.

TYPE: ModelConfig | ModelConfigTD | None

ignore_current_config

Whether to ignore the current configuration. If True, the new configuration will replace the current configuration. If False, the new configuration will be merged with the current configuration.

TYPE: bool DEFAULT: False

inplace

Whether to modify the current instance in place. If True, the current instance will be modified in place. If False, a new instance will be returned.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Self

A new instance of the model with the given configuration.

Source code in conatus/models/base.py
def with_config(
    self,
    model_config: ModelConfig | ModelConfigTD | None,
    *,
    ignore_current_config: bool = False,
    inplace: bool = False,
) -> Self:
    """Return a new instance of the model with the given configuration.

    This is useful for quickly creating a new model without having to
    instantiate a new client.

    ```python
    from conatus.models import OpenAIModel
    from conatus.models.config import ModelConfig

    model = OpenAIModel()

    model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))

    # Note that this also works if you pass a dictionary.
    model_with_config = model.with_config({"model_name": "gpt-4o"})

    assert model_with_config.config.model_name == "gpt-4o"
    assert model_with_config.client == model.client
    ```

    Args:
        model_config: The configuration for the new model.
        ignore_current_config: Whether to ignore the current configuration.
            If `True`, the new configuration will replace the current
            configuration. If `False`, the new configuration will be merged
            with the current configuration.
        inplace: Whether to modify the current instance in place. If `True`,
            the current instance will be modified in place. If `False`, a
            new instance will be returned.

    Returns:
        A new instance of the model with the given configuration.
    """
    new_config = (
        self.model_config.apply_config(
            new_config=model_config,
            inplace=False,
        )
        if not ignore_current_config
        else type(self.model_config).from_dict_instance_or_none(
            model_config
        )
    )
    if inplace:
        self.model_config = new_config
        return self
    return type(self)(
        model_config=new_config,
        client=self.client,
    )

get_api_key

get_api_key() -> str

Get the API key for the model.

This function should be implemented to retrieve environment variables.

RETURNS DESCRIPTION
str

The API key.

RAISES DESCRIPTION
AIModelAPIKeyMissingError

If the API key is not found in the environment variables.

ValueError

If the API key is not set in the class attribute.

Source code in conatus/models/base.py
def get_api_key(self) -> str:
    """Get the API key for the model.

    This function should be implemented to retrieve environment variables.

    Returns:
        The API key.

    Raises:
        AIModelAPIKeyMissingError: If the API key is not found in the
            environment variables.
        ValueError: If the API key is not set in the class attribute.
    """
    do_load_dotenv = (
        os.environ.get("TEST_DO_NOT_LOAD_DOTENV", "false").lower() != "true"
    )
    if do_load_dotenv and (
        "PYTEST_CURRENT_TEST" not in os.environ
    ):  # pragma: no branch
        _ = load_dotenv()  # pragma: no cover
    if getattr(self, "api_key_env_variable", None) is None:
        msg = (
            "You need to set the `api_key_env_variable` class attribute "
            "in the subclass.\n"
        )
        raise ValueError(msg)
    if self.api_key_env_variable not in os.environ:
        msg = (
            f"You need to set the {self.api_key_env_variable} "
            "environment variable.\n"
        )
        raise AIModelAPIKeyMissingError(msg)

    return os.environ[self.api_key_env_variable]

respawn_client

respawn_client() -> None

Respawn the client.

This method is used to respawn the client. It is mostly used so that we can refresh the client, which might be associated with an incompatible event loop.

Source code in conatus/models/base.py
def respawn_client(self) -> None:
    """Respawn the client.

    This method is used to respawn the client. It is mostly used so that
    we can refresh the client, which might be associated with an
    incompatible event loop.
    """
    with contextlib.suppress(RuntimeError):
        del self.client
    # We only cover this part in testing
    if (
        os.environ.get("ALWAYS_USE_MOCK", "false").lower() == "true"
        or self.model_config.use_mock
    ):  # pragma: no branch
        self.model_config.use_mock = True
        logger.info("Using mock client for %s", self.__class__.__name__)
        self.client = None
        return
    self.client = self.default_client(  # pragma: no cover
        model_config=self.model_config,
        api_key=self.model_config.api_key or self.get_api_key(),
    )

simple_call

simple_call(
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False
) -> str

Simple call to the AI model.

This is a convenience method for the call method.

from conatus.models import OpenAIModel

model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
PARAMETER DESCRIPTION
prompt

The prompt to send to the AI model.

TYPE: str

model_config

The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values.

TYPE: ModelConfig | ModelConfigTD | None DEFAULT: None

stream

Whether to stream the response. If True, the response will be streamed to the user. If False, the response will be returned as a string.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
str

The response from the AI model.

Source code in conatus/models/base.py
def simple_call(
    self,
    prompt: str,
    model_config: ModelConfig | ModelConfigTD | None = None,
    *,
    stream: bool = False,
) -> str:
    """Simple call to the AI model.

    This is a convenience method for the `call` method.

    ```python
    from conatus.models import OpenAIModel

    model = OpenAIModel()
    q = "Which US state has never recorded temperatures below 0°F?"
    response = model.simple_call(q)
    # > That would be Hawaii.
    ```

    Args:
        prompt: The prompt to send to the AI model.
        model_config: The configuration for the AI model. Passing a
            dictionary is recommended, so that users don't unintentionally
            re-establish default values.
        stream: Whether to stream the response. If `True`, the response
            will be streamed to the user. If `False`, the response will
            be returned as a string.

    Returns:
        The response from the AI model.
    """
    ai_response = (
        self.call(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
        if not stream
        else self.call_stream(
            prompt=AIPrompt(prompt),
            model_config=model_config,
        )
    )
    return ai_response.all_text or "<empty response>"