AI inputs and outputs: An overview

Introduction

The specification of our AI model classes relies on two classes defined here: AIPrompt and AIResponse.

You might be looking for...

If you want to customize the AI prompt

If you want to create a custom AI prompt, you can use the AIPrompt constructor as follows:

from conatus import action, AIPrompt

@action
def simple_calculator(a: int, b: int) -> int:
    return a + b

prompt = AIPrompt(
    user="What is the sum of 2222 and 3828?",
    system="Use the tools if necessary",
    actions=[simple_calculator],
)

If you want to integrate specific AI models

New AI models will have to implement mapping functions that convert the AIPrompt to the specific AI model's format, and the specific AI model's response back to an AIResponse.

That work, unfortunately, is somewhat tedious. You can look at the OpenAIModel implementation for an example.

One category of classes you might want to rely on are the 'incomplete' messages. These are messages (mostly counterparts to AssistantAIMessage and its components) that are not yet complete, but can be added to each other seamlessly with the + operator. (See Incomplete messages for more information.)
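
As a rough illustration, accumulating streamed deltas might look like the sketch below. The names IncompleteAssistantAIMessage, stream_of_deltas, and complete() are assumptions made for this sketch, not the actual API; see Incomplete messages for the real classes.

# Hedged sketch: incomplete messages can be merged with `+`, so streamed
# deltas fold into a single in-progress assistant message.
incomplete = IncompleteAssistantAIMessage()  # assumed name
for delta in stream_of_deltas:  # e.g. chunks from a streaming provider
    incomplete = incomplete + delta
final_message = incomplete.complete()  # assumed finalization step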

Visual explanation

This visualization should help you understand the class hierarchy of the messages:

[Diagram: Message class hierarchy]

Main classes

conatus.models.inputs_outputs.prompt.AIPrompt

AIPrompt(
    user: str,
    *,
    system: str | SystemAIMessage | None = None,
    messages: None = None,
    previous_messages: None = None,
    new_messages: None = None,
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: None = None,
    output_schema: type[OutputSchemaType] | None = None
)
AIPrompt(
    user: None = None,
    *,
    system: str | SystemAIMessage | None = None,
    messages: Iterable[ConversationAIMessage],
    previous_messages: None = None,
    new_messages: None = None,
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: None = None,
    output_schema: type[OutputSchemaType] | None = None
)
AIPrompt(
    user: None = None,
    *,
    system: str | SystemAIMessage | None = None,
    messages: None = None,
    previous_messages: Iterable[ConversationAIMessage],
    new_messages: Iterable[ConversationAIMessage],
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: str | None = None,
    output_schema: type[OutputSchemaType] | None = None
)
AIPrompt(
    user: None = None,
    *,
    system: str | SystemAIMessage | None = None,
    messages: None = None,
    previous_messages: None = None,
    new_messages: Iterable[ConversationAIMessage],
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: str,
    output_schema: type[OutputSchemaType] | None = None
)
AIPrompt()
AIPrompt(
    user: str | None = None,
    *,
    system: str | SystemAIMessage | None = None,
    messages: Iterable[ConversationAIMessage] | None = None,
    previous_messages: (
        Iterable[ConversationAIMessage] | None
    ) = None,
    new_messages: (
        Iterable[ConversationAIMessage] | None
    ) = None,
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: str | None = None,
    output_schema: type[OutputSchemaType] | None = None
)

Bases: Generic[OutputSchemaType]

Standardized structure for the AI prompt.

Look at the documentation below for more information.

ATTRIBUTE DESCRIPTION
system_message

The system message for the AI prompt.

TYPE: SystemAIMessage | None

previous_messages

The previous messages for the AI prompt. This is optional, and if not provided, the AI prompt will be considered to be in the initial state. Note that it does not include the system message.

TYPE: Iterable[ConversationAIMessage] | None

previous_messages_id

The ID of the last messages. Some AI providers like to have this ID in order to link to previous responses. If that is provided, BaseAIModel will generally ignore the previous_messages attribute.

TYPE: str | None

new_messages

The new messages for the AI prompt. If previous_messages is None, this is equivalent to the list of messages. Note that it does not include the system message.

TYPE: Iterable[ConversationAIMessage]

tools

The tools specifications to pass to the AI.

TYPE: Iterable[AIToolSpecification] | None

computer_use_config

The configuration for the computer use mode.
If this attribute is None, BaseAIModel should avoid invoking computer tools, even if they are available. If the attribute is not None, the AI model will generally invoke the computer tools.

TYPE: ComputerUseConfig | None

output_schema

The (optional) schema of the payload of the AI prompt. If provided, the AI model will be encouraged to return a payload that matches the schema.

TYPE: type[OutputSchemaType] | None

There are three ways to initialize the AI prompt:

  1. Simple, new conversation: Pass a string to user. This will create a new conversation with a single message.
  2. Conversation with multiple messages: Pass a list of messages to messages. This will create a new conversation with the given messages.
  3. Conversation with history: You can make a distinction between previous and new messages. This is helpful because some AI providers (like OpenAI's Responses API) allow sending only the new messages, as long as you provide a previous messages ID. In this case, you need to pass (1) a list of new_messages as well as (2) a list of previous_messages, previous_messages_id, or both.

In each case, you can optionally pass a system message with system, a list of tool specifications with tools, a list of actions with actions, and an output schema with output_schema.

Using tools and actions

There are two arguments here:

  • tools: This is a list of tool specifications, which is a data structure containing the name of the tool and the JSON schema of its specification.
  • actions: This is a list of actions (but it can be anything from an ActionStarterPack to normal functions). We handle converting these actions to tool specifications for you.

The two arguments are not mutually exclusive. If you pass both, the actions will be converted to tools, and added to the list.

actions, unlike tools, need to be JSON serializable

We assume that users who pass actions to AIPrompt are only passing actions that are JSON serializable. If you want to pass actions with references to variables, you need to manually create the AIToolSpecification objects and pass them to the tools argument.
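
If you do need to construct tool specifications by hand, a minimal sketch looks like this (LookupOrderArgs is our own example model, not part of the library):

from pydantic import BaseModel
from conatus import AIPrompt
from conatus.models.inputs_outputs.prompt import AIToolSpecification

# Example argument schema for a hypothetical `lookup_order` tool.
class LookupOrderArgs(BaseModel):
    order_id: str

lookup_order_tool = AIToolSpecification(
    name="lookup_order",
    strict_mode=True,
    json_schema_pydantic_model=LookupOrderArgs,
)

prompt = AIPrompt(
    user="Where is order ab-123?",
    tools=[lookup_order_tool],
)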

Giving an output schema

You can also pass an output schema with output_schema. This communicates to the AI provider that we want the response to follow that format.

If such an output schema is provided, the BaseAIModel will create an AIResponse object with the output schema type as a generic.

Under the hood, we use a TypeAdapter to convert the output schema to a Pydantic model. Most JSON-serializable types (including TypedDicts, BaseModels, dataclasses, etc.) should work flawlessly.
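
For intuition, here is roughly what that conversion looks like when using Pydantic directly (a sketch; the library's exact internals may differ):

from typing import TypedDict
from pydantic import TypeAdapter

class MenuItem(TypedDict):
    name: str
    price: float

adapter = TypeAdapter(list[MenuItem])
schema = adapter.json_schema()  # the JSON schema communicated to the provider
items = adapter.validate_python(
    [{"name": "Latte", "price": 3.5}]
)  # validate a candidate structured output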

There are limitations to how BaseAIModel classes and AIResponse deal with output schemas.

Examples

Initialize from a simple string

from conatus import AIPrompt
from conatus.models import OpenAIModel

prompt = AIPrompt(
    "What did Kurt Gödel say about the US Constitution?"
)
response = OpenAIModel().call(prompt)

Initialize from a list of messages

from conatus import AIPrompt
from conatus.models import OpenAIModel
from conatus.models.inputs_outputs.messages import (
    SystemAIMessage,
    UserAIMessage,
    AssistantAIMessage,
)

prompt = AIPrompt(
    system=SystemAIMessage(content="You answer very tersely."),
    messages=[
        UserAIMessage(content="quick, give me a number"),
        AssistantAIMessage(content="1234"),
    ]
)
response = OpenAIModel().call(prompt)

Initialize with a list of actions

from conatus import action, AIPrompt
from conatus.models import OpenAIModel

@action
def add_two_numbers(a: int, b: int) -> int:
    return a + b

prompt = AIPrompt(
    user="what is the sum of 2222 and 3828?",
    system="Use the tools if necessary",
    actions=[add_two_numbers],
)

response = OpenAIModel().call(prompt)
print(response.tool_calls)  # (1)!
# {"name": "add_two_numbers", "arguments": {"a": 2222, "b": 3828}}
  1. The printed response will actually look a little different in the real world, but you get the idea.

Initialize with an expected output schema

from typing import TypedDict
from conatus import AIPrompt
from conatus.models import OpenAIModel

class CoffeeShopMenuItem(TypedDict):
    name: str
    price: float

prompt = AIPrompt(
    user="generate a fake coffee shop menu",
    output_schema=list[CoffeeShopMenuItem],
)

response = OpenAIModel().call(prompt)
print(response.structured_output)  # (1)!
# > [{"name": "Latte", "price": 3.50}, ...]

Make a distinction between previous and new messages

from conatus import AIPrompt
from conatus.models import OpenAIModel, AnthropicAIModel
from conatus.models.inputs_outputs.messages import (
    UserAIMessage,
    AssistantAIMessage,
)

prompt = AIPrompt(
    system="Use the tools if necessary",
    previous_messages=[
        UserAIMessage(content="quick, give me a number"),
        AssistantAIMessage(content="1234"),
    ],
    new_messages=[
        UserAIMessage(content="what is the sum of 2222 and 3828?"),
    ],
    previous_messages_id="rs_1234567890",
)

# In this case, because OpenAI's Responses API allows sending only the
# new messages, we can use the previous messages ID to link to the
# previous response.
response = OpenAIModel().call(prompt)

# But in the case of Anthropic, we have to send the previous messages
# directly.
response = AnthropicAIModel().call(prompt)
PARAMETER DESCRIPTION
user

The user message. Note that if you also pass a list of messages, a ValueError will be raised.

TYPE: str | None DEFAULT: None

system

The developer (or system) message. You can pass either a string or a SystemAIMessage.

TYPE: str | SystemAIMessage | None DEFAULT: None

messages

The messages. Note that it does not include the system message, which should be passed separately with the system argument.

TYPE: Iterable[ConversationAIMessage] | None DEFAULT: None

previous_messages

The previous messages, or conversation history.

TYPE: Iterable[ConversationAIMessage] | None DEFAULT: None

new_messages

The new messages.

TYPE: Iterable[ConversationAIMessage] | None DEFAULT: None

tools

The tool specifications.

TYPE: Iterable[AIToolSpecification] | None DEFAULT: None

actions

The actions. If you pass both actions and tools, the actions will be converted to tools and merged with the tools list.

TYPE: Iterable[RawAction] | None DEFAULT: None

computer_use_config

The configuration for the computer use mode.

TYPE: ComputerUseConfig | None DEFAULT: None

previous_messages_id

The ID of the last response.

TYPE: str | None DEFAULT: None

output_schema

The schema of the payload of the AI prompt.

TYPE: type[OutputSchemaType] | None DEFAULT: None

RAISES DESCRIPTION
ValueError

If mutually exclusive arguments are passed together (for instance, both a user prompt and a list of messages), or if new_messages is passed without previous_messages or previous_messages_id.

Source code in conatus/models/inputs_outputs/prompt.py
def __init__(
    self,
    user: str | None = None,
    *,
    system: str | SystemAIMessage | None = None,
    messages: Iterable[ConversationAIMessage] | None = None,
    previous_messages: Iterable[ConversationAIMessage] | None = None,
    new_messages: Iterable[ConversationAIMessage] | None = None,
    tools: Iterable[AIToolSpecification] | None = None,
    actions: Iterable[RawAction] | None = None,
    computer_use_config: ComputerUseConfig | None = None,
    previous_messages_id: str | None = None,
    output_schema: type[OutputSchemaType] | None = None,
) -> None:
    """Initialize the AI prompt.

    There are three ways to initialize the AI prompt:

    1. **Simple, new conversation**: Pass a string to `user`. This will
       create a new conversation with a single message.
    2. **Conversation with multiple messages**: Pass a list of messages to
       `messages`. This will create a new conversation with the given
       messages.
    3. **Conversation with history**: You can make a distinction between
       previous and new messages. This is helpful because some AI providers
       (like OpenAI's Responses API) allow sending only the new messages,
       as long as you provide a previous messages ID. In this case, you
       need to pass (1) a list of `new_messages` as well as (2) a list of
       `previous_messages`, `previous_messages_id`, or both.

    In each case, you can optionally pass a system message with `system`,
    a list of tool specifications with `tools`, a list of actions with
    `actions`, and an output schema with `output_schema`.

    ## Using tools and actions

    There are two arguments here:

    * `tools`: This is a list of tool specifications, which is a data
       structure containing the name of the tool and the JSON schema of its
       specification.
    * `actions`: This is a list of actions (but it can be anything from an
       [`ActionStarterPack`][conatus.actions.starter_packs.ActionStarterPack]
       to normal functions). We handle converting these actions to tool
       specifications for you.

    The two arguments are not mutually exclusive. If you pass both, the
    actions will be converted to tools, and added to the list.

    ??? warning "`actions`, unlike `tools`, need to be JSON serializable"
        We assume that users who pass actions to [`AIPrompt`
        ][conatus.models.inputs_outputs.prompt.AIPrompt] are only passing
        actions that are JSON serializable. If you want to pass actions with
        references to variables, you need to manually create the
        [`AIToolSpecification`][conatus.models.inputs_outputs.prompt.AIToolSpecification]
        objects and pass them to the `tools` argument.

    ## Giving an output schema

    You can also pass an output schema with `output_schema`.
    This communicates to the AI provider that we want the response to follow
    that format.

    If such an output schema is provided, the [`BaseAIModel`
    ][conatus.models.base.BaseAIModel] will create an [`AIResponse`
    ][conatus.models.inputs_outputs.response.AIResponse] object with the
    output schema type as a generic.

    Under the hood, we use a [`TypeAdapter`][pydantic.TypeAdapter] to
    convert the output schema to a Pydantic model. Most JSON-serializable
    types (including [`TypedDict`][typing.TypedDict]s, [`BaseModel`
    ][pydantic.BaseModel]s, [`dataclass`][dataclasses.dataclass], etc.)
    should work flawlessly.

    There are limitations to how [`BaseAIModel`
    ][conatus.models.base.BaseAIModel] classes and [`AIResponse`
    ][conatus.models.inputs_outputs.response.AIResponse] deal with output
    schemas.

    # Examples

    ## Initialize from a simple string

    ```python
    from conatus import AIPrompt
    from conatus.models import OpenAIModel

    prompt = AIPrompt(
        "What did Kurt Gödel say about the US Constitution?"
    )
    response = OpenAIModel().call(prompt)
    ```

    ## Initialize from a list of messages

    ```python
    from conatus import AIPrompt
    from conatus.models import OpenAIModel
    from conatus.models.inputs_outputs.messages import (
        SystemAIMessage,
        UserAIMessage,
        AssistantAIMessage,
    )

    prompt = AIPrompt(
        system=SystemAIMessage(content="You answer very tersely."),
        messages=[
            UserAIMessage(content="quick, give me a number"),
            AssistantAIMessage(content="1234"),
        ]
    )
    response = OpenAIModel().call(prompt)
    ```

    ## Initialize with a list of actions

    ```python
    from conatus import action, AIPrompt
    from conatus.models import OpenAIModel

    @action
    def add_two_numbers(a: int, b: int) -> int:
        return a + b

    prompt = AIPrompt(
        user="what is the sum of 2222 and 3828?",
        system="Use the tools if necessary",
        actions=[add_two_numbers],
    )

    response = OpenAIModel().call(prompt)
    print(response.tool_calls)  # (1)!
    # {"name": "add_two_numbers", "arguments": {"a": 2222, "b": 3828}}
    ```

    1. The printed response will actually look a little different in
       the real world, but you get the idea.

    ## Initialize with an expected output schema

    ```python
    from typing import TypedDict
    from conatus import AIPrompt
    from conatus.models import OpenAIModel

    class CoffeeShopMenuItem(TypedDict):
        name: str
        price: float

    prompt = AIPrompt(
        user="generate a fake coffee shop menu",
        output_schema=list[CoffeeShopMenuItem],
    )

    response = OpenAIModel().call(prompt)
    print(response.structured_output)  # (1)!
    # > [{"name": "Latte", "price": 3.50}, ...]
    ```

    ## Make a distinction between previous and new messages

    ```python
    from conatus import AIPrompt
    from conatus.models import OpenAIModel, AnthropicAIModel
    from conatus.models.inputs_outputs.messages import (
        UserAIMessage,
        AssistantAIMessage,
    )

    prompt = AIPrompt(
        system="Use the tools if necessary",
        previous_messages=[
            UserAIMessage(content="quick, give me a number"),
            AssistantAIMessage(content="1234"),
        ],
        new_messages=[
            UserAIMessage(content="what is the sum of 2222 and 3828?"),
        ],
        previous_messages_id="rs_1234567890",
    )

    # In this case, because OpenAI's Responses API allows sending only the
    # new messages, we can use the previous messages ID to link to the
    # previous response.
    response = OpenAIModel().call(prompt)

    # But in the case of Anthropic, we have to send the previous messages
    # directly.
    response = AnthropicAIModel().call(prompt)
    ```

    Args:
        user: The user message. Note that if you also pass a list of
            `messages`, a [`ValueError`][ValueError] will be raised.
        system: The developer (or system) message. You can pass either
            a string or a `SystemAIMessage`.
        messages: The messages. Note that it does not include the system
            message, which should be passed separately with the `system`
            argument.
        previous_messages: The previous messages, or conversation history.
        new_messages: The new messages.
        tools: The tool specifications.
        actions: The actions. If you pass both `actions` and `tools`,
            the actions will be converted to tools and merged with `tools`.
        computer_use_config: The configuration for the computer use mode.
        previous_messages_id: The ID of the last response.
        output_schema: The schema of the payload of the AI prompt.

    Raises:
        ValueError: If mutually exclusive arguments are passed together
            (for instance, both a user prompt and a list of messages), or
            if `new_messages` is passed without `previous_messages` or
            `previous_messages_id`.
    """
    self.system_message = AIPrompt._make_system_message(system)
    self.computer_use_config = computer_use_config
    self.output_schema = output_schema
    self.output_schema_type_adapter = (
        TypeAdapter(output_schema) if output_schema else None
    )

    # Convert actions and tools
    converted_tools: list[AIToolSpecification] | None = None
    if actions is not None:
        from conatus.actions.starter_packs import convert_to_actions

        converted_actions = convert_to_actions(raw_actions=actions)
        converted_tools = AIPrompt._tools_from_actions(converted_actions)
    if tools is not None:
        converted_tools = (
            [*converted_tools, *tools] if converted_tools else list(tools)
        )
    self.tools = converted_tools

    # Handle case 1
    if user is not None:
        non_none_args = [
            (messages, "messages"),
            (new_messages, "new_messages"),
            (previous_messages, "previous_messages"),
            (previous_messages_id, "previous_messages_id"),
        ]
        self._assert_all_are_none("user", *non_none_args)
        self.new_messages = [UserAIMessage(content=user)]
        self.previous_messages, self.previous_messages_id = None, None

    # Handle case 2
    elif messages is not None:
        non_none_args = [
            (new_messages, "new_messages"),
            (previous_messages, "previous_messages"),
            (previous_messages_id, "previous_messages_id"),
        ]
        self._assert_all_are_none("messages", *non_none_args)
        self.previous_messages, self.previous_messages_id = None, None
        self.new_messages = messages

    # Handle case 3
    elif new_messages is not None:
        if previous_messages is None and previous_messages_id is None:
            msg = (
                "Passing a `new_messages` argument implies passing a "
                "`previous_messages` or `previous_messages_id` argument."
            )
            raise ValueError(msg)
        self.previous_messages = previous_messages
        self.previous_messages_id = previous_messages_id
        self.new_messages = new_messages

    # Case 4: The user does not pass anything.
    else:
        self.previous_messages, self.previous_messages_id = None, None
        self.new_messages = []

requires_stateful_api property

requires_stateful_api: bool

Whether the AI prompt requires a stateful API.

This is the case if the AI prompt carries a previous messages ID (previous_messages_id) but no list of previous messages.

RETURNS DESCRIPTION
bool

Whether the AI prompt requires a stateful API.
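
For example, a provider integration might guard on this property before calling a purely stateless API. This is a sketch; supports_stateful_api is an assumed flag, not part of the documented interface.

# Hedged sketch: bail out when the prompt can only be resolved through
# a stateful API (it has a previous_messages_id but no message history).
if prompt.requires_stateful_api and not supports_stateful_api:
    msg = "Prompt references a previous response ID but carries no messages."
    raise ValueError(msg)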

messages property

The messages of the AI prompt.

Note that it does not include the system message.

system_message_as_str property

system_message_as_str: str | None

The text of the system message.

RETURNS DESCRIPTION
str | None

The text of the system message.

all_text property

all_text: str

Get all the text from the AI prompt.

RETURNS DESCRIPTION
str

The text from the AI prompt.

tools_to_markdown

tools_to_markdown() -> str

Get the tools of the AI prompt in markdown format.

RETURNS DESCRIPTION
str

The tools of the AI prompt in markdown format.

Source code in conatus/models/inputs_outputs/prompt.py
def tools_to_markdown(self) -> str:
    """Get the tools of the AI prompt in markdown format.

    Returns:
        The tools of the AI prompt in markdown format.
    """
    if self.tools is None:
        return ""
    return "\n\n## Tools\n\n" + "\n".join(
        f"### {tool.name}\n\n```json\n"
        + str(tool.json_schema_pydantic_model.model_json_schema())
        + "\n```\n\n"
        for tool in self.tools
    )

output_schema_to_markdown

output_schema_to_markdown() -> str

Get the output schema of the AI prompt in markdown format.

RETURNS DESCRIPTION
str

The output schema of the AI prompt in markdown format.

Source code in conatus/models/inputs_outputs/prompt.py
def output_schema_to_markdown(self) -> str:
    """Get the output schema of the AI prompt in markdown format.

    Returns:
        The output schema of the AI prompt in markdown format.
    """
    if self.output_schema_type_adapter is None:
        return ""
    return "\n\n## Output schema\n\n" + str(
        self.output_schema_type_adapter.json_schema()
    )

to_markdown

to_markdown() -> str

Get the text of the AI prompt.

The expected output is a markdown formatted string, with the following structure:

# AI prompt

## Developer message
<text>
...

## User message
<text>
...
RETURNS DESCRIPTION
str

The text of the AI prompt.

Source code in conatus/models/inputs_outputs/prompt.py
def to_markdown(self) -> str:
    """Get the text of the AI prompt.

    The expected output is a markdown formatted string, with the
    following structure:

    ```txt
    # AI prompt

    ## Developer message
    <text>
    ...

    ## User message
    <text>
    ...
    ```

    Returns:
        The text of the AI prompt.
    """
    messages = "\n\n".join(
        message.to_markdown() for message in self.messages
    )
    maybe_system_message = (
        self.system_message.to_markdown()
        if self.system_message
        else "<No system message>"
    ) + "\n\n"
    return (
        "# AI prompt\n\n"
        + maybe_system_message
        + messages
        + self.tools_to_markdown()
        + self.output_schema_to_markdown()
    )

remove_image_content_parts staticmethod

remove_image_content_parts(
    messages: Iterable[ConversationAIMessage],
) -> list[ConversationAIMessage]

Remove the image content parts from the messages.

RETURNS DESCRIPTION
list[ConversationAIMessage]

The list of messages without image content parts.
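
A typical use (a sketch) is stripping screenshots from a long history before re-sending it to a text-only model:

# Drop image parts (and text parts linked to an image) from the history,
# then rebuild a prompt from what remains.
text_only_history = AIPrompt.remove_image_content_parts(
    response.conversation_history
)
prompt = AIPrompt(messages=text_only_history)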

Source code in conatus/models/inputs_outputs/prompt.py
@staticmethod
def remove_image_content_parts(
    messages: Iterable[ConversationAIMessage],
) -> list[ConversationAIMessage]:
    """Remove the image content parts from the messages.

    Returns:
        The list of messages without image content parts.
    """
    new_messages: list[ConversationAIMessage] = []
    for message in messages:
        # We only care about situations where the message is a user message
        # with content parts.
        if not isinstance(message, UserAIMessage) or isinstance(
            message.content, str
        ):
            new_messages.append(message)
        else:
            # We skip the image content parts or text parts that are linked
            # to an image.
            content_parts: list[UserAIMessageContentPart] = [
                content_part
                for content_part in message.content
                if not (
                    isinstance(content_part, UserAIMessageContentImagePart)
                    or content_part.linked_to_image
                )
            ]
            if len(content_parts) > 0:
                new_messages.append(
                    UserAIMessage(content=content_parts, role=message.role)
                )
    return new_messages

conatus.models.inputs_outputs.prompt.AIToolSpecification dataclass

AIToolSpecification(
    name: str,
    strict_mode: bool,
    json_schema_pydantic_model: type[BaseModel],
)

Tool specification to be passed to the AI.

name instance-attribute

name: str

The name of the tool.

strict_mode instance-attribute

strict_mode: bool

Whether the tool can be represented through a strict JSON schema mode.

json_schema_pydantic_model instance-attribute

json_schema_pydantic_model: type[BaseModel]

The JSON schema of the tool, which is a Pydantic model.

json_schema property

json_schema: type[BaseModel]

The JSON schema of the tool.

Alias for the json_schema_pydantic_model attribute.

RETURNS DESCRIPTION
type[BaseModel]

The JSON schema of the tool.

__eq__

__eq__(other: object) -> bool

Check if the tool specification is equal to another object.

RETURNS DESCRIPTION
bool

Whether the tool specification is equal to the other object.

Source code in conatus/models/inputs_outputs/prompt.py
@override
def __eq__(self, other: object) -> bool:
    """Check if the tool specification is equal to another object.

    Returns:
        Whether the tool specification is equal to the other object.
    """
    if not isinstance(other, AIToolSpecification):
        return False
    return (
        self.name == other.name
        and self.strict_mode == other.strict_mode
        and self.json_schema_pydantic_model.model_json_schema()
        == other.json_schema_pydantic_model.model_json_schema()
    )

__hash__

__hash__() -> int

Hash the tool specification.

RETURNS DESCRIPTION
int

The hash of the tool specification.

Source code in conatus/models/inputs_outputs/prompt.py
@override
def __hash__(self) -> int:
    """Hash the tool specification.

    Returns:
        The hash of the tool specification.
    """
    return (
        hash(self.name)
        ^ hash(str(self.json_schema_pydantic_model.model_json_schema()))
        ^ (1 if self.strict_mode else 0)
    )

conatus.models.inputs_outputs.response.AIResponse dataclass

AIResponse(
    prompt: AIPrompt[OutputSchemaType],
    message_received: AssistantAIMessage,
    structured_output: OutputSchemaType | None = None,
    finish_reason: FinishReasons | None = None,
    usage: CompletionUsage | None = None,
    uid: str | None = None,
    output_schema_was_converted_to_item_object: bool = False,
)

Bases: Generic[OutputSchemaType]

Base structure for the AI response.

This is the data structure meant for later processing.

Note the new_conversation_history method, which adds the message_received to the messages_sent field.

prompt instance-attribute

The prompt that was used to create the response.

This is the full AIPrompt object that was passed to the model; convenience properties such as messages_sent are derived from it.

message_received instance-attribute

message_received: AssistantAIMessage

The message received from the AI.

Note that, for now, we only pick one 'choice' from the AI response. Some APIs offer the ability to look at multiple choices, but we do not support that yet.

structured_output class-attribute instance-attribute

structured_output: OutputSchemaType | None = None

The structured output of the AI response, if any.

It can have three possible values:

  • None: No structured output was found.
  • OutputSchemaType : The structured output was found in the text of the message received.
  • str: No output schema was provided, so we just return the text of the message received.

If you want a value that is never None (either the structured output or the text of the message received), use the result property.
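
For example (a sketch, reusing OpenAIModel from the examples above):

response = OpenAIModel().call(prompt)
# `result` is the structured output if parsing succeeded, otherwise the
# text of the message received.
value = response.result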

finish_reason class-attribute instance-attribute

finish_reason: FinishReasons | None = None

The reason the model stopped generating tokens.

usage class-attribute instance-attribute

usage: CompletionUsage | None = None

The usage statistics of the AI response.

uid class-attribute instance-attribute

uid: str | None = None

The unique identifier of the AI response, as given by the AI provider.

output_schema_was_converted_to_item_object class-attribute instance-attribute

output_schema_was_converted_to_item_object: bool = False

Whether the output schema was converted to an item object.

messages_sent property

messages_sent: list[AIMessage]

The messages sent to the AI.

This is the same as the AIPrompt.messages field in the AIPrompt that was used to create the response. This is purely a convenience property.

tool_calls property

Get all the tool calls from the assistant message.

This is different from the tool_call_content_parts property, which is implemented in some of our subclasses, and which returns all the tool calls in our internal representation. This instead returns each tool call under the format (tool_name, {arg_name: arg_value}).

RETURNS DESCRIPTION
list[AIToolCall | ComputerUseAction]

The tool calls.

tool_call_content_parts_local_execution property

tool_call_content_parts_local_execution: list[
    AIToolCall | ComputerUseAction
]

Get all the tool calls requiring local execution.

This is different from the tool_call_content_parts property, which returns all the tool calls in our internal representation. This instead returns each tool call under the format (tool_name, {arg_name: arg_value}).

RETURNS DESCRIPTION
list[AIToolCall | ComputerUseAction]

The tool calls.

code_snippets property

code_snippets: list[str]

Get all the code snippets from the assistant message.

We filter out the code snippets that are not Python (we assume that if the language is not specified, it is Python).

RETURNS DESCRIPTION
list[str]

The code snippets.

conversation_history property

conversation_history: list[ConversationAIMessage]

The conversation history, including prompt and message received.

all_text property

all_text: str | None

Get all the text from the assistant message.

Note that this does not include the reasoning.

RETURNS DESCRIPTION
str | None

The text from the assistant message.

all_text_including_reasoning property

all_text_including_reasoning: str | None

Get all the text from the assistant message, including the reasoning.

RETURNS DESCRIPTION
str | None

The text from the assistant message.

cost property

cost: float

Get the cost of the AI response.

Note that if we cannot retrieve the total cost (for whatever reason), we return -1.

result property

Get the result of the AI response.

Unlike the structured_output attribute, this property will never return None. If the structured output is not available, we return the text of the message received.

RETURNS DESCRIPTION
OutputSchemaType | str

The result of the AI response.

__post_init__

__post_init__() -> None

Post-initialization hook.

This is where we validate the structured output.

RAISES DESCRIPTION
ValueError

If the structured output is None after validation. This should not happen.

Source code in conatus/models/inputs_outputs/response.py
def __post_init__(self: AIResponse[OutputSchemaType]) -> None:
    """Post-initialization hook.

    This is where we validate the structured output.

    Raises:
        ValueError: If the structured output is `None` after validation.
            This should not happen.
    """
    # No output schema, the structured output is the text of the message
    # received.
    if self.prompt.output_schema is None:
        self.structured_output = cast(
            "OutputSchemaType", self.message_received.all_text
        )
        return

    # If we encounter this flag, it means that the output schema was
    # converted to an item object.
    # See `conatus.actions.json_schema.generate_openai_json_schema`.
    if self.output_schema_was_converted_to_item_object:

        class ItemObject(TypedDict):
            item: self.prompt.output_schema  # type: ignore[name-defined] # pyright: ignore[reportUnknownMemberType, reportInvalidTypeForm]

        type_adapter = TypeAdapter(ItemObject)

    else:
        type_adapter = TypeAdapter(self.prompt.output_schema)

    # The structured output can come either from the text of the message
    # received or from the tool calls.
    candidate_outputs = [
        self.message_received.all_text,
        *(
            tool_call.arguments_as_str
            for tool_call in self.tool_calls
            if (
                isinstance(tool_call, AIToolCall)
                and tool_call.could_be_structured_output
            )
        ),
    ]

    validation_was_successful = False
    structured_output: ItemObject | OutputSchemaType | None = None
    for candidate_output in candidate_outputs:
        try:
            output_as_dict = try_parse_json_arguments(
                candidate_output,
                suppress_warnings=True,
                suppress_errors=True,
            )
            structured_output = type_adapter.validate_python(output_as_dict)
            validation_was_successful = True
            break
        except (ValidationError, json.JSONDecodeError, SyntaxError):
            continue

    if not validation_was_successful:
        self.structured_output = None
        return

    if structured_output is None:
        msg = "The structured output was None after validation."
        raise ValueError(msg)

    if self.output_schema_was_converted_to_item_object:
        structured_output = cast("ItemObject", structured_output)
        self.structured_output = structured_output["item"]
    else:
        self.structured_output = cast("OutputSchemaType", structured_output)

new_conversation_history

new_conversation_history(
    *, add_tool_response_if_tool_call: bool = False
) -> Iterable[AIMessage]

Generate a new conversation history.

Convenience method that adds the message_received to the messages_sent field.

If you want to add a ToolResponseAIMessage after the message_received if the AI called a tool, you can set the add_tool_response_if_tool_call argument to True.

PARAMETER DESCRIPTION
add_tool_response_if_tool_call

If set to True, and if the AI called a tool in the last message, we will add a ToolResponseAIMessage with content {"success": true} at the end of the new conversation history.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Iterable[AIMessage]

The new conversation history.
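
For instance, a sketch of continuing the conversation after a tool call (assuming the prompt, model, and message classes from the earlier examples):

response = OpenAIModel().call(prompt)
# The history now ends with the assistant reply, plus a stub
# {"success": true} tool response for any tool call it made.
history = response.new_conversation_history(
    add_tool_response_if_tool_call=True
)
followup = AIPrompt(
    messages=[*history, UserAIMessage(content="Now double that result.")]
)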

Source code in conatus/models/inputs_outputs/response.py
def new_conversation_history(
    self,
    *,
    add_tool_response_if_tool_call: bool = False,
) -> Iterable[AIMessage]:
    """Generate a new conversation history.

    Convenience method that adds the [`message_received`
    ][conatus.models.inputs_outputs.response.AIResponse.message_received]
    to the [`messages_sent`
    ][conatus.models.inputs_outputs.response.AIResponse.messages_sent]
    field.

    If you want to add a [`ToolResponseAIMessage`
    ][conatus.models.inputs_outputs.messages.ToolResponseAIMessage] after
    the [`message_received`
    ][conatus.models.inputs_outputs.response.AIResponse.message_received]
    if the AI called a tool, you can set the
    `add_tool_response_if_tool_call` argument to `True`.

    Args:
        add_tool_response_if_tool_call (bool): If set to `True`, and
            if the AI called a tool in the last message, we will add a
            [`ToolResponseAIMessage`
            ][conatus.models.inputs_outputs.messages.ToolResponseAIMessage]
            with
            content `{"success": true}` at the end of the new conversation
            history.

    Returns:
        The new conversation history.
    """
    tool_response_messages: list[ToolResponseAIMessage] = []
    if add_tool_response_if_tool_call:
        tool_response_messages.extend(
            ToolResponseAIMessage(
                content={"success": True},
                tool_name=part.tool_call.name,
                tool_call_id=part.tool_call.call_id,
                success=True,
            )
            for part in self.message_received.content
            if isinstance(part, AssistantAIMessageContentToolCallPart)
        )
    return [
        *self.messages_sent,
        self.message_received,
        *tool_response_messages,
    ]

to_markdown

to_markdown(*, include_prompt: bool = True) -> str

Get the text of the AI response.

PARAMETER DESCRIPTION
include_prompt

Whether to include the prompt in the output.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
str

The text of the AI response.

Source code in conatus/models/inputs_outputs/response.py
def to_markdown(
    self,
    *,
    include_prompt: bool = True,
) -> str:
    """Get the text of the AI response.

    Args:
        include_prompt: Whether to include the prompt in the output.

    Returns:
        The text of the AI response.
    """
    prompt_text = (
        self.prompt.to_markdown() + "\n\n" + "# Message received\n\n"
    )
    response_text = self.message_received.to_markdown()
    return prompt_text + response_text if include_prompt else response_text

conatus.models.inputs_outputs.response.FinishReasons module-attribute

FinishReasons = Literal[
    "stop",
    "length",
    "tool_calls",
    "content_filter",
    "timeout",
]

Possible reasons for a model to finish generating tokens.

The precise wording varies from provider to provider, so this is meant to be a common interface.
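
For example (a sketch):

if response.finish_reason == "length":
    # The reply was cut off by the token limit; consider retrying with a
    # higher limit or a shorter prompt.
    ...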

conatus.models.inputs_outputs.prompt.OutputSchemaType module-attribute

OutputSchemaType = TypeVar('OutputSchemaType', default=str)

The type of the payload of the AI prompt. Used for structured outputs.

Defaults to str because we can always extract the text from the response.