Base AI interface¶
For developers only

This section is addressed to developers who want to extend the `BaseAIInterface` class. End users should instead use classes such as `PlanningAIInterface` or `ExecutionAIInterface` (see AI interfaces).
The Base AI Interface "contract"¶
The general idea of an AI interface is to abstract the conversation between the agent and the LLM. You give it a list of messages, and it will return a payload object, which contains the response from the LLM, the cost, the finish reason, and potentially a structured output. This is useful if you want to create a multi-step agent: you can have an AI interface for the planning step, and another one for the execution step.
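To make the "messages in, payload out" contract concrete, here is a minimal self-contained sketch. This is not the actual conatus API: the class and field names (`Message`, `Payload`, `ToyPlanningInterface`) are simplified assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for a conversation message (simplified assumption)."""
    role: str
    content: str


@dataclass
class Payload:
    """Stand-in for the payload object: response, cost, finish reason, result."""
    response: str           # the LLM's answer
    cost: float             # cost of the model call(s)
    finish_reason: str      # why the run stopped
    result: object = None   # optional structured output


class ToyPlanningInterface:
    """A toy 'planning step' interface: messages in, payload out."""

    def run(self, messages: list[Message]) -> Payload:
        # A real interface would call an LLM here; we fake a response.
        answer = f"Plan for: {messages[-1].content}"
        return Payload(response=answer, cost=0.0, finish_reason="stop")


payload = ToyPlanningInterface().run([Message("user", "book a flight")])
print(payload.finish_reason)  # stop
```

In a multi-step agent, a second interface (say, a toy execution interface) would consume the planning payload's `result` and return its own payload, so each step's cost and finish reason stay separately accounted for.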
Anatomy of the run method¶
The BaseAIInterface
class exposes two methods:
- `run`: takes a list of `ConversationAIMessage` objects and returns an `AIInterfacePayload` object.
- `arun`: the asynchronous version of `run`. It takes the same arguments as `run`.
In practice, these two functions essentially follow a loop with (roughly) the following steps:
1. The interface makes a prompt with `make_prompt`.
    - If this is the first turn of the conversation, the interface uses the `make_first_prompt` method.
    - Otherwise, it uses the `make_new_prompt` method.
2. The interface calls the model with the prompt, and gets a response.
3. That response is sent to the `should_continue` method to see if the conversation should continue.
    - If it should, the interface creates new conversation messages with `make_new_messages`. This is particularly useful if you need to generate tool response messages. These new messages are added to the conversation history, and we are back to step 1.
    - If the conversation should not continue, the interface creates a payload with the response, the cost, and the finish reason. It uses the `extract_result` method to extract the structured output from the response, if any.
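The loop above can be sketched in plain Python. This is a simplified, self-contained rendition with a stubbed model and string messages, not the real conatus implementation (which also handles async execution, streaming, logging, costs, and richer finish reasons):

```python
class ToyInterface:
    """Toy rendition of the run loop described above (names simplified)."""

    max_turns = 25

    # --- methods a subclass would normally provide -----------------------
    def make_first_prompt(self, history):
        return "first:" + history[-1]

    def make_new_prompt(self, history, previous_response):
        return "next:" + history[-1]

    def call_model(self, prompt):
        # Stubbed model: emits a tool call on the first turn, then a final answer.
        if prompt.startswith("first"):
            return {"tool_calls": ["search"]}
        return {"text": "done"}

    def should_continue(self, response):
        # Keep going as long as the model asks for tools.
        return "tool_calls" in response

    def make_new_messages(self, response):
        # E.g. tool response messages fed back into the conversation.
        return [f"tool result for {response['tool_calls'][0]}"]

    def extract_result(self, response):
        return response.get("text")

    # --- the loop itself --------------------------------------------------
    def run(self, conversation_history):
        history = list(conversation_history)
        response = None
        for turn in range(self.max_turns):
            # Step 1: build the prompt (first turn vs. later turns).
            if turn == 0:
                prompt = self.make_first_prompt(history)
            else:
                prompt = self.make_new_prompt(history, response)
            # Step 2: call the model.
            response = self.call_model(prompt)
            # Step 3: decide whether the conversation should continue.
            if not self.should_continue(response):
                break
            # Back to step 1 with the new messages appended.
            history.extend(self.make_new_messages(response))
        # Build the payload (here a plain dict instead of AIInterfacePayload).
        return {"response": response, "result": self.extract_result(response)}
```

Running `ToyInterface().run(["hi"])` performs one tool-call turn, then one final turn, and returns the extracted result.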
The methods to override¶
As a developer, you can choose to override the following methods:

- `make_prompt`: assembles the prompt for the AI model.
- `make_first_prompt`: assembles the prompt for the first turn of the conversation.
- `make_new_prompt`: assembles the prompt for a new turn of the conversation.
- `make_new_messages`: assembles the new conversation messages.
- `should_continue`: decides whether the conversation should continue.
- `extract_result`: extracts the structured output from the response.
Implement make_prompt or make_first_prompt + make_new_prompt

`make_prompt`, if not overridden, will call either `make_first_prompt` or `make_new_prompt`, depending on whether this is the first turn of the conversation. By default, these two methods raise a `NotImplementedError`.

This means you have a choice:

- Implement `make_prompt` and do not override the other two methods. You will then have to determine whether this is the first turn of the conversation, and assemble the prompt accordingly.
- Implement `make_first_prompt` and `make_new_prompt`, and do not override `make_prompt`.
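The two options look like this under a toy base class. The signatures are deliberately simplified assumptions (the real methods take keyword-only arguments such as `conversation_history`); only the dispatch pattern is the point:

```python
class ToyBase:
    """Simplified stand-in for BaseAIInterface's prompt dispatch."""

    def make_prompt(self, history, *, first_turn):
        # Default behavior: dispatch to the two specialized methods.
        if first_turn:
            return self.make_first_prompt(history)
        return self.make_new_prompt(history)

    def make_first_prompt(self, history):
        raise NotImplementedError

    def make_new_prompt(self, history):
        raise NotImplementedError


# Option 1: override make_prompt and handle both cases yourself.
class SinglePromptInterface(ToyBase):
    def make_prompt(self, history, *, first_turn):
        prefix = "You are starting a task." if first_turn else "Continue the task."
        return f"{prefix} History length: {len(history)}"


# Option 2: override the two specialized methods; keep make_prompt as-is.
class SplitPromptInterface(ToyBase):
    def make_first_prompt(self, history):
        return "You are starting a task."

    def make_new_prompt(self, history):
        return "Continue the task."
```

Option 2 is usually cleaner when the first-turn prompt and the follow-up prompt differ substantially; option 1 is convenient when they share most of their structure.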
The AI Interface Payload¶
The `AIInterfacePayload` class contains the response from the LLM, the cost, and the finish reason. Through its `result` attribute, it can also carry a structured output.
conatus.agents.ai_interfaces.base.AIInterfacePayload
dataclass
¶
AIInterfacePayload(
cost: float,
finish_reason: AIInterfaceRunFinishReason | None,
result: Result,
response: AIResponse[Result] | AIResponse,
state: RuntimeState,
)
Bases: Generic[Result]
The payload of a run of a simple AI interface.
finish_reason
instance-attribute
¶
The reason the run finished.
response
instance-attribute
¶
response: AIResponse[Result] | AIResponse
The response from the AI interface.
The AI Interface base class¶
conatus.agents.ai_interfaces.base.BaseAIInterface
¶
BaseAIInterface(
*,
task_config: ConsolidatedTaskConfig | None = None,
run_writer: FileWriter | None = None,
model_config: ModelConfig | ModelConfigTD | None = None,
model_type: ModelType | None = None,
interface_name: str | None = None,
max_turns: int | None = None,
runtime: Runtime | None = None,
hide_runtime_from_ai: bool | None = None,
actions: (
Collection[Action]
| Sequence[RawAction]
| ActionStarterPack
| None
) = None,
computer_use_mode: bool | None = None,
only_keep_one_computer_use_environment: (
bool | None
) = None,
stop_if_no_tool_calls: bool = True,
**kwargs: ParamType
)
Bases: Generic[Result]
Base class for AI interfaces that are not linked to tasks or runtimes.
| PARAMETER | DESCRIPTION |
|---|---|
task_config
|
The task configuration of the agent.
TYPE:
|
run_writer
|
The writer used to log run information.
TYPE:
|
model_config
|
The configuration for the model.
TYPE:
|
model_type
|
The type of model to use for the AI interface. If `None`, the model type will be inferred from the model class.
TYPE:
|
interface_name
|
The name of the AI interface. Will otherwise be the snake case of the class name.
TYPE:
|
max_turns
|
The maximum number of turns the AI interface can take. Defaults to 25, or to the value of the class attribute if one is set.
TYPE:
|
runtime
|
The runtime of the agent.
TYPE:
|
hide_runtime_from_ai
|
Whether to hide the runtime from the AI. Defaults to `True`.
TYPE:
|
actions
|
The actions of the agent. Note that this is only used if the runtime is not provided.
TYPE:
|
computer_use_mode
|
Whether to use computer use mode. Defaults to `False`.
TYPE:
|
only_keep_one_computer_use_environment
|
Whether to only keep one computer use environment. Defaults to `True`.
TYPE:
|
stop_if_no_tool_calls
|
Whether to stop the run if no tool calls are made. Defaults to `True`.
TYPE:
|
kwargs
|
Additional parameters for the AI interface.
TYPE:
|
Source code in conatus/agents/ai_interfaces/base.py
stream
instance-attribute
¶
stream: bool = (
cls_stream_val if cls_stream_val is not None else stream
)
Whether the AI interface streams the response.
If True, the AI interface will stream the response to the user.
model
instance-attribute
¶
model: BaseAIModel = retrieve_model(
task_config=task_config,
model_config=model_config,
model_type=model_type,
)
The model to use for the AI interface.
run_writer
instance-attribute
¶
run_writer: FileWriter | None = run_writer
The writer to use for the AI interface.
If None, no logging will be done.
interface_name
instance-attribute
¶
interface_name: str = (
interface_name
or getattr(self, "interface_name", None)
or to_snake_case(__name__)
)
The name of the AI interface as written in the log files.
hide_runtime_from_ai
instance-attribute
¶
hide_runtime_from_ai: bool = (
hide_runtime_from_ai
if hide_runtime_from_ai is not None
else getattr(self, "hide_runtime_from_ai", True)
)
Whether to hide the runtime from the AI.
If True, the runtime will not be shown to the AI.
runtime
instance-attribute
¶
runtime: Runtime = runtime or Runtime(
actions=actions or [], hide_from_ai=hide_runtime_from_ai
)
The runtime of the agent.
prompt_history
instance-attribute
¶
The history of prompts, which can be used to track the conversation.
response_history
instance-attribute
¶
response_history: list[AIResponse[Result] | AIResponse] = []
The history of responses, which can be used to track the conversation.
max_turns
instance-attribute
¶
The maximum number of turns the AI interface can take.
Defaults to 25, but can be overridden by the subclass or the user in the constructor.
model_type
instance-attribute
¶
model_type: ModelType | None = model_type
The type of model to use for the AI interface.
If None, the model type will be inferred from the model class.
finish_reason
instance-attribute
¶
The reason the run finished.
If None, the run is still ongoing.
conversation_history
instance-attribute
¶
conversation_history: list[ConversationAIMessage] = []
The history of the conversation.
Note that this can include messages generated outside of the instance, if the user wants to pass them to the AI interface.
conversation_history_id
instance-attribute
¶
conversation_history_id: str | None = None
The ID of the conversation history.
This is used to link the conversation history to the response. Useful when calling stateful AI interfaces.
system_message
instance-attribute
¶
system_message: SystemAIMessage | None = None
The system message of the conversation.
If the user passes a system message with run
, it will be
stored here. At every turn of the conversation, the system message will be
updated based on what's in the generated prompt.
computer_use_mode
instance-attribute
¶
computer_use_mode: bool = (
computer_use_mode
if computer_use_mode is not None
else getattr(self, "computer_use_mode", False)
)
Whether the AI interface uses computer use actions.
If this is True, the interface will do the following:
- It will only show the tool calls that are compatible with the computer-use action.
- It will use a computer use model (`openai:computer-use-preview` for now).
- It will strive to keep the reasoning traces and pass them back to the conversation.
only_keep_one_computer_use_environment
instance-attribute
¶
only_keep_one_computer_use_environment: bool = (
only_keep_one_computer_use_environment
if only_keep_one_computer_use_environment is not None
else getattr(
self, "only_keep_one_computer_use_environment", True
)
)
Whether to only keep one computer use environment.
If this is True, the interface will only keep one computer use
environment at a time. This means that, if for example, multiple browsers
are open, the interface will only use the one that was created last.
stop_if_no_tool_calls
instance-attribute
¶
stop_if_no_tool_calls: bool = stop_if_no_tool_calls
Whether to stop the run if no tool calls are made.
If True, the run will stop if no tool calls are made. Otherwise, we send
an empty tool response to the AI.
latest_response
property
¶
latest_response: AIResponse[Result] | AIResponse | None
The latest response.
retrieve_model
staticmethod
¶
retrieve_model(
task_config: ConsolidatedTaskConfig,
model_config: ModelConfig | ModelConfigTD | None = None,
model_type: ModelType | None = None,
) -> BaseAIModel
Retrieve the model for the AI interface.
A user may want to override the default in the `TaskConfig` object, so that they can specify a different model for the AI interface.
This method is tasked with retrieving the model, using the following order of precedence:
- The model class retrieved from `task_config.preferred_model`.
- The model class retrieved from `task_config.preferred_provider`.
It will then instantiate the model with the given configuration.
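That order of precedence can be sketched as follows. The helper name (`resolve_model_name`) and the string model identifiers are hypothetical; the real method resolves actual model classes and instantiates them with the model configuration:

```python
def resolve_model_name(preferred_model, preferred_provider, fallback="default-model"):
    """Toy sketch of the precedence: explicit model, then provider, then fallback."""
    # 1. An explicitly preferred model wins outright.
    if preferred_model is not None:
        return preferred_model
    # 2. Otherwise, fall back to the preferred provider's default model.
    if preferred_provider is not None:
        return f"{preferred_provider}:default"
    # 3. Otherwise, use a library-level fallback.
    return fallback
```

For example, with both values set, the preferred model takes priority over the preferred provider.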
| PARAMETER | DESCRIPTION |
|---|---|
task_config
|
The task configuration of the agent.
TYPE:
|
model_config
|
The configuration for the model.
TYPE:
|
model_type
|
The type of model to use. This is useful if you need to differentiate between different types of models. Ignored if the user provides a preferred model.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
BaseAIModel
|
The model to use for the AI interface. |
Source code in conatus/agents/ai_interfaces/base.py
make_prompt
¶
make_prompt(
*,
conversation_history: Collection[
ConversationAIMessage
] = (),
conversation_history_id: str | None = None,
conversation_history_system_message: (
SystemAIMessage | None
) = None,
previous_response: (
AIResponse[Result] | AIResponse | None
) = None,
new_messages: list[ConversationAIMessage] | None = None
) -> AIPrompt | AIPrompt[Result]
Make the prompt for the AI interface.
You should override this method. If you don't, you should then
override the make_first_prompt
and make_new_prompt
methods.
| PARAMETER | DESCRIPTION |
|---|---|
conversation_history
|
The conversation history.
TYPE:
|
conversation_history_id
|
The ID of the conversation history.
TYPE:
|
conversation_history_system_message
|
The system message of the conversation history.
TYPE:
|
previous_response
|
The previous response from the AI. Normally, if you just care about the conversation history, you can ignore this. It's useful to retrieve things like structured outputs, cost, etc.
TYPE:
|
new_messages
|
The new messages to send to the AI. Similarly, they should naturally be added to the conversation history, but if there is specific processing you need to do, this parameter lets you identify them specifically.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
AIPrompt | AIPrompt[Result]
|
The prompt for the AI interface. |
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If no previous response is found. |
Source code in conatus/agents/ai_interfaces/base.py
make_first_prompt
¶
make_first_prompt(
*,
conversation_history: list[ConversationAIMessage],
conversation_history_id: str | None,
conversation_history_system_message: (
SystemAIMessage | None
)
) -> AIPrompt | AIPrompt[Result]
Make the first prompt for the AI interface.
This method is used to make the first prompt for the AI interface.
It is called by the BaseAIInterface.run
method.
| PARAMETER | DESCRIPTION |
|---|---|
conversation_history
|
The conversation history.
TYPE:
|
conversation_history_id
|
The ID of the conversation history.
TYPE:
|
conversation_history_system_message
|
The system message of the conversation history.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
AIPrompt | AIPrompt[Result]
|
The first prompt for the AI interface. |
Source code in conatus/agents/ai_interfaces/base.py
make_new_prompt
¶
make_new_prompt(
*,
conversation_history: list[ConversationAIMessage],
conversation_history_id: str | None,
conversation_history_system_message: (
SystemAIMessage | None
),
previous_response: AIResponse[Result] | AIResponse,
new_messages: list[ConversationAIMessage]
) -> AIPrompt | AIPrompt[Result]
Make a new prompt for the AI interface.
This method is used to make a new prompt for the AI interface.
It is called by the BaseAIInterface.run
method.
| PARAMETER | DESCRIPTION |
|---|---|
conversation_history
|
The conversation history.
TYPE:
|
conversation_history_id
|
The ID of the conversation history.
TYPE:
|
conversation_history_system_message
|
The system message of the conversation history.
TYPE:
|
previous_response
|
The previous response received from the AI. Normally, if you just care about the conversation history, you can ignore this. It's useful to retrieve things like structured outputs, cost, etc.
TYPE:
|
new_messages
|
The new messages to send to the AI. Similarly, they should naturally be added to the conversation history, but if there is specific processing you need to do, this parameter lets you identify them specifically.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
AIPrompt | AIPrompt[Result]
|
The new prompt for the AI interface. |
Source code in conatus/agents/ai_interfaces/base.py
make_new_messages
async
¶
make_new_messages(
response: AIResponse[Result] | AIResponse,
) -> list[ConversationAIMessage]
Make the new messages after a turn of the conversation.
This is particularly useful when dealing with a Runtime
, as it allows the interface to generate
tool response messages.
| PARAMETER | DESCRIPTION |
|---|---|
response
|
The response from the AI.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[ConversationAIMessage]
|
The new messages after a turn of the conversation. (Do not include the response from the AI, as it will be automatically added to the conversation history.) |
Source code in conatus/agents/ai_interfaces/base.py
extract_result
¶
extract_result(
*,
latest_response: AIResponse[Result] | AIResponse,
latest_prompt: AIPrompt[Result] | AIPrompt,
finish_reason: AIInterfaceRunFinishReason | None
) -> Result
Extract the result of the AI interface.
| PARAMETER | DESCRIPTION |
|---|---|
latest_response
|
The latest response received from the AI.
TYPE:
|
latest_prompt
|
The latest prompt sent to the AI. |
finish_reason
|
The reason the run finished. Might be `None` if the run is still ongoing.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Result
|
The processed result. |
Source code in conatus/agents/ai_interfaces/base.py
should_continue
¶
should_continue(
response: AIResponse[Result] | AIResponse,
) -> bool
Whether the AI interface should continue.
By default, the AI interface will stop after twenty-five AI model calls, but this method can be overridden to change this behavior.
| PARAMETER | DESCRIPTION |
|---|---|
response
|
The response from the AI.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
bool
|
Whether the AI interface should continue. |
Source code in conatus/agents/ai_interfaces/base.py
run
¶
run(
*,
conversation_history: Collection[
ConversationAIMessage
] = (),
conversation_history_id: str | None = None,
conversation_history_system_message: (
SystemAIMessage | None
) = None
) -> AIInterfacePayload[Result]
Run the AI interface.
This method abstracts away the logic of interacting with the
AI models, and returns a
AIInterfacePayload
from which the Agent can extract
the result, the cost of the interactions, etc.
If the user wishes to include a previous conversation, they can
provide a list of ConversationAIMessage
objects, and / or a string that will identify the conversation.
| PARAMETER | DESCRIPTION |
|---|---|
conversation_history
|
The conversation history, as a list of `ConversationAIMessage` objects.
TYPE:
|
conversation_history_id
|
The ID of the conversation history. This should be used when dealing with stateful AI interfaces.
TYPE:
|
conversation_history_system_message
|
The system message of the conversation history.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
AIInterfacePayload[Result]
|
The payload of the run. |
Source code in conatus/agents/ai_interfaces/base.py
arun
async
¶
arun(
conversation_history: Collection[
ConversationAIMessage
] = (),
conversation_history_id: str | None = None,
conversation_history_system_message: (
SystemAIMessage | None
) = None,
) -> AIInterfacePayload[Result]
Run the AI interface asynchronously.
For more information, see BaseAIInterface.run
.
| PARAMETER | DESCRIPTION |
|---|---|
conversation_history
|
The conversation history, as a list of `ConversationAIMessage` objects.
TYPE:
|
conversation_history_id
|
The ID of the conversation history. This should be used when dealing with stateful AI interfaces.
TYPE:
|
conversation_history_system_message
|
The system message of the conversation history.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
AIInterfacePayload[Result]
|
The response from the AI interface. |
Source code in conatus/agents/ai_interfaces/base.py
get_tool_specifications
¶
get_tool_specifications(
*, computer_use_mode: bool | None = None
) -> list[AIToolSpecification]
Get the tool specifications for the agent.
The tool specifications are extracted from the available actions in the runtime.
Note that if computer_use_mode is True, the tool specifications
skip actions that conflict with the computer use mode, such as
browser actions.
| PARAMETER | DESCRIPTION |
|---|---|
computer_use_mode
|
Whether to include the computer use actions.
If
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[AIToolSpecification]
|
A list of tool specifications. |
Source code in conatus/agents/ai_interfaces/base.py
generate_prompt_response_callbacks
¶
generate_prompt_response_callbacks(
*, response_callback_expects_chunks: bool = False
) -> tuple[Callable[[str], None], Callable[[str], None]]
Generate the callbacks for the prompt and response.
| PARAMETER | DESCRIPTION |
|---|---|
response_callback_expects_chunks
|
Whether the response callback expects chunks.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Callable[[str], None]
|
The callback for the prompt. |
Callable[[str], None]
|
The callback for the response. If
|
Source code in conatus/agents/ai_interfaces/base.py
Hooking up the AI interface to a Task¶
So far, we have seen the BaseAIInterface
class, which is the base class for all AI interfaces.
The BaseAIInterfaceWithTask
class is the base class for all AI interfaces that are linked to a
Task.
There are a few differences between the two classes:
- The `BaseAIInterfaceWithTask` expects a `TaskDefinition` and a `TaskConfig` object.
- It exposes a long list of convenience methods to generate XML-like prompts. This is very useful to communicate with the LLM about the task and the variables. (See more below.)
- While the `BaseAIInterface` will stop the run if no tool calls are made, the `BaseAIInterfaceWithTask` will not stop the run by default; instead, it essentially coerces the AI into properly calling the `terminate` action.
XML Convenience methods¶
The BaseAIInterfaceWithTask
class provides a few convenience methods to generate XML-like prompts.
Not necessarily valid XML
These methods are not guaranteed to return valid XML. They are meant to be used as a guide to assemble the prompt.
These methods are:
- `get_docstrings_for_all_actions_xml`: Get the docstrings for all actions in XML format.
- `get_task_inputs_xml`: Get the task inputs in XML format.
- `get_task_outputs_xml`: Get the task outputs in XML format.
- `get_task_description_xml`: Get the task description in XML format.
- `get_task_definition_xml`: Get the task definition in XML format (which is really just the task description, the task inputs, and the task outputs).
- `get_all_variables_xml`: Get all variables in XML format.
- `get_steps_so_far_xml`: Get the steps so far in XML format (including pseudocode).
- `get_last_step_xml`: Get the last step in XML format.
You can use the custom_xml
method to generate XML-like prompts for your own needs.
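A minimal sketch of what such a helper might look like, based on the `text="Hello, world!"` / `tag="greeting"` example from the reference below. The exact formatting of the real `custom_xml` may differ; as noted above, these helpers do not guarantee valid XML (content is not escaped):

```python
def custom_xml(text: str, tag: str) -> str:
    """Wrap text in an XML-like tag for prompt assembly (illustrative sketch)."""
    # No escaping is performed: the output is a structured prompt block,
    # not guaranteed-valid XML.
    return f"<{tag}>\n{text}\n</{tag}>"


print(custom_xml("Hello, world!", "greeting"))
# <greeting>
# Hello, world!
# </greeting>
```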
Other convenience methods¶
Other convenience methods are:
- `get_one_variable_repr`: Get a string representation of a single variable.
- `get_tool_specifications`: Get the tool specifications from the `Runtime`, with some filtering if `computer_use_mode` is enabled.
conatus.agents.ai_interfaces.base.BaseAIInterfaceWithTask
¶
BaseAIInterfaceWithTask(
*,
runtime: Runtime,
task_definition: TaskDefinition,
task_config: ConsolidatedTaskConfig,
run_writer: FileWriter | None = None,
model_config: ModelConfig | ModelConfigTD | None = None,
model_type: ModelType | None = None,
computer_use_mode: bool | None = None,
only_keep_one_computer_use_environment: (
bool | None
) = None,
interface_name: str | None = None,
max_turns: int | None = None,
stop_if_no_tool_calls: bool = False,
**kwargs: ParamType
)
Bases: BaseAIInterface[Result], ABC
Base class for AI interfaces that are linked to tasks.
| PARAMETER | DESCRIPTION |
|---|---|
runtime
|
The runtime state of the agent.
TYPE:
|
task_definition
|
The task definition of the agent.
TYPE:
|
task_config
|
The task configuration of the agent.
TYPE:
|
run_writer
|
The writer used to log run information. If `None`, no logging will be done.
TYPE:
|
model_config
|
The configuration for the model.
TYPE:
|
model_type
|
The type of model to use for the AI interface. If `None`, the model type will be inferred from the model class.
TYPE:
|
computer_use_mode
|
Whether to use computer use mode. Defaults to `False`.
TYPE:
|
only_keep_one_computer_use_environment
|
Whether to only keep one computer use environment. Defaults to `True`.
TYPE:
|
interface_name
|
The name of the AI interface. Will otherwise be the snake case of the class name.
TYPE:
|
max_turns
|
The maximum number of turns the AI interface can take. Defaults to 1, but can be overridden by the subclass or the user in the constructor.
TYPE:
|
stop_if_no_tool_calls
|
Whether to stop the run if no tool calls are made. Defaults to `False`.
TYPE:
|
kwargs
|
Additional parameters for the AI interface.
TYPE:
|
Source code in conatus/agents/ai_interfaces/base.py
task_definition
instance-attribute
¶
task_definition: TaskDefinition = task_definition
The task definition of the agent.
task_config
instance-attribute
¶
task_config: ConsolidatedTaskConfig = task_config
The task configuration of the agent.
filter_modified_variables
¶
filter_modified_variables(
new_messages: list[ConversationAIMessage] | None = None,
) -> list[str]
From a list of messages, extract the names of the modified variables.
| PARAMETER | DESCRIPTION |
|---|---|
new_messages
|
The new messages.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[str]
|
The names of the modified variables. |
Source code in conatus/agents/ai_interfaces/base.py
get_docstrings_for_all_actions_xml
¶
Get the docstrings for the task's actions in XML format.
You should retrieve something like the following:
<actions>
Here all the actions that can be used to perform the task.
[... More instructions ...]
Name: 'print_hello'
Description
-----------
Print hello.
Parameters
----------
user_name : str
The name of the user to print hello to.
[... More actions ...]
</actions>
| PARAMETER | DESCRIPTION |
|---|---|
computer_use_mode
|
Whether to include the computer use actions.
If
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str
|
The docstrings for the actions. |
Source code in conatus/agents/ai_interfaces/base.py
get_task_inputs_xml
¶
get_task_inputs_xml() -> str
Get the task definition inputs in XML format.
Note that this is not the inputs of the task, but the expected
inputs of the task. In other words, we're not displaying the actual
inputs of the task, but the inputs that the task expects. The actual
inputs should be displayed in the variables section.
You should retrieve something like the following:
<task_inputs>
* 'user_name' of type '<class 'str'>' (The name of the user)
[... More inputs ...]
</task_inputs>
| RETURNS | DESCRIPTION |
|---|---|
str
|
The task inputs in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_starting_variables_xml
¶
get_starting_variables_xml(
*, include_images: Literal[True]
) -> list[UserAIMessageContentPart]
get_starting_variables_xml(
*, include_images: bool = False
) -> str | list[UserAIMessageContentPart]
Get the task definition inputs in XML format.
Note that unlike `get_task_inputs_xml`, this method will also include the starting variables in the representation.
| RETURNS | DESCRIPTION |
|---|---|
str | list[UserAIMessageContentPart]
|
The task inputs in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_task_outputs_xml
¶
get_task_outputs_xml() -> str
Get the task definition outputs in XML format.
You should retrieve something like the following:
| RETURNS | DESCRIPTION |
|---|---|
str
|
The outputs for the task. |
Source code in conatus/agents/ai_interfaces/base.py
get_task_description_xml
¶
get_task_description_xml() -> str
Get the task description in XML format.
You should retrieve something like the following:
<task_description>
This is the original task definition, as passed by the user...
[... More preamble ...]
Here is the task definition:
[... Task definition ...]
</task_description>
| RETURNS | DESCRIPTION |
|---|---|
str
|
The task description in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_task_definition_xml
¶
get_task_definition_xml() -> str
Get the task definition in XML format.
You should retrieve something like the following:
<task_definition>
<task_description>
[... Task description ...]
</task_description>
<task_inputs>
[... Task inputs ...]
</task_inputs>
<task_outputs>
[... Task outputs ...]
</task_outputs>
</task_definition>
| RETURNS | DESCRIPTION |
|---|---|
str
|
The task definition in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_one_variable_repr
staticmethod
¶
get_one_variable_repr(
variable: RuntimeVariable,
*,
get_earliest_value: bool = False
) -> str
Get the representation of a variable.
We only return the text representation of the variable.
You should retrieve something like the following:
| PARAMETER | DESCRIPTION |
|---|---|
variable
|
The variable to get the representation of.
TYPE:
|
get_earliest_value
|
Whether to get the earliest value of the variable.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str
|
The representation of the variable. |
Source code in conatus/agents/ai_interfaces/base.py
get_all_variables_xml
¶
get_all_variables_xml(
*,
include_images: Literal[False] = False,
exclude_variables: list[RuntimeVariable] | None = None
) -> str
get_all_variables_xml(
*,
include_images: Literal[True],
exclude_variables: list[RuntimeVariable] | None = None
) -> list[UserAIMessageContentPart]
get_all_variables_xml(
*,
include_images: bool = False,
exclude_variables: list[RuntimeVariable] | None = None
) -> str | list[UserAIMessageContentPart]
Get the representation of all variables in XML format.
You should retrieve something like the following:
<variables>
* 'user_name' of type '<class 'str'>'
repr: 'John Doe'
[... More variables ...]
</variables>
| PARAMETER | DESCRIPTION |
|---|---|
include_images
|
Whether to include images in the representation. This means that the representation will be a list of `UserAIMessageContentPart` objects.
TYPE:
|
exclude_variables
|
The variables to exclude from the representation.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str | list[UserAIMessageContentPart]
|
The representation of all variables in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_modified_variables_xml
¶
get_modified_variables_xml(
*,
modified_variables: list[RuntimeVariable] | list[str],
include_images: Literal[False] = False
) -> str
get_modified_variables_xml(
*,
modified_variables: list[RuntimeVariable] | list[str],
include_images: Literal[True]
) -> list[UserAIMessageContentPart]
get_modified_variables_xml(
*,
modified_variables: list[RuntimeVariable] | list[str],
include_images: bool = False
) -> str | list[UserAIMessageContentPart]
Get the modified variables in XML format.
You should retrieve something like the following:
<modified_variables>
* 'user_name' of type '<class 'str'>'
repr: 'John Doe'
[... More modified variables ...]
</modified_variables>
| PARAMETER | DESCRIPTION |
|---|---|
modified_variables
|
The variables to include in the representation.
TYPE:
|
include_images
|
Whether to include images in the representation. This means that the representation will be a list of `UserAIMessageContentPart` objects.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str | list[UserAIMessageContentPart]
|
The modified variables in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_steps_so_far_xml
¶
Get the steps so far in XML format.
You should retrieve something like the following:
<steps_so_far>
These are the steps that have been executed so far.
[... More preamble ...]
Here are the steps so far:
[... Steps so far ...]
</steps_so_far>
| PARAMETER | DESCRIPTION |
|---|---|
include_failed
|
Whether to include failed steps.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str
|
The steps so far in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
get_last_step_xml
¶
get_last_step_xml() -> str
Get the last step in XML format.
You should retrieve something like the following:
<last_step>
This is the latest step of the task.
[... More preamble ...]
Code
-----
[... Code ...]
Messages
--------
stdout: [... stdout ...]
stderr: [... stderr ...]
</last_step>
| RETURNS | DESCRIPTION |
|---|---|
str
|
The last step in XML format. |
Source code in conatus/agents/ai_interfaces/base.py
custom_xml
staticmethod
¶
Get the text in XML format.
If you pass `text="Hello, world!"` and `tag="greeting"`, you should retrieve something like the following:
| PARAMETER | DESCRIPTION |
|---|---|
text
|
The text to get the XML representation of.
TYPE:
|
tag
|
The tag to use for the XML representation.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
str
|
The text in XML format. |