AI Models ¶
Conatus provides access to multiple AI models through its built-in providers:
OpenAIModel,
AnthropicAIModel, and
GoogleAIModel. You can also create
custom AI providers by following the
How-To guide on adding a new AI provider.
Playing with AI models (OpenAI as an example) ¶
The OpenAIModel is available out of the
box and ready to use.
Setting up the API key ¶
You can configure your OpenAI API key in several ways:
- Set the OPENAI_API_KEY environment variable
- Pass it directly to the model constructor
- Use a .env file
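For example, each option might look like this. Note that the api_key constructor parameter is an assumption; check the model's API reference for the exact name:
import os
from conatus.models import OpenAIModel
# Option 1: set the environment variable before creating the model
os.environ["OPENAI_API_KEY"] = "sk-..."
model = OpenAIModel()
# Option 2: pass the key directly (assumed parameter name: api_key)
model = OpenAIModel(api_key="sk-...")
# Option 3: put OPENAI_API_KEY=sk-... in a .env file next to your script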
Organization Verification might be required
Certain OpenAI models (including o3) are restricted to verified
organizations. You may encounter errors until your organization is fully
verified.
Making a simple call to OpenAI ¶
Here's a basic example of using the OpenAI model. It uses the
simple_call method, a convenience method that takes a prompt and
returns the response as a string.
from conatus.models import OpenAIModel
model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
Using actions ¶
Conatus allows you to extend the model's capabilities with custom functions,
also known as Actions. Here's an example:
from conatus import action, AIPrompt
from conatus.models import OpenAIModel
@action
def multiply_two_numbers(a: int, b: int) -> int:
    return a * b
model = OpenAIModel()
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    actions=[multiply_two_numbers],
)
response = model.call(prompt)
response_text = response.all_text
response_tool_calls = response.tool_calls
# Normally, this should be what you get from the model:
# response_text: ''
# response_tool_calls: [
#     AIToolCall(
#         name='multiply_two_numbers',
#         returned_arguments='{"a":2219,"b":8393}',
#         ...
#     )
# ]
Now, as you can see, the only thing you get from the model is a list of tool
calls. The simplest way to execute those tool calls is to use the
Runtime class.
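For instance, continuing the example above, a minimal sketch might look like this (the full flow is shown in the Advanced usage section below):
from conatus.runtime import Runtime
runtime = Runtime(actions=[multiply_two_numbers])
success, tool_responses = runtime.run(tool_calls=response.tool_calls)
print(tool_responses[0].content)
# > {'result': 18624067, 'success': True}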
Switching between models ¶
You can easily switch between different models by modifying the model_name or
model_type parameters:
from conatus.models import OpenAIModel
model = OpenAIModel(model_name="gpt-4o-mini")
response = model.simple_call("23 choose 7?")
# > 48,620 (wrong!)
# Using o3-mini
model = OpenAIModel(model_name="o3-mini")
response = model.simple_call("23 choose 7?")
# > 245,157 (correct!)
# Using the reasoning model type
model = OpenAIModel(model_type="reasoning")
assert model.model_config.model_name == "o3"
response = model.simple_call("23 choose 7?")
# > 245,157 (correct!)
Using structured outputs ¶
You can use structured output to ensure that the model's response is of a certain format. Here's an example:
from pydantic import BaseModel
from conatus import AIPrompt
from conatus.models import OpenAIModel
class Result(BaseModel):
    result: int
model = OpenAIModel()
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    output_schema=Result,
)
response = model.call(prompt)
print(response.structured_output)
# > Result(result=18624067)
Note that this works with pretty much any type. No need to use Pydantic!
Other examples of structured outputs
These examples should work as well:
from conatus import AIPrompt
from dataclasses import dataclass
from typing_extensions import TypedDict
@dataclass
class ResultDataclass:
    result: int

class NestedResultTypedDict(TypedDict):
    result: int
    other_result: ResultDataclass
# Return a simple integer
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    output_schema=int,
)

# Return a dataclass
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    output_schema=ResultDataclass,
)

# Return a TypedDict
prompt = AIPrompt(
    user="What is 2219 times 8393? And what is 2219 times 8394?",
    output_schema=NestedResultTypedDict,
)
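Calling the model then works exactly as before. A sketch with the TypedDict prompt above (the exact printed form of the structured output depends on the schema type):
from conatus.models import OpenAIModel
model = OpenAIModel()
response = model.call(prompt)
print(response.structured_output)
# > e.g. {'result': 18624067, 'other_result': ResultDataclass(result=18626286)}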
Using Anthropic or Google ¶
Anthropic Integration ¶
To use AnthropicAIModel, install
the required package:
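The exact package name is not shown here; assuming the provider wraps the official Anthropic SDK, the install would be:
pip install anthropic  # assumed package name; check the installation guide if this fails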
Set your API key using the ANTHROPIC_API_KEY environment variable.
Google AI Integration ¶
To use GoogleAIModel, install
the required package:
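As with Anthropic, the package name below is an assumption (Google's SDK has shipped as both google-genai and google-generativeai):
pip install google-genai  # assumed package name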
Set your API key using the GOOGLE_API_KEY environment variable.
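Once installed, usage mirrors the OpenAI examples above. A minimal sketch, assuming both classes are importable from conatus.models and use sensible default model configurations:
from conatus.models import AnthropicAIModel, GoogleAIModel
q = "Which US state has never recorded temperatures below 0°F?"
anthropic_model = AnthropicAIModel()
response = anthropic_model.simple_call(q)
google_model = GoogleAIModel()
response = google_model.simple_call(q)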
Configuration ¶
You can configure the model using the model_config argument either during
initialization or at runtime:
from conatus.models import OpenAIModel
from conatus.models.open_ai import OpenAIModelConfig
# Configuration during initialization
model = OpenAIModel(model_config={"temperature": 0.5})
assert model.model_config.temperature == 0.5
# Alternative configuration method
model = OpenAIModel(model_config=OpenAIModelConfig(temperature=0.7))
assert model.model_config.temperature == 0.7
# Runtime configuration
# Note: We recommend using a dictionary to avoid unintentionally resetting
# default values
response = model.simple_call(
    "What is the world's oldest newspaper still in circulation?",
    model_config={"temperature": 0.9},
)
# > The world's oldest newspaper still in circulation is the public
# > record from the government of Sweden.
Advanced usage ¶
Using the Runtime class to execute tool calls ¶
The Runtime class executes the tool calls returned by a
model. You can use it for a single exchange or drive it in a loop across a
longer conversation.
Here's an example of how to use the Runtime
class to execute tool calls. Note that this only simulates a two-turn
conversation between the model and the user, but you can easily make it a loop
(see the sketch after the annotated example below).
from conatus import action, AIPrompt
from conatus.runtime import Runtime
from conatus.models import OpenAIModel
@action
def multiply_two_numbers(a: int, b: int) -> int:
    return a * b
def use_prompt_and_runtime() -> None:
    model = OpenAIModel()
    runtime = Runtime(actions=[multiply_two_numbers], hide_from_ai=True)
    original_prompt = AIPrompt(  # (1)!
        user="What is 2219 times 8393?",
        actions=[multiply_two_numbers],
    )
    response = model.call(original_prompt)
    success, tool_responses = runtime.run(tool_calls=response.tool_calls)
    print(tool_responses[0].content)
    # > {'result': 18624067, 'success': True}

    new_prompt = AIPrompt(  # (2)!
        previous_messages=[*original_prompt.messages, response.message_received],
        new_messages=tool_responses,
    )
    final_response = model.call(new_prompt)
    print(final_response.all_text)
    # > 2219 times 8393 is 18,624,067.

# Uncomment the following line to see the example in action:
# use_prompt_and_runtime()
1. First prompt: ask the model to multiply two numbers, potentially using the multiply_two_numbers action. Normally, the model will return a list of tool calls.
2. Now we can pass the original prompt, the response from the model, and the tool responses back to the model. This creates a conversation between the original prompt and the tool responses.
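To turn the two-turn example into a real loop, keep calling the model until it stops requesting tools. A minimal sketch, assuming the same API as above:
def run_until_done(model, runtime, prompt, max_turns=10):
    # Exchange tool calls and tool responses until the model answers
    # in plain text, or until we hit the turn limit.
    for _ in range(max_turns):
        response = model.call(prompt)
        if not response.tool_calls:
            return response.all_text
        success, tool_responses = runtime.run(tool_calls=response.tool_calls)
        prompt = AIPrompt(
            previous_messages=[*prompt.messages, response.message_received],
            new_messages=tool_responses,
        )
    raise RuntimeError("Model did not answer within the turn limit.")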
Doing a multi-turn conversation with the model ¶
As you probably saw in previous examples, the AIPrompt
class can be instantiated in
multiple ways:
- You can simply pass a user prompt, and the model will respond with a message.
- You can also pass a list of previous_messages, as well as a list of new_messages, which is useful to simulate a multi-turn conversation.
Here's an example of a multi-turn conversation with the model, where we ask it to be progressively more unhinged to keep the loop going.
from conatus import AIPrompt, UserAIMessage
from conatus.models import OpenAIModel
max_turns = 5
model = OpenAIModel(model_name="gpt-4.1", model_config={"temperature": 1})
prompt = AIPrompt(user="Tell me a 20-word story about chickens.")
for i in range(max_turns):
    model.model_config.temperature += 0.1  # (1)!
    response = model.call(prompt)
    print(response.all_text)
    prompt = AIPrompt(
        previous_messages=[*prompt.messages, response.message_received],
        new_messages=[UserAIMessage(content="Make it more unhinged")],
    )
1. Let's have fun, shall we?