Using AI models directly ¶
While Conatus is designed to be used at the Task
level, it is also possible to use the AI models directly. This tutorial goes
through the main ways to do so.
Installation ¶
Conatus supports multiple AI providers. In this tutorial, we'll use OpenAI, but we also support Anthropic and Google.
Note that only OpenAI is installed by default; the other providers must be installed separately.
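The exact package extras below are an assumption (check the project's installation instructions), but a typical command would look like:
pip install "conatus[anthropic,google]"  # hypothetical extras names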
Configure your API key ¶
Set the OPENAI_API_KEY environment variable (or put it in a .env file). Alternatively, you can pass the API key directly to the model.
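For example, in your shell:
export OPENAI_API_KEY="sk-..."
Or pass the key when constructing the model; the api_key parameter name below is an assumption, so check the OpenAIModel API reference:
from conatus.models import OpenAIModel

model = OpenAIModel(api_key="sk-...")  # assumption: the parameter name may differ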
Making prompts and calling the model ¶
A first, simple prompt ¶
The general workflow is: create an AIPrompt, call the model, and inspect the AIResponse. To keep this first example simple, though, we'll use the simple_call method, which takes a plain string and returns the raw text of the response.
from conatus.models import OpenAIModel

model = OpenAIModel()
# simple_call takes a plain string and returns the response text directly
response = model.simple_call("Tell me a joke about chickens.")
print(response)
# > "Why did the chicken join a band? Because it had the drumsticks."
This is a simple example. You can also:
- pass a list of messages to the messages argument
- pass a system message to the system argument
For more information, see the AIPrompt API reference, or the Messages concept page.
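For instance, here is a minimal sketch of a prompt with a system message (the system argument also appears in the DataFrame example later in this tutorial):
from conatus import AIPrompt
from conatus.models import OpenAIModel

model = OpenAIModel()
prompt = AIPrompt(
    system="You are a stand-up comedian.",  # sets the system message
    user="Tell me a joke about chickens.",
)
print(model.call(prompt).all_text)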
Streaming responses ¶
You can either stream the response or receive it all at once. To get the response in one piece, use the call method; to stream it, use the call_stream method. In both cases, we need to create an AIPrompt object.
from conatus.models import OpenAIModel
from conatus import AIPrompt

model = OpenAIModel()
prompt = AIPrompt(user="Tell me a joke about chickens.")

# Non-streaming: the full response is returned at once
response = model.call(prompt)
print(response.all_text)

# Streaming: the response is displayed progressively as it is generated
response = model.call_stream(prompt)
print(response.all_text)
As you can see, in the second case, the response is streamed to the console as it is generated (the live output disappears once the call completes). For the rest of the tutorial, we'll use the call_stream method.
Using tools (functions/actions) in prompts ¶
You can pass a list of functions/actions to the actions argument of the
AIPrompt class. The AI model will then decide whether to call them.
From here on, we stop using the simple_call method and use the full call_stream method instead, since we need the complete AIResponse (including its tool calls) rather than just the raw text.
from conatus import AIPrompt
from conatus.models import OpenAIModel

# A plain Python function can be passed as an action
def add_numbers(a: int, b: int) -> int:
    return a + b

model = OpenAIModel()
prompt = AIPrompt(user="What is 99 plus 101?", actions=[add_numbers])
response = model.call_stream(prompt)

# The model decides to call the tool instead of answering directly
print(response.tool_calls[0].name, response.tool_calls[0].arguments_as_str)
# > add_numbers {"a":99,"b":101}
Responding to tool calls ¶
To actually execute the tool calls the model requests, we use the Runtime class.
Simple example ¶
Let's start with a simple example:
from conatus.runtime import Runtime
from conatus import action, AIPrompt
from conatus.models import OpenAIModel

@action
def multiply(a: int, b: int) -> int:
    return a * b

model = OpenAIModel()
runtime = Runtime(actions=[multiply], hide_from_ai=True)
prompt = AIPrompt(
    user="What is 23 times 19?",
    actions=[multiply],
)
response = model.call_stream(prompt)

# Execute the tool calls the model requested
success, tool_responses = runtime.run(tool_calls=response.tool_calls)
print("Tool responses:", tool_responses)
Pandas DataFrame example ¶
Let's say we want to use a tool that generates a pandas DataFrame. Traditional AI agents cannot do this, because they require every function to have JSON-serializable arguments and return values.
Conatus does not have this limitation. Let's see how to use a tool that returns a pandas DataFrame.
import pandas as pd
import random

from conatus import AIPrompt, action
from conatus.models import OpenAIModel
from conatus.runtime import Runtime

@action
def add_new_user(df: pd.DataFrame, name: str, age: int) -> pd.DataFrame:
    new_data = {"name": [name], "age": [age], "uid": [random.randint(1000, 9999)]}
    return pd.concat([df, pd.DataFrame(new_data)], ignore_index=True)

df = pd.DataFrame({"name": ["John", "Jane"], "age": [25, 30], "uid": [1234, 5678]})

# The runtime holds the DataFrame as a variable the model can refer to by name
runtime = Runtime(actions=[add_new_user], starting_variables={"df": df})

model = OpenAIModel()
prompt = AIPrompt(
    system=f"Available variables in the runtime are: {runtime.variables}",
    user="Add: name=Jimmy, age=35, and tell me his new uid",
    tools=runtime.get_tool_specifications(),
)
response = model.call_stream(prompt)
print(response.tool_calls[0].name, response.tool_calls[0].arguments_as_str)
# > add_new_user {"df":"<<var:df>>","name":"Jimmy","age":35,"return":"df"}

# Execute the tool call inside the runtime
success, tool_responses = runtime.run(tool_calls=response.tool_calls)

# Send the tool responses back so the model can report the result
new_prompt = AIPrompt(
    previous_messages=[*prompt.messages, response.message_received],
    new_messages=tool_responses,
)
response = model.call_stream(new_prompt)
print(response.all_text)
# > Jimmy has been added successfully! His new UID is 3700.
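Notice the "<<var:df>>" placeholder in the tool-call arguments: rather than serializing the DataFrame to JSON, the model refers to the runtime variable by name, and the runtime substitutes the real object when it executes the call. This is what lets Conatus work with non-JSON-serializable values like DataFrames.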
Making a conversation multi-turn ¶
So far, we've only made a single-turn conversation. To make a multi-turn
conversation, we need to pass the conversation history to the model. We can do
this by passing the previous_messages and new_messages arguments to the
AIPrompt constructor.
from conatus.models import OpenAIModel
from conatus import AIPrompt, UserAIMessage

model = OpenAIModel()
prompt = AIPrompt(user="Give me a 10-word story about space chickens.")
for i in range(3):
    response = model.call_stream(prompt)
    print(f"[{i+1}] AI:", response.all_text)
    # Carry the history forward and append a new user message
    prompt = AIPrompt(
        previous_messages=[*prompt.messages, response.message_received],
        new_messages=[UserAIMessage(content="Make it even weirder!")],
    )
Note that you can mix and match the previous_messages and new_messages arguments with other arguments like actions and output_schema. For more information, see the AIPrompt API reference.
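As a sketch, a follow-up turn could carry the conversation history while also exposing an action (reusing add_numbers from the tools section above):
prompt = AIPrompt(
    previous_messages=[*prompt.messages, response.message_received],
    new_messages=[UserAIMessage(content="Now, what is 40 plus 2?")],
    actions=[add_numbers],  # history and tools combined in one prompt
)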
Structured outputs ¶
Request a response matching a specific schema using
output_schema:
from pydantic import BaseModel

from conatus import AIPrompt
from conatus.models import OpenAIModel

# The output schema is a regular Pydantic model
class WeatherReport(BaseModel):
    city: str
    forecast: str
    temperature_celsius: float

prompt = AIPrompt(
    user="Weather report for São Paulo.",
    output_schema=WeatherReport,
)
response = OpenAIModel().call_stream(prompt)
print(response.structured_output)
# e.g. WeatherReport(city='São Paulo', forecast='Sunny', temperature_celsius=27.0)
For more information, see the API reference for
AIResponse.structured_output.
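Since structured_output is an instance of your Pydantic model, you can access its fields directly:
print(response.structured_output.temperature_celsius)
# e.g. 27.0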
Putting it all together ¶
Combine all pieces for an iterative, tool-using conversation:
from conatus import action, AIPrompt
from conatus.runtime import Runtime
from conatus.models import OpenAIModel

@action
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

model = OpenAIModel()
runtime = Runtime(actions=[fib])
prompt = AIPrompt(
    user="What is the 7th Fibonacci number?",
    actions=[fib],
)

for _ in range(2):  # Expand to as many turns as you like
    response = model.call_stream(prompt)
    print("AI:", response.all_text or response.tool_calls)
    # Run tools if the AI requests them
    if response.tool_calls:
        _, tool_responses = runtime.run(tool_calls=response.tool_calls)
        prompt = AIPrompt(
            previous_messages=[*prompt.messages, response.message_received],
            new_messages=tool_responses,
        )
    else:
        break  # No more tool calls; conversation ends
For more information, see the API reference for
AIResponse.tool_calls.
Inspecting usage and cost ¶
Every response tracks usage statistics
(CompletionUsage) and
pricing:
from conatus.models import OpenAIModel
from conatus import AIPrompt

model = OpenAIModel()
prompt = AIPrompt(user="How many tokens in this sentence?")
response = model.call_stream(prompt)

# Usage statistics and estimated pricing are attached to every response
print("Model name:", response.usage.model_name)
print("Input tokens used:", response.usage.prompt_tokens)
print("Output tokens used:", response.usage.completion_tokens)
print("Total tokens used:", response.usage.total_tokens)
print("Estimated cost (USD):", response.cost)