OpenAI
Introduction¶
The OpenAIModel class is a wrapper
around the OpenAI API. It implements the BaseAIModel
interface.
Chat Completions API vs Responses API¶
The OpenAI API has two different ways to interact with the model¹:
- The Chat Completions API, introduced in 2023
- The Responses API, introduced in 2025
By default, OpenAIModel uses the
Responses API, but we support the Chat Completions API as well. You can
choose to use the Chat Completions API by setting the
uses_chat_completion_api parameter to True when initializing the model.
Note that some features are not supported by the Chat Completions API, such as computer use.
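For example, you can switch APIs at construction time (a minimal sketch, mirroring the examples below):
from conatus.models import OpenAIModel
# Default: the Responses API.
model = OpenAIModel()
# Opt into the Chat Completions API instead. Note that features such as
# computer use are unavailable in this mode.
cc_model = OpenAIModel(uses_chat_completion_api=True)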
Examples¶
Don't forget the API key!
For the following examples to work, you need to set the OPENAI_API_KEY
environment variable. We also support .env files, so you can use that
instead of the environment variable. You can also pass the API key as an
argument to the constructor.
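For instance (a sketch; the key shown is a placeholder, and real keys should not be hard-coded):
from conatus.models import OpenAIModel
# Option 1: rely on the OPENAI_API_KEY environment variable (or a .env file).
model = OpenAIModel()
# Option 2: pass the key explicitly.
model = OpenAIModel(api_key="sk-...")  # placeholder value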
Simple call¶
from conatus.models import OpenAIModel
model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
Call with actions¶
from conatus import action, AIPrompt
from conatus.models import OpenAIModel
@action
def multiply_two_numbers(a: int, b: int) -> int:
    return a * b

model = OpenAIModel()
prompt = AIPrompt(
    user="What is 2219 times 8393?",
    actions=[multiply_two_numbers],
)
response = model.call(prompt)
# > AIResponse(...)
Switch between models¶
from conatus.models import OpenAIModel
model = OpenAIModel(model_name="gpt-4o-mini")
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067
model = OpenAIModel(model_name="o3-mini")
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067
model = OpenAIModel(model_type="reasoning")
assert model.model_config.model_name == "o3"
response = model.simple_call("What is 2219 times 8393?")
# > 18 624 067
Configuration¶
You can configure the model using the model_config argument either at
initialization or later on:
from conatus.models import OpenAIModel
from conatus.models.open_ai import OpenAIModelConfig
# This works
model = OpenAIModel(model_config={"temperature": 0.5})
assert model.model_config.temperature == 0.5
# This is essentially equivalent
model = OpenAIModel(model_config=OpenAIModelConfig(temperature=0.7))
assert model.model_config.temperature == 0.7
# You can also define the configuration as you call the model
# We recommend passing a dictionary here, so that users don't unintentionally
# re-establish default values.
response = model.simple_call(
"What is the world's oldest newspaper still in circulation?",
model_config={"temperature": 0.9},
)
# > The world's oldest newspaper still in circulation is the public
# > record from the government of Sweden.
Message conversion methods are omitted on this page
The OpenAIModel class contains
a number of private methods related to the conversion of messages from
their OpenAI specification to AIMessage
objects, and vice-versa.
These methods are not documented on this website, but you can look at
the source code if you're curious. This might be of particular interest
if you want to implement your own BaseAIModel
subclass.
For more information on how to implement your own model, see the "How-to: Add a new AI provider" guide.
Model¶
conatus.models.open_ai.open_ai.OpenAIModel
dataclass
¶
OpenAIModel(
model_config: (
OpenAIModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
client: AsyncOpenAI | None = None,
*,
get_default_client_if_not_given: bool = True,
api_key: str | None = None,
model_name: str | None = None,
model_type: ModelType | None = None,
uses_chat_completion_api: bool = False
)
Bases: BaseAIModel
OpenAI model.
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the OpenAI model. This can be an OpenAIModelConfig, a ModelConfig, or a plain ModelConfigTD dictionary. TYPE: OpenAIModelConfig \| ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| client | The client to use. If not provided, a new client will be created. TYPE: AsyncOpenAI \| None DEFAULT: None |
| get_default_client_if_not_given | Whether to get the default client if not provided. TYPE: bool DEFAULT: True |
| api_key | The API key to use. If not provided, it will be read from the OPENAI_API_KEY environment variable. TYPE: str \| None DEFAULT: None |
| model_name | The name of the model to use. If not provided, it will be set to the default model name. TYPE: str \| None DEFAULT: None |
| model_type | The type of model to use. This is used to determine the model name only if model_name is not provided. TYPE: ModelType \| None DEFAULT: None |
| uses_chat_completion_api | Whether the model uses the Chat Completions API. If not provided, it will be set to False. TYPE: bool DEFAULT: False |
Source code in conatus/models/open_ai/open_ai.py
model_config
instance-attribute
¶
model_config: OpenAIModelConfig
The configuration for the OpenAI model.
api_key_env_variable
class-attribute
instance-attribute
¶
api_key_env_variable: str = 'OPENAI_API_KEY'
The environment variable that contains the API key.
uses_chat_completion_api
instance-attribute
¶
uses_chat_completion_api: bool = uses_chat_completion_api
Whether the model uses the chat completions API.
False by default, meaning that the model uses the responses API.
model_config_cls
instance-attribute
¶
model_config_cls: type[ModelConfig]
The class of the model configuration.
config
property
¶
config: ModelConfig
The configuration for the model.
This is a convenience property for the model_config attribute.
default_model_name
classmethod
¶
Get the default model name for the OpenAI model.
| PARAMETER | DESCRIPTION |
|---|---|
| model_type | The type of model to use. TYPE: ModelType |

| RETURNS | DESCRIPTION |
|---|---|
| ModelName \| None | The default model name for the OpenAI model. |
Source code in conatus/models/open_ai/open_ai.py
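A quick sketch of querying the default (the concrete name depends on the library's current defaults, as the "Switch between models" example above suggests):
from conatus.models import OpenAIModel
name = OpenAIModel.default_model_name("reasoning")
# > e.g. "o3" at the time of writing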
default_client
¶
default_client(
model_config: ModelConfig,
api_key: str | None,
**kwargs: ParamType
) -> AsyncOpenAI
Return the default client for the OpenAI model.
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the OpenAI model. TYPE: ModelConfig |
| api_key | The API key for the OpenAI model. Takes precedence over the API key in the model config. TYPE: str \| None |
| **kwargs | Additional arguments to pass to the OpenAI client. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AsyncOpenAI | The OpenAI client. |
Source code in conatus/models/open_ai/open_ai.py
default_config
¶
default_config() -> OpenAIModelConfig
Return the default configuration for the OpenAI model.
__del__
¶
call
¶
call(
prompt: AIPrompt[OutputSchemaType],
model_config: (
OpenAIModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the OpenAI model using the standardized prompt and response.
from conatus import AIPrompt
from conatus.models.open_ai import OpenAIModel
model = OpenAIModel()
prompt = AIPrompt("Hello, how are you?")
response = model.call(prompt)
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the OpenAI model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the OpenAI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: OpenAIModelConfig \| ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| response_log_callback | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| **kwargs | Additional arguments to pass to the OpenAI model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the OpenAI model. |
Source code in conatus/models/open_ai/open_ai.py
acall
async
¶
acall(
prompt: AIPrompt[OutputSchemaType],
model_config: (
OpenAIModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback: (
Callable[[str], None] | None
) = None,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the OpenAI model using the standardized prompt and response.
For its sync counterpart, see call.
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the OpenAI model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the OpenAI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: OpenAIModelConfig \| ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| response_log_callback | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| **kwargs | Additional arguments to pass to the OpenAI model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the OpenAI model. |
Source code in conatus/models/open_ai/open_ai.py
call_stream
¶
call_stream(
prompt: AIPrompt[OutputSchemaType],
model_config: (
OpenAIModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback_stream: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the OpenAI model using the standardized prompt and response.
For its async counterpart, see acall_stream
.
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the OpenAI model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the OpenAI model. TYPE: OpenAIModelConfig \| ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| response_log_callback_stream | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once per chunk of the response. TYPE: Callable[[str], None] \| None DEFAULT: None |
| **kwargs | Additional arguments to pass to the OpenAI model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the OpenAI model. |
Source code in conatus/models/open_ai/open_ai.py
acall_stream
async
¶
acall_stream(
prompt: AIPrompt[OutputSchemaType],
model_config: (
OpenAIModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback_stream: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the OpenAI model using the standardized prompt and response.
For its sync counterpart, see call_stream
.
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the OpenAI model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the OpenAI model. TYPE: OpenAIModelConfig \| ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin] DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] \| None DEFAULT: None |
| response_log_callback_stream | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once per chunk of the response. TYPE: Callable[[str], None] \| None DEFAULT: None |
| **kwargs | Additional arguments to pass to the OpenAI model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the OpenAI model. |
Source code in conatus/models/open_ai/open_ai.py
with_config
¶
with_config(
model_config: ModelConfig | ModelConfigTD | None,
*,
ignore_current_config: bool = False,
inplace: bool = False
) -> Self
Return a new instance of the model with the given configuration.
This is useful for quickly creating a new model without having to instantiate a new client.
from conatus.models import OpenAIModel
from conatus.models.config import ModelConfig
model = OpenAIModel()
model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))
# Note that this also works if you pass a dictionary.
model_with_config = model.with_config({"model_name": "gpt-4o"})
assert model_with_config.config.model_name == "gpt-4o"
assert model_with_config.client == model.client
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the new model. TYPE: ModelConfig \| ModelConfigTD \| None |
| ignore_current_config | Whether to ignore the current configuration when applying the new one. TYPE: bool DEFAULT: False |
| inplace | Whether to modify the current instance in place. TYPE: bool DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| Self | A new instance of the model with the given configuration. |
Source code in conatus/models/base.py
get_api_key
¶
get_api_key() -> str
Get the API key for the model.
This method retrieves the key from the environment variable named in the api_key_env_variable class attribute.
| RETURNS | DESCRIPTION |
|---|---|
| str | The API key. |

| RAISES | DESCRIPTION |
|---|---|
| AIModelAPIKeyMissingError | If the API key is not found in the environment variables. |
| ValueError | If the API key is not set in the class attribute. |
Source code in conatus/models/base.py
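A short sketch:
from conatus.models import OpenAIModel
model = OpenAIModel()
# Raises AIModelAPIKeyMissingError if OPENAI_API_KEY is not set.
key = model.get_api_key()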
respawn_client
¶
Respawn the client.
This method is mostly used to refresh the client, which might be associated with an incompatible event loop.
Source code in conatus/models/base.py
simple_call
¶
simple_call(
prompt: str,
model_config: ModelConfig | ModelConfigTD | None = None,
*,
stream: bool = False
) -> str
Simple call to the AI model.
This is a convenience method for the call method.
from conatus.models import OpenAIModel
model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the AI model. TYPE: str |
| model_config | The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: ModelConfig \| ModelConfigTD \| None DEFAULT: None |
| stream | Whether to stream the response. TYPE: bool DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| str | The response from the AI model. |
Source code in conatus/models/base.py
Configuration class¶
There are three relevant classes for the configuration of the OpenAI model:
- OpenAIModelConfig, which handles all the configuration values for the OpenAI model.
- OpenAIModelCCSpec, which handles the configuration values that are specific to the Chat Completions API. Note that this is a TypedDict, because we need to pass these values as keyword arguments to the OpenAI API.
- OpenAIModelResponseSpec, which handles the configuration values that are specific to the Responses API. Note that this is a TypedDict, for the same reason.
conatus.models.open_ai.open_ai.OpenAIModelConfig
dataclass
¶
OpenAIModelConfig(
not_given_sentinel: object = NOT_GIVEN,
api_key: OptionalArg[str] = CTUS_NOT_GIVEN,
model_name: str = DEFAULT_OPENAI_MODEL_NAME,
max_tokens: int = DEFAULT_OPENAI_MAX_TOKENS,
stdout_mode: Literal[
"normal", "preview", "silent"
] = "preview",
temperature: float | NotGiven = NOT_GIVEN,
computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN,
use_mock: bool = DEFAULT_OPENAI_USE_MOCK,
only_pass_new_messages: OptionalArg[
bool
] = CTUS_NOT_GIVEN,
previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN,
truncation: Literal["auto", "disabled"] = "auto",
timeout: float | NotGiven | None = NOT_GIVEN,
mock_stream_cc_fn: (
IteratorAsyncFn[ChatCompletionChunk] | None
) = None,
mock_stream_response_fn: (
IteratorAsyncFn[ResponseStreamEvent] | None
) = None,
mock_cc_fn: AsyncFn[ChatCompletion] | None = None,
mock_response_fn: AsyncFn[Response] | None = None,
reasoning_effort: (
Literal["low", "medium", "high"] | None
) = None,
reasoning_summary: (
Literal["auto", "concise", "detailed"] | None
) = None,
)
Bases: ModelConfig
The configuration for an OpenAI model, with defaults.
not_given_sentinel
class-attribute
instance-attribute
¶
not_given_sentinel: object = NOT_GIVEN
The sentinel value for not given.
model_name
class-attribute
instance-attribute
¶
model_name: str = DEFAULT_OPENAI_MODEL_NAME
The name of the model to use.
max_tokens
class-attribute
instance-attribute
¶
max_tokens: int = DEFAULT_OPENAI_MAX_TOKENS
The maximum number of tokens to use.
temperature
class-attribute
instance-attribute
¶
temperature: float | NotGiven = NOT_GIVEN
The temperature to use.
use_mock
class-attribute
instance-attribute
¶
use_mock: bool = DEFAULT_OPENAI_USE_MOCK
Whether to use a mock response.
If so, we will use MockChatCompletion
objects to mock the
response.
timeout
class-attribute
instance-attribute
¶
timeout: float | NotGiven | None = NOT_GIVEN
The timeout to use.
stdout_mode
class-attribute
instance-attribute
¶
stdout_mode: Literal["normal", "preview", "silent"] = (
"preview"
)
The mode to use for the standard output.
- 'normal': Notify the user that we're waiting for a response, and then that we're receiving the response, displaying the number of chunks received so far.
- 'preview': Preview the response with a fancy output that updates as the response chunks are received. Only works if the response is a stream; if 'preview' is set and the response is not a stream, it falls back to 'normal'.
- 'silent': Do not print anything to the standard output.

Note that if we detect that we are running in a non-TTY environment, we will use a special mode called 'non_tty', unless the user asked for 'silent'.
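Selecting a mode is a one-liner (a sketch):
from conatus.models import OpenAIModel
# Suppress all standard-output printing.
model = OpenAIModel(model_config={"stdout_mode": "silent"})
assert model.model_config.stdout_mode == "silent"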
truncation
class-attribute
instance-attribute
¶
truncation: Literal['auto', 'disabled'] = 'auto'
The truncation to use.
mock_stream_cc_fn
class-attribute
instance-attribute
¶
mock_stream_cc_fn: (
IteratorAsyncFn[ChatCompletionChunk] | None
) = None
The function to use to mock the stream.
Note that if it is passed, we will NOT call the OpenAI API.
mock_stream_response_fn
class-attribute
instance-attribute
¶
mock_stream_response_fn: (
IteratorAsyncFn[ResponseStreamEvent] | None
) = None
The function to use to mock the response.
Note that if it is passed, we will NOT call the OpenAI API.
mock_cc_fn
class-attribute
instance-attribute
¶
mock_cc_fn: AsyncFn[ChatCompletion] | None = None
The function to use to mock the response.
Note that if it is passed, we will NOT call the OpenAI API.
mock_response_fn
class-attribute
instance-attribute
¶
mock_response_fn: AsyncFn[Response] | None = None
The function to use to mock the response.
Note that if it is passed, we will NOT call the OpenAI API.
reasoning_effort
class-attribute
instance-attribute
¶
reasoning_effort: (
Literal["low", "medium", "high"] | None
) = None
The reasoning effort to use.
Only used if the model name is a reasoning model.
reasoning_summary
class-attribute
instance-attribute
¶
reasoning_summary: (
Literal["auto", "concise", "detailed"] | None
) = None
The reasoning summary to use.
Only used if the model name is a reasoning model.
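For example (a sketch; reasoning-model availability changes over time):
from conatus.models import OpenAIModel
model = OpenAIModel(
    model_type="reasoning",
    model_config={
        "reasoning_effort": "high",
        "reasoning_summary": "detailed",
    },
)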
api_key
class-attribute
instance-attribute
¶
api_key: OptionalArg[str] = CTUS_NOT_GIVEN
The API key to use, if any.
If not provided, the API key will be taken from the environment variable
specified in the api_key_env_variable
attribute of the model.
computer_use_mode
class-attribute
instance-attribute
¶
computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN
Whether to use the computer use mode.
If set to True, the model will be configured to use the computer use
mode.
only_pass_new_messages
class-attribute
instance-attribute
¶
only_pass_new_messages: OptionalArg[bool] = CTUS_NOT_GIVEN
Whether to only pass new messages to the model.
If set to True, only new messages are passed to the API, rather than the
entire history. This is useful for "stateful" APIs, where the history is
not needed.
previous_messages_id
class-attribute
instance-attribute
¶
previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN
The ID of the previous messages.
This is useful for "stateful" APIs, where the history is not needed.
This should only be used if only_pass_new_messages is True.
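A sketch of how these two options fit together (the response ID is a hypothetical placeholder):
from conatus.models import OpenAIModel
model = OpenAIModel(
    model_config={
        "only_pass_new_messages": True,
        "previous_messages_id": "resp_abc123",  # hypothetical ID
    },
)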
get_kwargs
¶
get_kwargs(api_used: Literal['cc']) -> OpenAIModelCCSpec
get_kwargs(
api_used: Literal["response"],
) -> OpenAIModelResponseSpec
get_kwargs(
api_used: Literal["cc", "response"],
) -> OpenAIModelCCSpec | OpenAIModelResponseSpec
Wrapper around the to_kwargs method.
| PARAMETER | DESCRIPTION |
|---|---|
| api_used | The API to use. TYPE: Literal['cc', 'response'] |

| RETURNS | DESCRIPTION |
|---|---|
| OpenAIModelCCSpec \| OpenAIModelResponseSpec | The keyword arguments. |
Source code in conatus/models/open_ai/open_ai.py
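For instance (a sketch; the exact keys depend on your configuration defaults, as the to_kwargs examples below illustrate):
from conatus.models import OpenAIModel
model = OpenAIModel()
# Keyword arguments shaped for the Responses API ...
response_kwargs = model.config.get_kwargs("response")
# ... or for the Chat Completions API.
cc_kwargs = model.config.get_kwargs("cc")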
to_kwargs
¶
to_kwargs(
specification: None = None,
not_given_sentinel: object = CTUS_NOT_GIVEN,
argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD
to_kwargs(
specification: type[TDSpec] | None = None,
not_given_sentinel: object = CTUS_NOT_GIVEN,
argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec
Return the configuration as a dictionary.
You can provide a specification, which is a dictionary that matches the arguments expected by the provider. If a specification is provided, the method will return a dictionary that matches the specification (i.e. with only the keys that are expected by the provider).
You can also provide a not_given_sentinel, which is an object
that is used to represent a missing argument. If we encounter this
sentinel object, we will not include it in the returned dictionary.
Example¶
Using a specification¶
from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
from openai import NOT_GIVEN
model = OpenAIModel()
args_to_pass = model.config.to_kwargs(
specification=OpenAIModelCCSpec,
not_given_sentinel=NOT_GIVEN,
)
assert args_to_pass == {'max_tokens': 4096}
# And now you can do something like:
# response = self.client.chat.completions.create(
# messages=messages,
# **args_to_pass
# )
Using an argument mapping¶
from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
from openai import NOT_GIVEN
model = OpenAIModel()
args_to_pass = model.config.to_kwargs(
specification=OpenAIModelResponseSpec,
argument_mapping={"max_tokens": "max_output_tokens"},
not_given_sentinel=NOT_GIVEN,
)
assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
| PARAMETER | DESCRIPTION |
|---|---|
| specification | The specification to use. This should be a TypedDict class that matches the arguments expected by the provider. TYPE: type[TDSpec] \| None DEFAULT: None |
| not_given_sentinel | The sentinel object to use. TYPE: object DEFAULT: CTUS_NOT_GIVEN |
| argument_mapping | A dictionary that maps the keys of the configuration to the keys of the provider. The mapping is of the form {config_key: provider_key}. TYPE: dict[str, str] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| ModelConfigTD \| TDSpec | The configuration as a dictionary. |

| RAISES | DESCRIPTION |
|---|---|
| TypeError | If the specification is not a TypedDict. |
Source code in conatus/models/config.py
from_dict
classmethod
¶
Create a new instance from a dictionary.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The configuration as a dictionary. TYPE: ModelConfigTD |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The new instance. |
from_dict_instance_or_none
classmethod
¶
Create a new instance from a dictionary or an instance.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The configuration as a dictionary or an instance. TYPE: Self \| ModelConfigTD \| None |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The new instance. |
Source code in conatus/models/config.py
apply_config
¶
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: Literal[True]
) -> None
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: Literal[False] = False
) -> Self
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: bool = False
) -> Self | None
Copy the configuration and apply new values to it.
This ensures that you can create a hierarchy of configurations.
| PARAMETER | DESCRIPTION |
|---|---|
| new_config | The new configuration. TYPE: Self \| ModelConfigTD \| None |
| inplace | Whether to update the instance in place, or return a new copy. TYPE: bool DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| Self \| None | None if the modification happens in place; otherwise, a new instance with the modified configuration. |
Source code in conatus/models/config.py
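A short sketch of layering configurations (values are arbitrary):
from conatus.models.open_ai import OpenAIModelConfig
base = OpenAIModelConfig(temperature=0.2)
derived = base.apply_config({"max_tokens": 1024})
assert derived.temperature == 0.2
assert derived.max_tokens == 1024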
conatus.models.open_ai.open_ai.OpenAIModelCCSpec
¶
Bases: TypedDict
The arguments expected by the OpenAI Chat Completions API.
This is distinct from OpenAIModelConfig. These arguments are only the ones
that are passed to the OpenAI client during the call method. In other
words, any other arguments in the OpenAIModelConfig, which may be meant
for general configuration purposes (such as the API key), are not
included here.
conatus.models.open_ai.open_ai.OpenAIModelResponseSpec
¶
Bases: TypedDict
The arguments expected by the OpenAI Responses API.
This is distinct from OpenAIModelConfig. These arguments are only the ones
that are passed to the OpenAI client during the call method. In other
words, any other arguments in the OpenAIModelConfig, which may be meant
for general configuration purposes (such as the API key), are not
included here.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| previous_response_id | The ID of the previous response. The Responses API allows you to use the ID of a previous response to generate a new response; this is useful for follow-ups, continuations, and other stateful capabilities. TYPE: str |
| max_output_tokens | The maximum number of tokens to use. TYPE: int |
| truncation | The truncation setting to use. TYPE: Literal['auto', 'disabled'] |
| instructions | The instructions to use. This is used to pass the system message to the API. TYPE: str |
| reasoning | The reasoning parameters to use. This is used to pass the reasoning parameters to the API. |
Defaults¶
conatus.models.open_ai.open_ai.DEFAULT_OPENAI_MODEL_NAME
module-attribute
¶
The default model name.
This is not a stable API. At any given release, the default model
name may change based on OpenAI's latest model releases. If you need
to specify a model name, please do so in the config argument.
conatus.models.open_ai.open_ai.DEFAULT_OPENAI_MAX_TOKENS
module-attribute
¶
The default maximum number of tokens.
conatus.models.open_ai.open_ai.DEFAULT_OPENAI_TEMPERATURE
module-attribute
¶
The default temperature value.
By default, we do not set a temperature value, and let the API figure out the right temperature.
conatus.models.open_ai.open_ai.DEFAULT_OPENAI_TIMEOUT
module-attribute
¶
The default timeout value.
By default, we do not set a timeout value, and defer to the API's default.
conatus.models.open_ai.open_ai.DEFAULT_OPENAI_USE_MOCK
module-attribute
¶
The default use mock value.
Mocks¶
Developer only
The following classes are mocks (i.e. simulated responses) for the OpenAI model. They are not meant to be used directly, but are useful for testing.
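For instance, a minimal sketch of wiring a mock into the configuration (assuming functools.partial satisfies the AsyncFn[Response] signature expected by mock_response_fn):
from functools import partial
from conatus.models import OpenAIModel
from conatus.models.mocks.open_ai import mock_response
# With a mock function set, no request ever reaches the OpenAI API.
model = OpenAIModel(
    model_config={"mock_response_fn": partial(mock_response, content="Mocked!")},
)
response = model.simple_call("This never hits the network.")
# > Mocked!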
conatus.models.mocks.open_ai
¶
Mocks for OpenAIModel.
MockChatCompletion
¶
MockChatCompletion(content: str)
Bases: ChatCompletion
Mock OpenAI chat completion that can be used to test the OpenAI model.
The structure of that mock is very simple; it is supposed to be the simplest possible message that is retrieved from the OpenAI API.
At initialization, the user prompt is passed in and the mock is initialized with a single assistant message with the user prompt as the content.
| PARAMETER | DESCRIPTION |
|---|---|
| content | The content of the mock chat completion. TYPE: str |
Source code in conatus/models/mocks/open_ai.py
MockResponse
¶
MockResponse(content: str)
Bases: Response
Mock OpenAI response that can be used to test the OpenAI model.
The structure follows the expected structure of payloads sent by OpenAI's Responses API.
At initialization, the user prompt is passed in and the mock is initialized with a single assistant message with the user prompt as the content.
Source code in conatus/models/mocks/open_ai.py
create_mock_chat_completion_chunk_cc
¶
create_mock_chat_completion_chunk_cc() -> (
list[ChatCompletionChunk]
)
Create a mock chat completion chunk.
| RETURNS | DESCRIPTION |
|---|---|
| list[ChatCompletionChunk] | List of mock chat completion chunks. |
Source code in conatus/models/mocks/open_ai.py
create_mock_response_stream_events
¶
create_mock_response_stream_events() -> (
list[ResponseStreamEvent]
)
Create a list of mock response stream events.
| RETURNS | DESCRIPTION |
|---|---|
| list[ResponseStreamEvent] | List of mock response stream events. |
Source code in conatus/models/mocks/open_ai.py
create_mock_stream_cc
async
¶
create_mock_stream_cc(
chunks: list[ChatCompletionChunk], delay: float = 0.1
) -> AsyncIterator[ChatCompletionChunk]
Create a mock stream of chat completion chunks.
| PARAMETER | DESCRIPTION |
|---|---|
| chunks | List of chunks to stream (typically ChatCompletionChunk objects). TYPE: list[ChatCompletionChunk] |
| delay | Delay between chunks in seconds. TYPE: float DEFAULT: 0.1 |

| YIELDS | DESCRIPTION |
|---|---|
| AsyncIterator[ChatCompletionChunk] | AsyncIterator yielding the chunks with the specified delay. |
Source code in conatus/models/mocks/open_ai.py
create_mock_stream_response
async
¶
create_mock_stream_response(
chunks: list[ResponseStreamEvent], delay: float = 0.1
) -> AsyncIterator[ResponseStreamEvent]
Create a mock stream of response events.
| PARAMETER | DESCRIPTION |
|---|---|
| chunks | List of response events to stream. TYPE: list[ResponseStreamEvent] |
| delay | Delay between chunks in seconds. TYPE: float DEFAULT: 0.1 |

| YIELDS | DESCRIPTION |
|---|---|
| AsyncIterator[ResponseStreamEvent] | AsyncIterator yielding the chunks with the specified delay. |
Source code in conatus/models/mocks/open_ai.py
mock_stream_cc
async
¶
mock_stream_cc(
delay: float = 0,
chunks: list[ChatCompletionChunk] | None = None,
) -> AsyncIterator[ChatCompletionChunk]
Fake stream of OpenAI Chat Completion chunks.
| PARAMETER | DESCRIPTION |
|---|---|
| delay | Delay between chunks in seconds. TYPE: float DEFAULT: 0 |
| chunks | List of chunks to stream. If not provided, the default mock chat completion chunks are used. TYPE: list[ChatCompletionChunk] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| AsyncIterator[ChatCompletionChunk] | AsyncIterator yielding the chunks with the specified delay. |
Source code in conatus/models/mocks/open_ai.py
mock_stream_response
async
¶
mock_stream_response(
delay: float = 0,
chunks: list[ResponseStreamEvent] | None = None,
) -> AsyncIterator[ResponseStreamEvent]
Fake stream of OpenAI response events.
| PARAMETER | DESCRIPTION |
|---|---|
| delay | Delay between chunks in seconds. TYPE: float DEFAULT: 0 |
| chunks | List of response events to stream. If not provided, the default mock response stream events are used. TYPE: list[ResponseStreamEvent] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| AsyncIterator[ResponseStreamEvent] | AsyncIterator yielding the chunks with the specified delay. |
Source code in conatus/models/mocks/open_ai.py
mock_cc
async
¶
mock_cc(
content: str | None = None,
message: ChatCompletion | None = None,
) -> ChatCompletion
Mock OpenAI chat completion.
By default, you get a mock chat completion with a simple message. If you
want to get a message with a specific content, you can pass a content
argument. You can also pass a message argument to get a message with a
specific structure.
| PARAMETER | DESCRIPTION |
|---|---|
| content | The content of the mock chat completion. TYPE: str \| None DEFAULT: None |
| message | The message to return. TYPE: ChatCompletion \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| ChatCompletion | The mock chat completion. |
Source code in conatus/models/mocks/open_ai.py
mock_response
async
¶
mock_response(
content: str | None = None,
message: Response | None = None,
) -> Response
Mock OpenAI response.
By default, you get a mock response with a simple message. If you want to
get a message with a specific content, you can pass a content argument.
You can also pass a message argument to get a message with a specific
structure.
| PARAMETER | DESCRIPTION |
|---|---|
| content | The content of the mock response. TYPE: str \| None DEFAULT: None |
| message | The message to return. TYPE: Response \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| Response | The mock response. |
Source code in conatus/models/mocks/open_ai.py
1. No, the Assistants API does not count 😇. ↩