conatus.models.google.google
¶
Google/Gemini models.
DEFAULT_GOOGLE_MODEL_NAME
module-attribute
¶
The default name of the Google GenAI model to use.
DEFAULT_GOOGLE_MAX_TOKENS
module-attribute
¶
The default maximum number of tokens to generate for Google GenAI.
GoogleModelSpec
¶
Bases: TypedDict
The specification for a Google GenAI API call.
Note that unlike AnthropicModelSpec, OpenAIModelCCSpec, or OpenAIModelResponseSpec, the sentinel indicating that an argument is not given is just None.
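For illustration, a minimal sketch of a hand-built spec; the keys shown are assumptions based on the configuration attributes documented below, and the example assumes the TypedDict allows partial keys:
from conatus.models.google.google import GoogleModelSpec
# Hypothetical keys, shown only to illustrate the None sentinel.
spec: GoogleModelSpec = {
    "max_tokens": 4096,
    "temperature": None,  # None means "not given" for this spec
}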
GoogleModelConfig
dataclass
¶
GoogleModelConfig(
not_given_sentinel: object = CTUS_NOT_GIVEN,
api_key: OptionalArg[str] = CTUS_NOT_GIVEN,
model_name: str = DEFAULT_GOOGLE_MODEL_NAME,
max_tokens: int = DEFAULT_GOOGLE_MAX_TOKENS,
stdout_mode: OptionalArg[
Literal["normal", "preview", "silent"]
] = CTUS_NOT_GIVEN,
temperature: OptionalArg[float] = CTUS_NOT_GIVEN,
computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN,
use_mock: OptionalArg[bool] = CTUS_NOT_GIVEN,
only_pass_new_messages: OptionalArg[
bool
] = CTUS_NOT_GIVEN,
previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN,
truncation: OptionalArg[
Literal["auto", "disabled"]
] = CTUS_NOT_GIVEN,
)
Bases: ModelConfig
The configuration for a Google GenAI call, with defaults.
max_tokens
class-attribute
instance-attribute
¶
max_tokens: int = DEFAULT_GOOGLE_MAX_TOKENS
The maximum number of tokens to generate.
model_name
class-attribute
instance-attribute
¶
model_name: str = DEFAULT_GOOGLE_MODEL_NAME
The name of the model to use.
not_given_sentinel
class-attribute
instance-attribute
¶
not_given_sentinel: object = CTUS_NOT_GIVEN
The sentinel object to use for missing arguments.
This is used to represent a missing argument. If we encounter this sentinel object, we will not include it in the returned dictionary.
api_key
class-attribute
instance-attribute
¶
api_key: OptionalArg[str] = CTUS_NOT_GIVEN
The API key to use, if any.
If not provided, the API key will be taken from the environment variable
specified in the api_key_env_variable
attribute of the model.
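A minimal sketch; GOOGLE_API_KEY matches the api_key_env_variable documented further down, and the GoogleModelConfig import path is assumed to mirror the GoogleAIModel examples below:
import os
from conatus.models.google import GoogleModelConfig
# An explicit key takes precedence.
config = GoogleModelConfig(api_key="my-key")  # placeholder key
# Otherwise, leave api_key unset and rely on the environment.
os.environ["GOOGLE_API_KEY"] = "my-key"  # placeholder key
config = GoogleModelConfig()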
stdout_mode
class-attribute
instance-attribute
¶
stdout_mode: OptionalArg[
Literal["normal", "preview", "silent"]
] = CTUS_NOT_GIVEN
The mode to use for the standard output.
'normal': Notify the user that we're waiting for a response, and then that we're receiving the response, displaying the number of chunks received so far.
'preview': Preview the response with a fancy output that updates as the response chunks are received. Only works if the response is a stream. If 'preview' is set and the response is not a stream, it will default to 'normal'.
'silent': Do not print anything to the standard output.
Note that if we detect that we are running in a non-TTY environment, we will use a special mode called 'non_tty', unless the user asked for 'silent'.
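For example (a sketch; the import path is assumed to mirror the GoogleAIModel examples below):
from conatus.models.google import GoogleModelConfig
# Suppress progress output entirely, e.g. when running in a script.
silent = GoogleModelConfig(stdout_mode="silent")
# Live preview of streamed chunks; falls back to 'normal' if not streaming.
preview = GoogleModelConfig(stdout_mode="preview")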
temperature
class-attribute
instance-attribute
¶
temperature: OptionalArg[float] = CTUS_NOT_GIVEN
The temperature for the model.
computer_use_mode
class-attribute
instance-attribute
¶
computer_use_mode: OptionalArg[bool] = CTUS_NOT_GIVEN
Whether to use computer use mode.
If set to True, the model will be configured for computer use.
use_mock
class-attribute
instance-attribute
¶
use_mock: OptionalArg[bool] = CTUS_NOT_GIVEN
Whether to use a mock client or not.
This is useful for testing purposes.
only_pass_new_messages
class-attribute
instance-attribute
¶
only_pass_new_messages: OptionalArg[bool] = CTUS_NOT_GIVEN
Whether to only pass new messages to the model.
If set to True, only new messages will be passed to the model, rather
than the entire history. This is useful for "stateful" APIs, where the
history is not needed.
previous_messages_id
class-attribute
instance-attribute
¶
previous_messages_id: OptionalArg[str] = CTUS_NOT_GIVEN
The ID of the previous messages.
This is useful for "stateful" APIs, where the history is not needed.
This should only be used if only_pass_new_messages is True.
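A sketch of combining the two fields for a stateful API (the ID value is purely illustrative, and the import path is assumed):
from conatus.models.google import GoogleModelConfig
# Only send the new turn; the provider reconstructs history from the ID.
config = GoogleModelConfig(
    only_pass_new_messages=True,
    previous_messages_id="resp_abc123",  # hypothetical ID from an earlier response
)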
truncation
class-attribute
instance-attribute
¶
truncation: OptionalArg[Literal["auto", "disabled"]] = (
CTUS_NOT_GIVEN
)
The truncation strategy to use, either 'auto' or 'disabled'.
to_kwargs
¶
to_kwargs(
specification: None = None,
not_given_sentinel: object = CTUS_NOT_GIVEN,
argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD
to_kwargs(
specification: type[TDSpec] | None = None,
not_given_sentinel: object = CTUS_NOT_GIVEN,
argument_mapping: dict[str, str] | None = None,
) -> ModelConfigTD | TDSpec
Return the configuration as a dictionary.
You can provide a specification, which is a dictionary that matches the arguments expected by the provider. If a specification is provided, the method will return a dictionary that matches the specification (i.e. with only the keys that are expected by the provider).
You can also provide a not_given_sentinel, which is an object
that is used to represent a missing argument. If we encounter this
sentinel object, we will not include it in the returned dictionary.
Example¶
Using a specification¶
from conatus.models.open_ai import OpenAIModel, OpenAIModelCCSpec
from openai import NOT_GIVEN
model = OpenAIModel()
args_to_pass = model.config.to_kwargs(
specification=OpenAIModelCCSpec,
not_given_sentinel=NOT_GIVEN,
)
assert args_to_pass == {'max_tokens': 4096}
# And now you can do something like:
# response = self.client.chat.completions.create(
# messages=messages,
# **args_to_pass
# )
Using an argument mapping¶
from conatus.models.open_ai import OpenAIModel, OpenAIModelResponseSpec
from openai import NOT_GIVEN
model = OpenAIModel()
args_to_pass = model.config.to_kwargs(
specification=OpenAIModelResponseSpec,
argument_mapping={"max_tokens": "max_output_tokens"},
not_given_sentinel=NOT_GIVEN,
)
assert args_to_pass == {'max_output_tokens': 4096, 'truncation': 'auto'}
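By analogy, a Google-flavored sketch; since GoogleModelSpec uses plain None as its sentinel (see the note above), None is passed as not_given_sentinel. The import path and the exact contents of the result are assumptions:
from conatus.models.google import GoogleAIModel
from conatus.models.google.google import GoogleModelSpec
model = GoogleAIModel()
args_to_pass = model.config.to_kwargs(
    specification=GoogleModelSpec,
    not_given_sentinel=None,
)
# Fields still set to the sentinel are dropped, so only explicitly
# configured values (plus defaults such as max_tokens) should remain.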
| PARAMETER | DESCRIPTION |
|---|---|
| specification | The specification to use. This should be a TypedDict class that matches the arguments expected by the provider. TYPE: type[TDSpec] or None. DEFAULT: None |
| not_given_sentinel | The sentinel object to use. TYPE: object. DEFAULT: CTUS_NOT_GIVEN |
| argument_mapping | A dictionary that maps the keys of the configuration to the keys of the provider. The mapping is of the form {configuration_key: provider_key}. TYPE: dict[str, str] or None. DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| ModelConfigTD or TDSpec | The configuration as a dictionary. |

| RAISES | DESCRIPTION |
|---|---|
| TypeError | If the specification is not a TypedDict. |
Source code in conatus/models/config.py
from_dict
classmethod
¶
Create a new instance from a dictionary.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The configuration as a dictionary. TYPE: ModelConfigTD |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The new instance. |
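For example (a sketch; the import path and values are illustrative):
from conatus.models.google import GoogleModelConfig
config = GoogleModelConfig.from_dict({"max_tokens": 1024})
assert config.max_tokens == 1024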
from_dict_instance_or_none
classmethod
¶
Create a new instance from a dictionary or an instance.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The configuration as a dictionary or an instance. TYPE: Self, ModelConfigTD, or None |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The new instance. |
Source code in conatus/models/config.py
apply_config
¶
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: Literal[True]
) -> None
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: Literal[False] = False
) -> Self
apply_config(
new_config: Self | ModelConfigTD | None,
*,
inplace: bool = False
) -> Self | None
Copy the configuration and apply new values to it.
This ensures that you can create a hierarchy of configurations.
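For example, a sketch of layering one configuration on top of another (import path assumed):
from conatus.models.google import GoogleModelConfig
base = GoogleModelConfig(max_tokens=2048)
# Returns a modified copy; `base` is left untouched.
derived = base.apply_config({"temperature": 0.2})
assert derived.max_tokens == 2048
# Or mutate the instance itself; in-place calls return None.
base.apply_config({"temperature": 0.2}, inplace=True)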
| PARAMETER | DESCRIPTION |
|---|---|
| new_config | The new configuration. TYPE: Self, ModelConfigTD, or None |
| inplace | Whether to update the instance in place, or return a new copy. TYPE: bool. DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| Self or None | None if the modification happens in place; otherwise, a new instance with the modified configuration. |
Source code in conatus/models/config.py
GoogleModelCallArgs
dataclass
¶
GoogleModelCallArgs(
model_config: GoogleModelConfig,
system_message: str | None,
messages: list[Content],
tools: Iterable[FunctionDeclarationDict] | None,
output_schema: SchemaDict | None,
output_schema_was_converted_to_item_object: bool = False,
)
Bases: AIModelCallArgs
The arguments for the call to the Google GenAI API.
model_config
instance-attribute
¶
model_config: GoogleModelConfig
The configuration for the AI model.
tools
instance-attribute
¶
tools: Iterable[FunctionDeclarationDict] | None
The tools for the AI model.
output_schema
instance-attribute
¶
output_schema: SchemaDict | None
The output schema for the AI model.
GoogleAIModel
dataclass
¶
GoogleAIModel(
model_config: (
GoogleModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
client: Client | None = None,
*,
get_default_client_if_not_given: bool = True,
api_key: str | None = None,
model_name: str | None = None,
model_type: ModelType | None = None
)
Bases: BaseAIModel
Google GenAI model.
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the Google GenAI model. This can be a GoogleModelConfig, a ModelConfig, a plain ModelConfigTD dictionary, or None. TYPE: GoogleModelConfig, ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| client | The client to use. If not provided, a new client will be created. TYPE: Client or None. DEFAULT: None |
| get_default_client_if_not_given | Whether to get the default client if not provided. TYPE: bool. DEFAULT: True |
| api_key | The API key to use. If not provided, it will be read from the environment variable named by api_key_env_variable (GOOGLE_API_KEY). TYPE: str or None. DEFAULT: None |
| model_name | The name of the model to use. If not provided, it will be set to the default model name. TYPE: str or None. DEFAULT: None |
| model_type | The type of model to use. This is used to determine the model name only if model_name is not provided. TYPE: ModelType or None. DEFAULT: None |
Source code in conatus/models/google/google.py
model_config
instance-attribute
¶
model_config: GoogleModelConfig
The configuration for the Google GenAI model.
api_key_env_variable
class-attribute
instance-attribute
¶
api_key_env_variable: str = 'GOOGLE_API_KEY'
The environment variable that contains the API key.
model_config_cls
instance-attribute
¶
model_config_cls: type[ModelConfig]
The class of the model configuration.
config
property
¶
config: ModelConfig
The configuration for the model.
This is a convenience property for the model_config attribute.
default_client
¶
default_client(
model_config: ModelConfig,
api_key: str | None,
**kwargs: ParamType
) -> Client
Return the default client for the Google GenAI model.
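You rarely call this directly, since the constructor builds a client when none is given, but a sketch might look like this (with api_key=None, the key is presumably resolved from the environment, which is an assumption):
from conatus.models.google import GoogleAIModel
model = GoogleAIModel()
# Build a fresh client with the same configuration, e.g. after a fork.
client = model.default_client(model.config, api_key=None)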
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the Google GenAI model. TYPE: ModelConfig |
| api_key | The API key for the Google GenAI model. TYPE: str or None |
| **kwargs | Additional arguments to pass to the Google GenAI client. TYPE: ParamType |
Source code in conatus/models/google/google.py
default_config
¶
default_config() -> GoogleModelConfig
Return the default configuration for the Google GenAI model.
default_model_name
classmethod
¶
Get the default model name for the Google model.
| PARAMETER | DESCRIPTION |
|---|---|
| model_type | The type of model to use. TYPE: ModelType |

| RETURNS | DESCRIPTION |
|---|---|
| ModelName or None | The default model name for the Google model. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If the model type is not supported. |
Source code in conatus/models/google/google.py
call
¶
call(
prompt: AIPrompt[OutputSchemaType],
model_config: (
GoogleModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the Google model using the standardized prompt and response.
from conatus import AIPrompt
from conatus.models.google import GoogleAIModel
model = GoogleAIModel()
prompt = AIPrompt("Hello, how are you?")
response = model.call(prompt)
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the Google model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the Google model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: GoogleModelConfig, ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin]. DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| response_log_callback | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| **kwargs | Additional arguments to pass to the Google model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the Google model. |
Source code in conatus/models/google/google.py
acall
async
¶
acall(
prompt: AIPrompt[OutputSchemaType],
model_config: (
GoogleModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback: (
Callable[[str], None] | None
) = None,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the Google model using the standardized prompt and response.
For its sync counterpart, see call.
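For example, mirroring the synchronous example from call:
import asyncio
from conatus import AIPrompt
from conatus.models.google import GoogleAIModel
async def main() -> None:
    model = GoogleAIModel()
    prompt = AIPrompt("Hello, how are you?")
    response = await model.acall(prompt)
asyncio.run(main())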
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the Google model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the Google model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: GoogleModelConfig, ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin]. DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| response_log_callback | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| **kwargs | Additional arguments to pass to the Google model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the Google model. |
Source code in conatus/models/google/google.py
call_stream
¶
call_stream(
prompt: AIPrompt[OutputSchemaType],
model_config: (
GoogleModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback_stream: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the Google model using the standardized prompt and response, streaming the output.
For its async counterpart, see acall_stream.
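A usage sketch, analogous to the call example above:
from conatus import AIPrompt
from conatus.models.google import GoogleAIModel
model = GoogleAIModel()
prompt = AIPrompt("Hello, how are you?")
# Chunks are streamed as they arrive; the assembled response is returned.
response = model.call_stream(prompt)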
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the Google model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the Google model. TYPE: GoogleModelConfig, ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin]. DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| response_log_callback_stream | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once per chunk as the response streams in. TYPE: Callable[[str], None] or None. DEFAULT: None |
| **kwargs | Additional arguments to pass to the Google model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the Google model. |
Source code in conatus/models/google/google.py
acall_stream
async
¶
acall_stream(
prompt: AIPrompt[OutputSchemaType],
model_config: (
GoogleModelConfig
| ModelConfig
| ModelConfigTD
| None
) = None,
*,
printing_mixin_cls: type[
AIModelPrintingMixin
] = AIModelPrintingMixin,
prompt_log_callback: (
Callable[[str], None] | None
) = None,
response_log_callback_stream: (
Callable[[str], None] | None
) = None,
**kwargs: ParamType
) -> AIResponse[OutputSchemaType]
Call the Google model using the standardized prompt and response, streaming the output.
For its sync counterpart, see call_stream.
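And the async variant, as a sketch mirroring the examples above:
import asyncio
from conatus import AIPrompt
from conatus.models.google import GoogleAIModel
async def main() -> None:
    model = GoogleAIModel()
    prompt = AIPrompt("Hello, how are you?")
    response = await model.acall_stream(prompt)
asyncio.run(main())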
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the Google model. TYPE: AIPrompt[OutputSchemaType] |
| model_config | The configuration to use for the Google model. TYPE: GoogleModelConfig, ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| printing_mixin_cls | The class to use for printing. TYPE: type[AIModelPrintingMixin]. DEFAULT: AIModelPrintingMixin |
| prompt_log_callback | A callback for debugging purposes. This callback will be called with the prompt information (e.g. the messages, the model name, the tools, etc.) as a JSON string. TYPE: Callable[[str], None] or None. DEFAULT: None |
| response_log_callback_stream | A callback for debugging purposes. This callback will be called with the response information (e.g. the response, the model name, the usage, etc.) as a JSON string. Note that this callback is called once per chunk as the response streams in. TYPE: Callable[[str], None] or None. DEFAULT: None |
| **kwargs | Additional arguments to pass to the Google model. TYPE: ParamType |

| RETURNS | DESCRIPTION |
|---|---|
| AIResponse[OutputSchemaType] | The response from the Google model. |
Source code in conatus/models/google/google.py
convert_system_message_to_ai_model_format
¶
convert_system_message_to_ai_model_format(
system_message: SystemAIMessage, config: ModelConfig
) -> str
Convert the system message to the AI model format.
This method is meant to be overridden by subclasses.
| PARAMETER | DESCRIPTION |
|---|---|
| system_message | The system message to convert. TYPE: SystemAIMessage |
| config | The configuration for the AI model. TYPE: ModelConfig |

| RETURNS | DESCRIPTION |
|---|---|
| str | The converted system message. |
Source code in conatus/models/google/google.py
__del__
¶
Destructor for the Google GenAI model.
This is called when the object is deleted.
Source code in conatus/models/google/google.py
with_config
¶
with_config(
model_config: ModelConfig | ModelConfigTD | None,
*,
ignore_current_config: bool = False,
inplace: bool = False
) -> Self
Return a new instance of the model with the given configuration.
This is useful for quickly creating a new model without having to instantiate a new client.
from conatus.models import OpenAIModel
from conatus.models.config import ModelConfig
model = OpenAIModel()
model_with_config = model.with_config(ModelConfig(model_name="gpt-4o"))
# Note that this also works if you pass a dictionary.
model_with_config = model.with_config({"model_name": "gpt-4o"})
assert model_with_config.config.model_name == "gpt-4o"
assert model_with_config.client == model.client
| PARAMETER | DESCRIPTION |
|---|---|
| model_config | The configuration for the new model. TYPE: ModelConfig, ModelConfigTD, or None |
| ignore_current_config | Whether to ignore the current configuration. If True, the new configuration is applied on top of the defaults rather than merged with the current configuration. TYPE: bool. DEFAULT: False |
| inplace | Whether to modify the current instance in place. If True, the current instance is updated and returned. TYPE: bool. DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| Self | A new instance of the model with the given configuration. |
Source code in conatus/models/base.py
get_api_key
¶
get_api_key() -> str
Get the API key for the model.
This function should be implemented to retrieve environment variables.
| RETURNS | DESCRIPTION |
|---|---|
| str | The API key. |

| RAISES | DESCRIPTION |
|---|---|
| AIModelAPIKeyMissingError | If the API key is not found in the environment variables. |
| ValueError | If the API key is not set in the class attribute. |
Source code in conatus/models/base.py
respawn_client
¶
Respawn the client.
This method is used to respawn the client. It is mostly used so that we can refresh the client, which might be associated with an incompatible event loop.
Source code in conatus/models/base.py
simple_call
¶
simple_call(
prompt: str,
model_config: ModelConfig | ModelConfigTD | None = None,
*,
stream: bool = False
) -> str
Simple call to the AI model.
This is a convenience method for the call method.
from conatus.models import OpenAIModel
model = OpenAIModel()
q = "Which US state has never recorded temperatures below 0°F?"
response = model.simple_call(q)
# > That would be Hawaii.
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | The prompt to send to the AI model. TYPE: str |
| model_config | The configuration for the AI model. Passing a dictionary is recommended, so that users don't unintentionally re-establish default values. TYPE: ModelConfig, ModelConfigTD, or None. DEFAULT: None |
| stream | Whether to stream the response. If True, the response is streamed and assembled before being returned. TYPE: bool. DEFAULT: False |

| RETURNS | DESCRIPTION |
|---|---|
| str | The response from the AI model. |