Printing AI messages¶
The AIModelPrintingMixin
class is a mixin that prints status messages during the AI model call.
It is not meant to display the final response to the user; it only serves as an indicator of what is going on. After the AI model call, the terminal is cleaned up.
The mixin offers four modes:

- `normal`: The default value. If the response is streamed, it indicates to the user that chunks are arriving without displaying the full response.
- `preview`: If the response is streamed, it displays the response from the AI model as it is being received.
- `silent`: Nothing is printed to the standard output. Useful in scripts.
- `non_tty`: Selected automatically when the standard output is not a terminal.
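A minimal sketch of how a higher-level class might drive the mixin. The `ChatModel` subclass and its `_stream` helper are hypothetical; only the three mixin methods used below (`print_before_sending`, `write_preview_response`, `clean_after_receiving`) come from this page:

```python
from conatus.models.printing import AIModelPrintingMixin

class ChatModel(AIModelPrintingMixin):  # hypothetical higher-level class
    def ask(self, prompt: str) -> None:
        self.print_before_sending()  # status line before the call
        for partial in self._stream(prompt):  # _stream is hypothetical
            self.write_preview_response(partial)  # live preview per chunk
        self.clean_after_receiving()  # wipe the status lines afterwards
```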
Screen casts¶
These video examples show how the mixin is used by higher-level classes.
Warning
The screen casts are slightly out-of-date: AIPrompt.from_str has
been removed, and you should use the AIPrompt constructor directly
instead.
Normal mode¶
Preview mode¶
Silent mode¶
conatus.models.printing.AIModelPrintingMixin¶
AIModelPrintingMixin(
config: ModelConfig, *, debug_mode: bool | None = None
)
Mixin for printing messages during the AI model call.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | The configuration for the model. TYPE: `ModelConfig` |
| `debug_mode` | Whether to run in debug mode. TYPE: `bool \| None` DEFAULT: `None` |
Source code in conatus/models/printing.py
stdout_mode class-attribute instance-attribute¶
stdout_mode: Literal[
    "silent", "preview", "normal", "non_tty"
] = "normal"
The mode for printing the messages.
'normal': Notify the user that we're waiting for a response, then that we're receiving the response, displaying the number of chunks received so far.
'preview': Preview the response with a fancy output that updates as the response chunks are received. Only works if the response is a stream; if 'preview' is set and the response is not a stream, it falls back to 'normal'.
'silent': Do not print anything to the standard output.
Note that if we detect that we are running in a non-TTY environment, we
will use a special mode called 'non_tty', unless the user asked for
'silent'.
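As a rough illustration of that fallback rule, here is how the effective mode could be resolved. The `resolve_stdout_mode` helper is an assumption based on the description above, not the library's actual code:

```python
import sys
from typing import Literal

StdoutMode = Literal["silent", "preview", "normal", "non_tty"]

def resolve_stdout_mode(requested: StdoutMode) -> StdoutMode:
    """Hypothetical helper mirroring the documented fallback rule."""
    # Pipes, redirects, and CI logs are not TTYs: fall back to "non_tty"
    # unless the user explicitly asked for "silent".
    if not sys.stdout.isatty() and requested != "silent":
        return "non_tty"
    return requested
```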
lines_to_clear instance-attribute¶
lines_to_clear: int = 1
The number of lines to clear after printing the message.
This number might increase over time if the message gets long.
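The count grows because long lines wrap across multiple terminal rows. A sketch of how the row count for a message could be computed; `rows_needed` is a hypothetical helper, not the mixin's implementation:

```python
import shutil

def rows_needed(message: str) -> int:
    """Terminal rows a message occupies once long lines wrap."""
    width = shutil.get_terminal_size().columns
    # Ceiling division per logical line; an empty line still takes one row.
    return sum(max(1, -(-len(line) // width)) for line in message.splitlines())
```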
last_print_ts instance-attribute¶
The timestamp of the last print.
delta_between_prints instance-attribute¶
The time delta between prints.
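Together, `last_print_ts` and `delta_between_prints` suggest a simple throttle that skips a redraw until enough time has passed. A minimal sketch, assuming the timestamps come from `time.monotonic`:

```python
import time

def should_redraw(last_print_ts: float, delta_between_prints: float) -> bool:
    """True once enough time has elapsed since the last redraw."""
    return time.monotonic() - last_print_ts >= delta_between_prints
```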
debug_mode class-attribute instance-attribute¶
Whether to run in debug mode.
print_before_sending¶
print_before_sending(message: str | None = None) -> None
Print a user-facing message before the AI model call.
| PARAMETER | DESCRIPTION |
|---|---|
| `message` | The message to print. If not provided, the default message will be used. TYPE: `str \| None` DEFAULT: `None` |
Source code in conatus/models/printing.py
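A small usage sketch; the `Worker` class is illustrative, and the custom message text is made up:

```python
from conatus.models.printing import AIModelPrintingMixin

class Worker(AIModelPrintingMixin):  # hypothetical user of the mixin
    def run(self) -> None:
        self.print_before_sending()                 # default message
        self.print_before_sending("Retrying call")  # custom status line
```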
write_preview_response¶
write_preview_response(
response: IncompleteAIResponse[Any] | AIResponse[Any],
) -> None
Write the preview response.
This method handles real-time updating of the AI model's response, showing:
- The current message content
- Any tool/function calls
- Token usage statistics
The display is updated in-place using ANSI escape sequences for smooth updates. We also calculate the number of lines to clear after writing the new content, as well as the current terminal width.
Example Output:
Message: I think the best approach would be...
Tool calls: search_database({"query": "user preferences"})
Total tokens: 147
Pro tip: To debug, use time.sleep(1) to see how the preview response
is updated.
| PARAMETER | DESCRIPTION |
|---|---|
| `response` | The current incomplete response from the AI model. TYPE: `IncompleteAIResponse[Any] \| AIResponse[Any]` |
Source code in conatus/models/printing.py
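The following self-contained sketch demonstrates the in-place update technique described above (cursor-up and erase-line ANSI sequences plus a running line count); it is a generic illustration, not the mixin's source:

```python
import sys
import time

def render_inplace(frames: list[str]) -> None:
    """Redraw a multi-line status block in place, one frame at a time."""
    lines_to_clear = 0
    for frame in frames:
        for _ in range(lines_to_clear):         # erase the previous frame
            sys.stdout.write("\x1b[1A\x1b[2K")  # cursor up, erase line
        sys.stdout.write(frame + "\n")
        sys.stdout.flush()
        lines_to_clear = frame.count("\n") + 1  # assumes no line wrapping
        time.sleep(0.2)                         # slow enough to watch

render_inplace([
    "Message: I think...\nTotal tokens: 12",
    "Message: I think the best approach would be...\nTotal tokens: 147",
])
```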
clean_after_receiving¶
Clean the terminal after the AI model call.
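Conceptually, cleanup erases the transient status lines so the final output lands on a clean screen. A hypothetical sketch; the reset-to-1 return value matches the documented initial value of lines_to_clear but is otherwise an assumption:

```python
import sys

def clean_after_receiving(lines_to_clear: int) -> int:
    """Erase the transient status lines; return the reset counter."""
    for _ in range(lines_to_clear):
        sys.stdout.write("\x1b[1A\x1b[2K")  # cursor up, erase line
    sys.stdout.flush()
    return 1  # lines_to_clear returns to its documented initial value
```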