# Chat Manager

The `ChatManager` class is responsible for handling chat interactions between agents and language models. It provides a unified interface for processing messages, managing chat state, and handling different types of chat engines.
## Class Overview

```python
from yosrai.core import ChatManager, LLM
from yosrai.utils.config import Config

chat_manager = ChatManager(
    config=Config(),
    engine_type="default",
    llm=LLM(provider="openai", model="gpt-3.5-turbo"),
)
```
## Constructor Parameters

- `config` (Config, optional): Configuration settings for the chat manager
- `engine_type` (Union[EngineType, str], optional): The type of chat engine to use
- `llm` (LLM, optional): Language model configuration
- Additional keyword arguments for LLM configuration (see the sketch below)
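The last item implies that LLM settings can be passed straight through the constructor. A minimal sketch of that shorthand, assuming extra keyword arguments are forwarded to the LLM configuration (the page states the forwarding but not its exact mechanics):

```python
from yosrai.core import ChatManager

# Assumed shorthand: extra kwargs are forwarded to the LLM configuration,
# i.e. roughly equivalent to llm=LLM(provider="openai", model="gpt-3.5-turbo").
chat_manager = ChatManager(
    engine_type="default",
    provider="openai",
    model="gpt-3.5-turbo",
)
```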
## Key Methods

### process()

```python
async def process(
    self,
    input,
    assistant_name: str = None,
    streaming: bool = False,
    concatenate: bool = True,
    message_type: MessageType = MessageType.AI,
    streaming_callback: Callable = None,
    messaging_console: MessagingConsoleBaseClass = None,
    tools: list = None,
)
```
Processes chat messages and generates responses. This is the main method for handling chat interactions.
Parameters:

- `input`: The input message or messages to process
- `assistant_name`: Name of the assistant (defaults to the provider and model name)
- `streaming`: Whether to stream the response
- `concatenate`: Whether to concatenate multiple messages
- `message_type`: Type of message (AI, Human, System, etc.)
- `streaming_callback`: Callback function for streaming responses
- `messaging_console`: Console for message display
- `tools`: List of tools available for the chat
### get_chat_model()

Returns the underlying chat model with optional tool configuration.
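A minimal sketch of calling it, assuming tools can be passed as a keyword argument (the exact signature is not shown on this page; `tools` here is a list like the one in the Using Tools example below):

```python
# Sketch only: the `tools` keyword is an assumption based on the
# description above, not a documented signature.
chat_model = chat_manager.get_chat_model(tools=tools)
```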
## Properties

### chat_object

Returns the underlying chat object implementation.

### chat_model

Returns the current chat model instance.
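Both are read-only attributes on an instance, for example:

```python
chat_manager = ChatManager()

chat_obj = chat_manager.chat_object  # underlying chat object implementation
model = chat_manager.chat_model      # current chat model instance
```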
## Usage Examples

### Basic Chat Processing

```python
from yosrai.core import ChatManager

# Create chat manager
chat_manager = ChatManager()

# Process a message
response = await chat_manager.process(
    input="Hello, how are you?",
    assistant_name="AI Assistant",
)
```
### Streaming Chat

```python
# Callback invoked with each streamed chunk
async def stream_callback(chunk):
    print(chunk, end="")

response = await chat_manager.process(
    input="Tell me a story",
    streaming=True,
    streaming_callback=stream_callback,
)
```
### Using Tools

```python
# `calculate` and `get_weather` are user-defined callables
tools = [
    {"name": "calculator", "func": calculate},
    {"name": "weather", "func": get_weather},
]

response = await chat_manager.process(
    input="What's the weather like?",
    tools=tools,
)
```
## Engine Types

The `ChatManager` supports different chat engines through the `engine_type` parameter (see the construction sketch after this list):

- `"default"`: Standard chat processing
- `"streaming"`: Optimized for streaming responses
- `"parallel"`: Supports parallel message processing
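For example, the engine is selected at construction time using the string values listed above:

```python
from yosrai.core import ChatManager

# The three documented engine_type values
default_manager = ChatManager(engine_type="default")
streaming_manager = ChatManager(engine_type="streaming")
parallel_manager = ChatManager(engine_type="parallel")
```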
## Integration with Messaging Console

The `ChatManager` can integrate with custom messaging consoles:

```python
from yosrai.abstract import MessagingConsoleBaseClass

class CustomConsole(MessagingConsoleBaseClass):
    def display_message(self, message):
        # Custom display logic
        pass

console = CustomConsole()

response = await chat_manager.process(
    input="Hello",
    messaging_console=console,
)
```
## Error Handling

The `ChatManager` includes built-in error handling for common scenarios:

- Invalid message formats
- LLM API errors
- Tool execution errors
- Stream interruptions
Errors are propagated through the async interface and can be caught using standard try/except blocks.
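A minimal sketch of catching them (the specific exception classes are not documented here, so this catches `Exception` broadly):

```python
try:
    response = await chat_manager.process(input="Hello, how are you?")
except Exception as exc:  # specific exception types are not documented here
    print(f"Chat processing failed: {exc}")
```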