service
Agent AI chat service module
Attributes
Classes
AIService
Bases: BaseService
AI Service Layer
Functions
chat_completion_stream
chat_completion_stream(
request: ChatCompletionRequest,
) -> Iterator[str]
OpenAI-API-compatible streaming chat completion implementation.
Note that for this method to work properly, a LangChain monkeypatch needs to be applied. See: https://github.com/langchain-ai/langchain/issues/19185
Parameters:
- request (ChatCompletionRequest) – The request object containing the chat completion parameters.
Yields:
- str – A string representing a chat completion.
Returns:
- Iterator[str] – An iterator that yields chat completions.
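An OpenAI-compatible stream yields server-sent-event lines, each carrying one JSON chunk whose delta holds the newly generated text, followed by a `[DONE]` sentinel. A minimal sketch of that wire format, with `ChatCompletionRequest` reduced to a plain dataclass and a hard-coded token list standing in for the model (both are assumptions, not the real models from this module):

```python
import json
from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class ChatCompletionRequest:
    # Simplified stand-in for the real request model.
    model: str
    messages: List[dict] = field(default_factory=list)
    stream: bool = True


def chat_completion_stream(request: ChatCompletionRequest) -> Iterator[str]:
    """Yield OpenAI-style SSE lines, one JSON chunk per generated token."""
    tokens = ["Hello", ",", " world"]  # stand-in for the model's token stream
    for token in tokens:
        chunk = {
            "object": "chat.completion.chunk",
            "model": request.model,
            "choices": [
                {"index": 0, "delta": {"content": token}, "finish_reason": None}
            ],
        }
        yield f"data: {json.dumps(chunk)}\n\n"
    # Terminal sentinel required by the OpenAI streaming protocol.
    yield "data: [DONE]\n\n"
```

A client reconstructs the full message by concatenating the `delta.content` of every chunk before the `[DONE]` line.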
EleanorAIGenerationStats
Bases: BaseDataModel
Attributes
orig_text (class-attribute, instance-attribute)
FilteringStreamBuffer
FilteringStreamBuffer(
output_filters: List[GenerationFilter],
stream_buffer_flush_hwm: int = 50,
filter_kwargs: Optional[Dict[str, Any]] = None,
)
Attributes
Functions
check_and_filter
check_and_filter(
generation_index: int, skip_length_check: bool = False
) -> Optional[ChatCompletionChunk]
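The constructor and `check_and_filter` signature suggest a buffer that accumulates streamed text and only releases it once it exceeds the high-water mark, so that `output_filters` see enough context to act on. A rough sketch of that buffering idea, with plain callables in place of `GenerationFilter`, a string return in place of `ChatCompletionChunk`, and internal names (`feed`, `_buffer`) that are purely illustrative:

```python
from typing import Callable, List, Optional


class FilteringStreamBuffer:
    """Hold streamed text back until filters can run, then flush it.

    Text is withheld until the buffer passes stream_buffer_flush_hwm,
    so filters operate on a meaningful window rather than lone tokens.
    """

    def __init__(
        self,
        output_filters: List[Callable[[str], str]],
        stream_buffer_flush_hwm: int = 50,
    ) -> None:
        self.output_filters = output_filters
        self.hwm = stream_buffer_flush_hwm
        self._buffer = ""

    def feed(self, token: str) -> None:
        self._buffer += token

    def check_and_filter(
        self, generation_index: int, skip_length_check: bool = False
    ) -> Optional[str]:
        # generation_index would identify which generation/choice the flushed
        # chunk belongs to; it is unused in this simplified sketch.
        if not skip_length_check and len(self._buffer) < self.hwm:
            return None  # not enough context yet; keep buffering
        text = self._buffer
        for f in self.output_filters:
            text = f(text)
        self._buffer = ""
        return text
```

At end of stream the caller would pass `skip_length_check=True` to drain whatever remains below the high-water mark.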
StreamingOpenAICallback
Bases: BaseCallbackHandler
Attributes
Functions
on_llm_new_token
on_llm_new_token(
token: str,
*,
chunk: Optional[
Union[GenerationChunk, ChatGenerationChunk]
] = None,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any
) -> Any
Run on new LLM token. Only available when streaming is enabled.
Parameters:
- token (str) – The new token.
- chunk (GenerationChunk | ChatGenerationChunk, default: None) – The new generated chunk,
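A common way such a callback backs a streaming endpoint is to bridge LangChain's push-style `on_llm_new_token` calls to a pull-style iterator via a queue. A self-contained sketch of that pattern (the class here does not subclass the real `BaseCallbackHandler`, and the `on_llm_end`/`iter_tokens` names are assumptions for illustration):

```python
import queue
from typing import Any, Iterator, Optional
from uuid import UUID

_SENTINEL = object()  # marks end-of-stream on the queue


class StreamingOpenAICallback:
    """Bridge push-style LLM callbacks to a pull-style token iterator."""

    def __init__(self) -> None:
        self._queue: "queue.Queue[Any]" = queue.Queue()

    def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: Optional[Any] = None,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        # Called by the framework for each new token; queue it for consumers.
        self._queue.put(token)

    def on_llm_end(self, *args: Any, **kwargs: Any) -> Any:
        # Signal the consumer that generation is finished.
        self._queue.put(_SENTINEL)

    def iter_tokens(self) -> Iterator[str]:
        """Yield tokens as they arrive until the end-of-stream sentinel."""
        while True:
            item = self._queue.get()
            if item is _SENTINEL:
                return
            yield item
```

The blocking `queue.Queue` lets the LLM run on one thread while a response generator on another thread drains tokens to the client as they arrive.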