
openai_chat

OpenAI-Compatible Chat API.

Provides API endpoints for chat completions following the OpenAI Chat Completions specification.

Attributes

RESOURCE_CHAT module-attribute

RESOURCE_CHAT = 'Chat'

chat_router module-attribute

chat_router = APIRouter(
    prefix=format_path(RESOURCE_CHAT),
    tags=[RESOURCE_CHAT],
    responses=DEFAULT_HTTP_ERROR_RESPONSES,
)

Classes

Functions

chat_completion async

chat_completion(
    request: ChatCompletionRequest,
    sa_session: SASession = Depends(db_session_dependency),
    neo4j_session: NJSession = Depends(
        neo4j_session_dependency
    ),
    job_pool: ThreadPoolExecutor = Depends(
        job_pool_dependency
    ),
)

OpenAI API-compatible chat completion endpoint.

This endpoint supports human-to-AI, AI-to-AI, and group conversations.

The participants, session, and responding agent are auto-detected before the completion is generated. An optional system message can be sent to provide hints to this process. This message is removed from the chat and is never seen by the LLM.

{
    "role": "system",
    "content": "ELEANOR_SYSTEM {\"source\":\"...\",\"user\":\"...\",\"char\":\"...\",\"session\":\"...\"}"
}
Field    Description
source   A freeform string that identifies the client; helpful for debugging
user     The user identifier
char     The character identifier
session  The session identifier

All fields are optional and the framework will use a best-effort approach to determine the correct information.
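As a sketch, a client could build this hint message as shown below; all field values here are placeholders, and the surrounding user message is purely illustrative:

```python
import json

# Sketch: building the optional ELEANOR_SYSTEM hint message described above.
# Every field is optional; values shown are placeholders.
hints = {
    "source": "example-client/1.0",  # freeform client identifier, for debugging
    "user": "user-123",              # user identifier
    "char": "char-456",              # character identifier
    "session": "session-789",        # session identifier
}

system_message = {
    "role": "system",
    # The content is the literal prefix "ELEANOR_SYSTEM " followed by a JSON object.
    "content": "ELEANOR_SYSTEM " + json.dumps(hints),
}

# The hint message accompanies the normal chat messages; the framework
# strips it before the LLM sees the conversation.
messages = [system_message, {"role": "user", "content": "Hello!"}]
```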

User and Agent participants identified in the request must exist in the framework, or an error will be raised.

Streaming is supported.
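When streaming, OpenAI-compatible servers emit server-sent events (`data: {...}` lines terminated by `data: [DONE]`). A minimal client-side sketch of assembling the streamed text, assuming that wire format (the sample chunks below are fabricated for illustration):

```python
import json

def collect_stream(lines):
    """Concatenate content deltas from OpenAI-style SSE lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

# Simulated stream for illustration:
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
```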

Server Configuration

WIP

Client Configuration Recommendations
  • The model name sent by the client is the desired Eleanor framework namespace name
  • Pass human/AI names in messages; see the name field in the OpenAI API specification for details
  • Disable client-side RAG systems
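The recommendations above can be sketched as a request payload; the namespace and participant names here are placeholders, not values defined by this module:

```python
# Sketch of a Chat Completions request payload following the client
# recommendations above. All identifiers are illustrative placeholders.
payload = {
    # The model name is interpreted as the desired Eleanor framework namespace.
    "model": "my-namespace",
    "messages": [
        # Human/AI participant names go in the OpenAI "name" field.
        {"role": "user", "name": "Alice", "content": "Hi there"},
        {"role": "assistant", "name": "Eleanor", "content": "Hello, Alice."},
        {"role": "user", "name": "Alice", "content": "How are you?"},
    ],
    # Optional: request streaming (supported by this endpoint).
    "stream": False,
}
```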