# Chat Completions API

Overview of Jan’s OpenAI-compatible LLM endpoints.
Jan exposes an OpenAI-compatible Chat Completions surface so existing SDKs and integrations continue to work unchanged.
- Base URL: `http://localhost:8000/llm`
- Authentication: use the same bearer tokens or API keys described in the Authentication docs.
- Response shape: Mirrors OpenAI’s schema, including tool/function calling payloads and streaming chunks.
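Because the request body mirrors OpenAI's schema, a minimal request can be built with nothing but the standard library. This is a sketch assuming the base URL above; the API key and model ID are placeholders, not real values:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/llm"  # Jan's OpenAI-compatible base URL
API_KEY = "YOUR_API_KEY"                # placeholder; see the Authentication docs

# The request body follows OpenAI's Chat Completions schema unchanged.
payload = {
    "model": "jan-model",  # hypothetical model ID; list real ones via GET /v1/models
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

def build_request() -> urllib.request.Request:
    """Build (but do not send) the POST /v1/chat/completions request."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(build_request())` returns a JSON body in OpenAI's response shape.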
## When to use it
Reach for this API whenever you need conversational reasoning, structured tool output, or access to enabled models.
- Discover available models: `GET /v1/models`
- Generate chat responses: `POST /v1/chat/completions`
Both endpoints accept the same payloads as OpenAI, so you can point an existing client at Jan simply by swapping the base URL and API key.
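To illustrate, listing models needs nothing beyond the swapped base URL. A stdlib sketch; the token is a placeholder:

```python
import urllib.request

def list_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET /v1/models request against an OpenAI-compatible server."""
    return urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

# Point at Jan instead of OpenAI by changing only the base URL.
req = list_models_request("http://localhost:8000/llm", "YOUR_API_KEY")
print(req.full_url)  # http://localhost:8000/llm/v1/models
```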
## Typical workflow
- Call `GET /v1/models` to see which providers and models are enabled.
- Send `POST /v1/chat/completions` with your conversation, optionally enabling tools/functions.
- Stream responses by setting `stream: true`; Jan maintains full compatibility with OpenAI’s SSE format.
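Since the streaming format matches OpenAI's, the usual SSE parsing pattern applies: each line is `data: {json chunk}` and the stream ends with `data: [DONE]`. A simplified sketch (the sample chunks below are fabricated in OpenAI's streaming shape):

```python
import json

def iter_sse_content(lines):
    """Yield content deltas from OpenAI-style SSE lines (simplified sketch)."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip comments and keep-alive blanks
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Fabricated example chunks:
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_content(sample)))  # Hello
```

In a real client you would feed this generator the decoded lines of the HTTP response body instead of a fixed list.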
Need help troubleshooting? See Debugging Requests for guidance.
## Delete API key (DELETE)
Revokes and deletes an API key by ID. Deleted keys can no longer be used for authentication.
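A request sketch in the same stdlib style. Note that the `/v1/api-keys/{id}` path used here is a hypothetical placeholder for illustration, not a documented route; check the API reference for the real one:

```python
import urllib.request

def delete_api_key_request(base_url: str, token: str, key_id: str) -> urllib.request.Request:
    # NOTE: the /v1/api-keys/{id} path is an assumption for this sketch;
    # consult Jan's API reference for the actual endpoint path.
    return urllib.request.Request(
        f"{base_url}/v1/api-keys/{key_id}",
        headers={"Authorization": f"Bearer {token}"},
        method="DELETE",
    )
```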
## Get a conversation item (GET)

Retrieve a single item from a conversation by item ID.

**Features:**
- Retrieve a specific item by ID
- Returns the complete item with all content
- Automatic ownership verification via the conversation
- Optional `include` parameter for additional fields

**Response Fields:**
- `id`: Item ID with `msg_` prefix
- `type`: Item type (message, tool_call, etc.)
- `role`: Role for message items (user, assistant)
- `content`: Item content array
- `status`: Item status (completed, incomplete, etc.)
- `created_at`: Unix timestamp
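The response fields above can be consumed like so. The item below is fabricated to match the documented fields, and the inner shape of the `content` array entries (`{"type": "text", "text": ...}`) is an assumption for this sketch:

```python
# Fabricated item following the documented response fields.
item = {
    "id": "msg_abc123",       # documented msg_ prefix; the rest of the ID is made up
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Hi there"}],  # content-entry shape assumed
    "status": "completed",
    "created_at": 1700000000, # Unix timestamp
}

def summarize_item(item: dict) -> str:
    """Flatten an item's text content into a single line (sketch)."""
    assert item["id"].startswith("msg_")
    parts = [c["text"] for c in item["content"] if c.get("type") == "text"]
    return f'{item["role"]}: {" ".join(parts)}'

print(summarize_item(item))  # assistant: Hi there
```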