API Reference

Chat Completions API

Overview of Jan’s OpenAI-compatible LLM endpoints.

Jan exposes an OpenAI-compatible Chat Completions surface, so existing SDKs and integrations continue to work unchanged; a client configuration sketch follows the list below.

  • Base URL: http://localhost:8000/llm
  • Authentication: Use the same bearer tokens or API keys described in the Authentication docs.
  • Response shape: Mirrors OpenAI’s schema, including tool/function calling payloads and streaming chunks.
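
Because the surface is OpenAI-compatible, the official OpenAI SDK can be pointed at Jan directly. The sketch below is illustrative only: it assumes the base URL above, a /v1 path prefix matching the endpoints in the workflow further down, and a placeholder API key; substitute the values from your own Jan instance.

```python
# Minimal sketch: pointing the official OpenAI Python SDK at Jan.
# Assumptions: the base URL from this page plus a /v1 prefix, and a
# placeholder API key. Replace both with your own instance's values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/llm/v1",  # Jan's OpenAI-compatible base URL
    api_key="YOUR_JAN_API_KEY",               # bearer token/key from the Authentication docs
)
```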

When to use it

Reach for this API whenever you need conversational reasoning, structured tool output, or programmatic access to whichever models you have enabled in Jan.

Both endpoints (GET /v1/models and POST /v1/chat/completions) accept the same payloads as OpenAI, so you can point an existing client at Jan simply by swapping the base URL and API key.
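
As an illustration, the request below reuses the client sketched above and sends the same request body you would send to OpenAI. The model id is a placeholder; use one returned by GET /v1/models.

```python
# Sketch of a plain (non-streaming) chat completion against Jan.
# "llama3.2" is a placeholder model id; pick one that is enabled in your instance.
response = client.chat.completions.create(
    model="llama3.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this API in one sentence."},
    ],
)
print(response.choices[0].message.content)
```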

Typical workflow

  1. Call GET /v1/models to understand which providers/models are enabled.
  2. Send POST /v1/chat/completions with your conversation, optionally enabling tools/functions.
  3. Stream responses by setting stream: true; Jan maintains full compatibility with OpenAI’s SSE format (see the sketch after this list).
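
The sketch below walks through these three steps with the client configured earlier. The prompt is arbitrary, and the first enabled model is chosen purely for illustration.

```python
# Workflow sketch: discover models, then stream a chat completion.
# Reuses the `client` configured above; the model choice is illustrative only.

# Step 1: see which providers/models are enabled.
models = client.models.list()
model_id = models.data[0].id

# Steps 2-3: send the conversation and stream the reply as OpenAI-format SSE chunks.
stream = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Explain SSE streaming in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```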

Need help troubleshooting? See Debugging Requests for guidance.