Chat Completions

Create chat completions, with optional streaming via Server-Sent Events (SSE).

POST /v1/chat/completions

Create a chat completion. The endpoint is OpenAI-compatible: use the same request format you would with OpenAI's API.

Request

```bash
curl https://api.trytresor.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss-120b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

Request body

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Yes | Model identifier (e.g. `gpt-oss-120b`). See Models. |
| `messages` | array | Yes | Array of message objects with `role` and `content`. |
| `stream` | boolean | No | Enable streaming via SSE. Default: `false`. |
| `temperature` | number | No | Sampling temperature (0–2). Default: `1`. |
| `max_tokens` | integer | No | Maximum number of tokens to generate. |
| `region` | string | No | Preferred inference region (e.g. `eu`). Tresor extension. |
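As a sketch, a request body exercising the optional parameters above might be built like this (the parameter names come from the table; the specific values are illustrative):

```python
import json

# Chat completion request body using the documented optional parameters.
payload = {
    "model": "gpt-oss-120b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,       # non-streaming response
    "temperature": 0.7,    # sampling temperature in [0, 2]
    "max_tokens": 256,     # cap on generated tokens
    "region": "eu",        # Tresor extension: preferred inference region
}
body = json.dumps(payload)
```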

Message object

| Field | Type | Description |
| --- | --- | --- |
| `role` | string | One of `system`, `user`, or `assistant`. |
| `content` | string | The message text. |

Tresor-specific headers

| Header | Description |
| --- | --- |
| `X-Tresor-Receipt` | Set to `true` to receive a signed receipt. |
| `X-Tresor-Nonce` | Client nonce for replay protection (included in the receipt). |
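A minimal sketch of attaching these headers to a request, assuming the nonce is any fresh client-generated value (the header names are from the table above; the API key is a placeholder):

```python
import secrets

# Standard headers plus the Tresor-specific extensions.
nonce = secrets.token_hex(16)  # fresh client nonce for replay protection
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
    "X-Tresor-Receipt": "true",  # request a signed receipt
    "X-Tresor-Nonce": nonce,     # echoed back inside the receipt
}
```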

Response (non-streaming)

```json
{
  "id": "chatcmpl-abc",
  "object": "chat.completion",
  "model": "gpt-oss-120b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 7,
    "total_tokens": 16
  }
}
```
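Assuming the response shape above, extracting the assistant's reply and token usage in Python might look like:

```python
import json

# Sample response body, copied from the non-streaming example above.
response_body = """{
  "id": "chatcmpl-abc",
  "object": "chat.completion",
  "model": "gpt-oss-120b",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello! How can I help you?"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}"""

data = json.loads(response_body)
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
```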

Streaming response

When stream: true, the response is sent as Server-Sent Events (SSE). Each event contains a JSON chunk:

```text
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
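A sketch of assembling the streamed chunks into the full message, parsing the `data:` lines shown above (a real client would read these from the HTTP response stream rather than a list):

```python
import json

# SSE lines as they would arrive on the wire, taken from the example above.
events = [
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]

parts = []
for line in events:
    data = line.removeprefix("data: ")
    if data == "[DONE]":          # terminal sentinel, not JSON
        break
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"]
    parts.append(delta.get("content", ""))  # final chunk has an empty delta

text = "".join(parts)  # "Hello!"
```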