Create a chat completion. The endpoint is fully OpenAI-compatible: use the same request format you would with OpenAI's API.
```bash
curl https://api.trytresor.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss-120b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model identifier (e.g. `gpt-oss-120b`). See Models. |
| `messages` | array | Yes | Array of message objects with `role` and `content`. |
| `stream` | boolean | No | Enable streaming via SSE. Default: `false`. |
| `temperature` | number | No | Sampling temperature (0–2). Default: `1`. |
| `max_tokens` | integer | No | Maximum number of tokens to generate. |
| `region` | string | No | Preferred inference region (e.g. `eu`). Tresor extension. |
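Putting the parameters together, a request can be assembled with nothing but the standard library. This is a minimal sketch: only `model` and `messages` are required, the optional values shown are illustrative, and `region` is the Tresor-specific extension. The final `urlopen` call is left commented out since it performs the actual network request.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"

# Required parameters plus a few optional ones from the table above.
payload = {
    "model": "gpt-oss-120b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 1,   # optional, 0-2
    "max_tokens": 256,  # optional cap on generated tokens
    "region": "eu",     # optional Tresor extension
}

request = urllib.request.Request(
    "https://api.trytresor.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# response = urllib.request.urlopen(request)  # sends the request
```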
| Field | Type | Description |
|---|---|---|
| `role` | string | One of `system`, `user`, or `assistant`. |
| `content` | string | The message text. |
| Header | Description |
|---|---|
| `X-Tresor-Receipt` | Set to `true` to receive a signed receipt. |
| `X-Tresor-Nonce` | Client nonce for replay protection (included in the receipt). |
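To request a signed receipt, the two headers can be added alongside the standard ones. In this sketch the nonce is generated with `secrets.token_hex`; the hex format is an assumption, as the API only requires that the client supplies a nonce.

```python
import secrets

API_KEY = "YOUR_API_KEY"

# A fresh random nonce per request; it is echoed back inside the signed
# receipt so the client can tie the receipt to this specific call.
nonce = secrets.token_hex(16)

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "X-Tresor-Receipt": "true",  # ask for a signed receipt
    "X-Tresor-Nonce": nonce,     # replay protection
}
```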
```json
{
  "id": "chatcmpl-abc",
  "object": "chat.completion",
  "model": "gpt-oss-120b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 7,
    "total_tokens": 16
  }
}
```
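The fields a client typically reads from a non-streaming response can be pulled out like so, using the example payload above as a literal:

```python
import json

# The example response body from above, as a literal string.
response_body = """
{
  "id": "chatcmpl-abc",
  "object": "chat.completion",
  "model": "gpt-oss-120b",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help you?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}
"""

data = json.loads(response_body)
reply = data["choices"][0]["message"]["content"]    # the assistant's text
finish = data["choices"][0]["finish_reason"]        # why generation stopped
usage = data["usage"]                               # token accounting

print(reply)  # -> Hello! How can I help you?
```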
When `stream` is `true`, the response is sent as Server-Sent Events (SSE). Each event contains a JSON chunk:
```
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
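A streaming client strips the `data: ` prefix from each event, stops at the `[DONE]` sentinel, and concatenates each chunk's `delta.content`. A minimal sketch over the example events above (in practice the lines would be read from the HTTP response instead of a list):

```python
import json

# The three events and terminator from the example stream above.
events = [
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"gpt-oss-120b","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]

text = ""
for line in events:
    payload = line.removeprefix("data: ")
    if payload == "[DONE]":
        break  # end-of-stream sentinel, not JSON
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    # The final chunk carries only finish_reason, so its delta is empty.
    text += delta.get("content", "")

print(text)  # -> Hello!
```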