Traditional AI APIs require you to trust the provider with your data. Your prompts and completions pass through the provider's infrastructure in plaintext — logged, cached, and accessible to operators.
Tresor uses confidential computing to process your data inside hardware-isolated enclaves. The key insight: because the code runs in a secure enclave, even the server operator cannot read the memory or the data being processed.
Your API call first reaches the Tresor router, which authenticates your API key and selects a provider for the request.
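As a rough sketch, a routed call looks like any authenticated JSON API request. The endpoint URL, header names, and body fields below are illustrative assumptions, not Tresor's actual API surface:

```python
import json

# Hypothetical endpoint -- the real Tresor base URL and paths may differ.
TRESOR_URL = "https://api.tresor.example/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> tuple[dict, bytes]:
    """Build the headers and JSON body for a routed inference call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the router authenticates this key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "auto",  # assumed field: let the router pick the provider
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return headers, body

headers, body = build_request("tk-demo", "Hello, enclave!")
```

Note that TLS protects the request in transit to the router; the enclave protections described below take over once the request reaches the TEE.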
Your request is forwarded to a Trusted Execution Environment (TEE) — a hardware-isolated enclave running on AMD SEV-SNP or Intel TDX. The enclave's memory is encrypted by the CPU. No one — not the cloud provider, not Tresor, not anyone with physical server access — can read or tamper with the data while it's being processed.
Before processing your request, the enclave produces a cryptographic attestation report: a hardware-signed proof that the exact expected code is running inside a genuine TEE. Tresor verifies this report automatically; you can also request a receipt and verify it independently.
The completion is streamed back to you over TLS. At no point does your data exist in plaintext outside the enclave boundary.
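Streamed completions are typically delivered as server-sent events, where each `data:` line carries a token delta and a sentinel marks the end. The wire format below (`data: {...}` lines ending with `[DONE]`) mirrors common LLM streaming APIs and is an assumption about Tresor's stream, not a documented format:

```python
import json

def collect_stream(lines):
    """Assemble streamed token deltas into the full completion text."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        parts.append(json.loads(payload)["delta"])
    return "".join(parts)

# Example of a (hypothetical) decrypted TLS stream as the client sees it:
demo = [
    'data: {"delta": "Hel"}',
    'data: {"delta": "lo"}',
    "data: [DONE]",
]
```

The client only ever sees the TLS-protected stream; the plaintext completion is assembled inside the enclave and on your machine, never in between.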
Tresor routes inference through verified confidential computing providers: