Documentation
Get started with Keymesh in under 5 minutes.
1 Quick Start
Create a Virtual Key
Log into your dashboard, go to API Keys, and create a new virtual key.
Your key will look like: km_live_abc123...
Add Your Provider Key
Go to Integrations and add your OpenAI or Anthropic API key. Keys are encrypted at rest and never logged.
Update Your Code
Change your base URL to https://proxy.keymesh.dev/v1 (OpenAI) or https://proxy.keymesh.dev (Anthropic), and use your Keymesh key in place of your provider key. That's it!
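The whole migration amounts to swapping the host and the key. As a minimal sketch using only the standard library (the request is built but never sent; `km_live_your_key` is a placeholder):

```python
import json
import urllib.request

# OpenAI-style requests go through the /v1 prefix on the proxy.
KEYMESH_BASE = "https://proxy.keymesh.dev/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) a chat completion request routed through Keymesh."""
    return urllib.request.Request(
        f"{KEYMESH_BASE}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "km_live_your_key", "gpt-4o",
    [{"role": "user", "content": "Hello!"}],
)
print(req.full_url)  # https://proxy.keymesh.dev/v1/chat/completions
```

Everything else in your request body and headers stays exactly as it was with the provider.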
Migrate with AI
Copy this prompt and paste it into Claude, ChatGPT, or Cursor along with your code.
I want to migrate my existing AI API code to use Keymesh. Keymesh is a proxy that lets me use virtual API keys with budget limits and usage tracking.
Here's what I need you to do:
1. Update my base URL to https://proxy.keymesh.dev/v1 (for OpenAI) or https://proxy.keymesh.dev (for Anthropic)
2. Replace my API key with my Keymesh virtual key (format: km_live_...)
3. Keep everything else the same - Keymesh is a transparent proxy
## HTTP Requests (curl, fetch, axios)
cURL:
```bash
curl https://proxy.keymesh.dev/v1/chat/completions \
-H "Authorization: Bearer km_live_your_key" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```
JavaScript fetch:
```javascript
const response = await fetch('https://proxy.keymesh.dev/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': 'Bearer km_live_your_key',
'Content-Type': 'application/json',
},
body: JSON.stringify({ model: 'gpt-4o', messages: [...] }),
});
```
Axios:
```javascript
const response = await axios.post('https://proxy.keymesh.dev/v1/chat/completions',
{ model: 'gpt-4o', messages: [...] },
{ headers: { 'Authorization': 'Bearer km_live_your_key' } }
);
```
## SDK Examples
OpenAI Python:
```python
client = OpenAI(api_key="km_live_your_key", base_url="https://proxy.keymesh.dev/v1")
```
OpenAI TypeScript:
```typescript
const client = new OpenAI({ apiKey: "km_live_your_key", baseURL: "https://proxy.keymesh.dev/v1" });
```
Anthropic Python:
```python
client = anthropic.Anthropic(api_key="km_live_your_key", base_url="https://proxy.keymesh.dev")
```
## Key Points
- OpenAI endpoints: https://proxy.keymesh.dev/v1/...
- Anthropic endpoints: https://proxy.keymesh.dev/v1/messages (the SDK's base_url omits /v1 because the SDK appends it)
- All keys start with: km_live_
- Streaming works the same way
Please analyze my code and show me exactly what changes I need to make to migrate to Keymesh.
2 Code Examples
cURL:
```bash
curl https://proxy.keymesh.dev/v1/chat/completions \
  -H "Authorization: Bearer km_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```
OpenAI Python:
```python
from openai import OpenAI

client = OpenAI(
    api_key="km_live_your_key",
    base_url="https://proxy.keymesh.dev/v1",
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
OpenAI TypeScript:
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "km_live_your_key",
  baseURL: "https://proxy.keymesh.dev/v1",
});
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);
```
Anthropic Python:
```python
import anthropic

client = anthropic.Anthropic(
    api_key="km_live_your_key",
    base_url="https://proxy.keymesh.dev",
)
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content)
```
3 Configuration
Budget Limits
Set a maximum spend per virtual key. Requests beyond the limit return 402.
- No limit — Unlimited spending (default)
- Fixed budget — Key stops when exhausted
- Auto-reset — Daily, weekly, or monthly
Supported Providers
OpenAI
Anthropic
4 Error Codes
| Code | Meaning |
|---|---|
| 401 | Invalid API key |
| 402 | Budget exceeded |
| 403 | Key revoked |
| 429 | Rate limited |
| 502 | Provider error |
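The table maps cleanly onto client-side handling: only 429 and 502 are transient, while the rest need operator action (a new key, a budget reset). A minimal sketch, where the helper and its categories are illustrative and not part of the Keymesh API:

```python
# Keymesh proxy status codes from the error table above.
ERROR_MEANINGS = {
    401: "Invalid API key",
    402: "Budget exceeded",
    403: "Key revoked",
    429: "Rate limited",
    502: "Provider error",
}

# Only rate limits and upstream provider errors are worth retrying.
RETRYABLE = {429, 502}

def handle_status(code: int) -> str:
    """Return 'retry' for transient errors, 'stop' for terminal ones,
    and 'ok' for anything outside the error table."""
    if code not in ERROR_MEANINGS:
        return "ok"
    return "retry" if code in RETRYABLE else "stop"

print(handle_status(402))  # stop: budget exhausted, needs a reset or higher limit
print(handle_status(429))  # retry: rate limits are transient
```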
5 FAQ
Does Keymesh add latency?
Minimal. We run on Cloudflare's edge network, adding only 10-30ms. Streaming responses flow directly through.
Is my provider API key secure?
Yes. Keys are encrypted at rest with AES-256-GCM, never cached in Redis, and never logged. We don't store prompts or responses.
Do you support streaming?
Yes! Full SSE streaming support for OpenAI and Anthropic. Token usage is calculated from the stream for accurate costs.
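Because streaming is pass-through SSE, responses can be consumed with any SSE-aware client. As an illustration of the framing, here is a minimal parser for `data:` lines; the sample chunks are shaped like OpenAI streaming deltas, not captured Keymesh output:

```python
import json

def iter_sse_data(lines):
    """Yield the JSON payload of each `data:` line in an SSE stream,
    stopping at the OpenAI-style [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, event names, and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Sample chunks shaped like OpenAI streaming deltas (illustrative only):
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_sse_data(sample))
print(text)  # Hello!
```

In practice the official SDKs handle this framing for you; the sketch only shows what flows over the wire through the proxy.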
What happens when budget is exceeded?
We check budget before forwarding the request. You get a 402 error instantly—the request never reaches the provider, so you're not charged.