What Is This Vulnerability
A leaked LLM API key is a critical vulnerability in which an API key for a Large Language Model provider — such as OpenAI, Anthropic, Cohere, Google AI, or Replicate — is embedded in a client-side JavaScript bundle. These keys grant direct access to paid AI services billed per token. Unlike Firebase API keys, which are designed to be public, LLM API keys are strictly secret and must never appear in client-side code.
This vulnerability is increasingly common as developers rapidly integrate AI features without a proper backend, calling LLM APIs directly from the browser or from frontend framework code.
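The anti-pattern looks innocuous in source form. A minimal sketch (the key value is a placeholder; any string compiled into the bundle, including NEXT_PUBLIC_ or VITE_ environment variables inlined at build time, is readable by every visitor):
// ANTI-PATTERN: do not ship this. Everything in a client bundle is public.
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
  apiKey: 'sk-ant-...', // placeholder: a real key here is visible to anyone
  dangerouslyAllowBrowser: true, // the SDK makes you opt in because this is unsafe
});
const reply = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 256,
  messages: [{ role: 'user', content: 'Hello' }],
});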
Why It's Dangerous
Exposed LLM API keys create severe financial and security risks:
- Unlimited financial liability — attackers can generate millions of tokens, running up bills of thousands or tens of thousands of dollars within hours (see the cost estimate below).
- Model abuse — your API key can be used to generate harmful, illegal, or policy-violating content, with the activity traced back to your account.
- Rate limit exhaustion — legitimate users are blocked when attackers consume your rate limits.
- Data exfiltration via prompts — if your key has access to fine-tuned models or assistants with retrieval, attackers can extract your proprietary training data.
- Account suspension — LLM providers may suspend your account due to policy violations committed using your key.
OpenAI keys (sk-...) and Anthropic keys (sk-ant-...) are the most commonly leaked, but any LLM provider key in client code is equally dangerous.
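To make the financial exposure concrete, here is a back-of-envelope estimate. Every number is an illustrative assumption, not any provider's current rate:
// Rough abuse-cost estimate; all figures are illustrative assumptions
const pricePerMillionOutputTokens = 15; // USD, assumed frontier-model rate
const tokensPerRequest = 4_000; // attacker maxes out the output length
const requestsPerMinute = 500; // parallel clients hammering the key
const tokensPerHour = tokensPerRequest * requestsPerMinute * 60; // 120,000,000
const costPerHour = (tokensPerHour / 1_000_000) * pricePerMillionOutputTokens;
console.log(`~$${costPerHour.toFixed(0)} per hour`); // ~$1800 per hour, ~$43k per day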
How to Detect
Search JavaScript bundles and source maps for LLM API key patterns:
// Patterns to search for in client-side bundles
// OpenAI (classic keys use the sk- prefix; newer project-scoped keys use sk-proj-)
const openaiPattern = /sk-(proj-)?[A-Za-z0-9_-]{20,}/;
// Anthropic
const anthropicPattern = /sk-ant-[A-Za-z0-9_-]{80,}/;
// Google AI / Gemini
const googleAIPattern = /AIzaSy[0-9A-Za-z_-]{33}/;
// Cohere (40 alphanumeric characters: not distinctive on its own, so check context)
const coherePattern = /[a-zA-Z0-9]{40}/;

// Manual checks in browser DevTools:
// Sources tab > Search across all files > "sk-" or "sk-ant-"
// Network tab > Filter XHR/fetch > look for calls to api.openai.com or api.anthropic.com
If the browser makes direct requests to api.openai.com, api.anthropic.com, or similar endpoints, the key is exposed regardless of obfuscation. AuditYour.app scans JavaScript bundles and monitors network calls to detect direct LLM API usage from client code.
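You can automate the search by downloading each bundle and grepping for the patterns above. A minimal Node sketch (the bundle URL is a placeholder for your own app's assets; run with Node 18+ for built-in fetch):
// scan-bundles.ts: hypothetical scanner sketch, run with: npx tsx scan-bundles.ts
const bundleUrls = [
  'https://example.com/_next/static/chunks/main.js', // replace with your real bundle URLs
];
const patterns: Record<string, RegExp> = {
  openai: /sk-(proj-)?[A-Za-z0-9_-]{20,}/g,
  anthropic: /sk-ant-[A-Za-z0-9_-]{80,}/g,
  googleAI: /AIzaSy[0-9A-Za-z_-]{33}/g,
};
for (const url of bundleUrls) {
  const source = await (await fetch(url)).text();
  for (const [provider, pattern] of Object.entries(patterns)) {
    for (const match of source.match(pattern) ?? []) {
      // Truncate the match so the report itself does not leak the key
      console.log(`${url}: possible ${provider} key: ${match.slice(0, 12)}...`);
    }
  }
}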
How to Fix
Never call LLM APIs from the client. Create a backend proxy:
// Next.js API Route — /app/api/chat/route.ts
import Anthropic from '@anthropic-ai/sdk';
import { getServerSession } from 'next-auth'; // pass your authOptions in a real app
import { checkRateLimit } from '@/lib/rate-limit'; // your own rate-limiting helper

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY, // server-side only, never NEXT_PUBLIC_
});

export async function POST(request: Request) {
  // Authenticate before spending tokens on anyone
  const session = await getServerSession();
  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Validate input and rate-limit per user
  const { message } = await request.json();
  if (typeof message !== 'string' || message.length === 0) {
    return Response.json({ error: 'Invalid message' }, { status: 400 });
  }
  await checkRateLimit(session.user.id, { maxRequests: 20, windowMs: 60000 });

  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: message }],
  });

  return Response.json({ response: response.content });
}
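The browser then talks only to your own route, and the Anthropic key never leaves the server. A sketch of the client side (the /api/chat path matches the route above):
// Client-side call: no provider key appears anywhere in this code
async function sendMessage(message: string) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}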
If you have already leaked a key:
- Rotate immediately — generate a new key in your LLM provider dashboard and revoke the old one.
- Check billing — review usage logs for unauthorized consumption and contact the provider to dispute fraudulent charges.
- Set spending limits — configure monthly spend caps in your provider's billing settings.
- Implement monitoring — set up alerts for unusual usage spikes (a minimal budget-tracking sketch follows this list).
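Provider-side spend caps are the backstop, but you can also enforce a budget inside your own proxy. A minimal in-memory sketch (assumes a single server process; swap in Redis or a database for production):
// Per-user daily token budget, checked before each LLM call
const DAILY_TOKEN_BUDGET = 100_000; // illustrative limit per user

const usage = new Map<string, { day: string; tokens: number }>();

export function recordAndCheckBudget(userId: string, tokens: number): boolean {
  const today = new Date().toISOString().slice(0, 10);
  const entry = usage.get(userId);
  const current = entry?.day === today ? entry.tokens : 0;
  if (current + tokens > DAILY_TOKEN_BUDGET) {
    return false; // over budget: reject the request and fire an alert
  }
  usage.set(userId, { day: today, tokens: current + tokens });
  return true;
}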