How to Detect Unauthorized Usage of Your AI APIs

"My Startup Got a $14,000 OpenAI Bill Overnight"

I'll never forget the panic in my friend's voice when he called me at 3 AM last month. His Y Combinator startup woke up to a $14,000 OpenAI charge - someone had scraped their API key from a public GitHub commit and was running massive GPT-4 inference jobs.

This isn't rare. In 2024 alone, we've seen:

- A developer accidentally commit a Claude API key, leading to $8,700 in unauthorized Anthropic charges
- A Google Gemini key exposed in a Jupyter notebook, resulting in account suspension
- Hundreds of Mistral and Cohere keys leaked in public Dockerfiles

Why AI APIs Are High-Risk Targets

Unlike traditional APIs, AI services like OpenAI and Anthropic Claude:

1. Cost far more per call (GPT-4-class output can run around $0.06 per 1K tokens)
2. Have no automatic spending limits by default
3. Get scraped constantly by bots hunting for exposed keys
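To make that cost risk concrete, here's a back-of-envelope sketch of what a scraped key can burn through. The price constant is an illustrative assumption, not a current rate card:

```python
# Back-of-envelope cost of a leaked key being abused.
# The price below is an illustrative assumption, not a current rate card.
PRICE_PER_1K_OUTPUT_TOKENS = 0.06  # roughly GPT-4-class output pricing

def abuse_cost(requests: int, avg_output_tokens: int,
               price_per_1k: float = PRICE_PER_1K_OUTPUT_TOKENS) -> float:
    """Estimated dollar cost of `requests` calls averaging `avg_output_tokens` each."""
    return requests * avg_output_tokens / 1000 * price_per_1k

# 50,000 scraped-key requests at ~4,000 output tokens each:
print(f"${abuse_cost(50_000, 4_000):,.2f}")  # → $12,000.00
```

At those rates, a bot hammering a leaked key for a single night lands squarely in five-figure-bill territory.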

Here's what secure vs insecure API usage looks like in Python:

```python
# UNSAFE (key hardcoded in script)
import openai

openai.api_key = "sk-your-key-here"  # 🚨 Never do this!
```

```python
# SAFE (key loaded from environment)
import os

from dotenv import load_dotenv
import openai

load_dotenv()  # Load variables from a .env file
openai.api_key = os.getenv("OPENAI_API_KEY")  # ✅ Best practice
```

5 Ways to Detect Unauthorized API Usage

1. Monitor Usage Metrics Daily

All major AI providers offer usage dashboards:

- OpenAI: check your Usage Limits page
- Anthropic: view Organization Metrics
- Google Gemini: use Cloud Monitoring

Set up alerts for:

- Unusual request spikes (e.g., 3 AM traffic when your team sleeps)
- Geographic anomalies (requests from countries you don't operate in)
- Model tier jumps (sudden GPT-4 usage if you only use GPT-3.5)
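Dashboards are only useful if someone looks at them. A minimal sketch of automated spike detection, assuming you already log request counts per hour somewhere (the data and z-score threshold here are made up for illustration):

```python
from statistics import mean, stdev

def is_spike(hourly_counts: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Flag `latest` hour's request count if it exceeds the historical
    mean by more than `z_threshold` standard deviations."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts) or 1.0  # avoid divide-by-zero on flat history
    return (latest - mu) / sigma > z_threshold

history = [40, 55, 38, 47, 52, 44, 49, 51]  # a normal week of hourly traffic
print(is_spike(history, 48))   # typical hour → False
print(is_spike(history, 900))  # scraped-key burst → True
```

Wire the `True` branch to whatever paging tool you use; a leaked key tends to show up as a spike hundreds of standard deviations out, so false positives are rarely the problem.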

2. Implement Rate Limiting

Add this to your API middleware:

```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter  # slowapi looks the limiter up on app state
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/chat")
@limiter.limit("50/hour")  # Adjust based on your needs
async def chat_completion(request: Request):
    # Your AI logic here
    return {"status": "ok"}
```

3. Rotate Keys Monthly

Treat API keys like passwords:

- Create the new key before revoking the old one (major providers support multiple active keys, so rotation can be zero-downtime)
- Revoke the old key as soon as traffic has migrated to the new one
- Google Gemini keys can be cycled via the Cloud Console
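Rotation is painless if your code reads the key at call time instead of caching it at import. A minimal sketch, assuming the key lives in an environment variable that your secret manager or deploy tooling updates (the helper name is mine):

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read the key at call time so a rotated value in the environment
    (or injected by your secret manager) is picked up without a redeploy."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set")
    return key

os.environ["OPENAI_API_KEY"] = "sk-old-key"
print(get_api_key())  # → sk-old-key
os.environ["OPENAI_API_KEY"] = "sk-new-key"  # rotation: swap the secret, no code change
print(get_api_key())  # → sk-new-key
```

The anti-pattern is `API_KEY = os.getenv(...)` at module level: that copy survives until the next restart, which is exactly when you don't want stale credentials floating around.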

Tools like Leaked.now scan GitHub 24/7 to detect exposed keys before attackers find them - I run it weekly for my team.

4. Audit Your GitHub History

Run this command to check for accidental commits:

```bash
# Search git history (all branches) for API key patterns.
# Note: Anthropic keys start with "sk-ant-", not "claude-".
git log --all -p | grep -E 'sk-[A-Za-z0-9]{20,}|sk-ant-[A-Za-z0-9_-]{20,}|AIzaSy[A-Za-z0-9_-]{33}'
```
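If you'd rather scan files from Python (say, as a pre-commit hook), the same patterns translate directly. These regexes are heuristics keyed to common prefixes, not official formats, so adjust them per provider:

```python
import re

# Heuristic key patterns; tune to your providers' current key formats.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "google": re.compile(r"AIzaSy[A-Za-z0-9_-]{33}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (provider, matched_key) pairs found in `text`."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        hits.extend((provider, m) for m in pattern.findall(text))
    return hits

# Fake key, assembled at runtime so this file itself never trips a scanner:
sample = 'openai.api_key = "sk-' + "A" * 24 + '"'
print(scan_text(sample))
```

Run it over staged files in a pre-commit hook and reject the commit on any hit; catching a key before it lands in history is far cheaper than scrubbing it out afterward.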

5. Set Up Budget Alerts

For Google Gemini:

```bash
# Budgets live under "gcloud billing budgets"; threshold percents are fractions (0.5 = 50%)
gcloud billing budgets create \
  --billing-account=YOUR_BILLING_ACCOUNT_ID \
  --display-name="Gemini Budget" \
  --budget-amount=1000USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --filter-projects=projects/your-project-id
```
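Budget alerts fire after money is spent. For a harder stop, you can also track estimated spend in-process and refuse calls past a cap. A sketch, with per-request cost estimates as an assumption you'd tune to your real rate card:

```python
class SpendGuard:
    """Track estimated spend in-process and refuse calls past a hard cap.
    Per-request cost estimates are assumptions; tune them to your rate card."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, est_cost_usd: float) -> bool:
        """Record a request's estimated cost; return False if it would exceed the cap."""
        if self.spent_usd + est_cost_usd > self.cap_usd:
            return False
        self.spent_usd += est_cost_usd
        return True

guard = SpendGuard(cap_usd=1.00)
print(guard.charge(0.60))  # → True  (running total $0.60)
print(guard.charge(0.60))  # → False (would exceed the $1.00 cap)
```

In a multi-process deployment you'd back the counter with Redis or similar, but even a per-process cap turns a runaway bill into a bounded one.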

Immediate Actions If You Suspect Abuse

  1. Rotate all keys immediately (don't just disable - attackers may have copied them)
  2. Check your billing for unexpected model usage (e.g., GPT-4 instead of GPT-3.5)
  3. Contact support - OpenAI and Anthropic often forgive first-time incidents
  4. Review access logs for suspicious IPs (especially cloud providers like AWS, Azure)

Key Takeaways

✅ Never hardcode API keys: use environment variables or secret managers
✅ Monitor usage daily with provider dashboards and custom alerts
✅ Rotate keys monthly and audit GitHub history regularly
✅ Use tools like Leaked.now to detect exposed keys
✅ Set budget caps before you learn the hard way

The scary truth? Most teams discover API leaks only after getting massive bills. By implementing these practices today, you'll sleep better knowing your AI access is secure.


Don't Let Your API Keys Get Stolen

Every day, hundreds of API keys are leaked on GitHub. Leaked.now helps you find and secure exposed credentials before attackers do.

🔐 Monitors OpenAI, Claude, Gemini, Mistral & more
🚨 Instant alerts when keys are found
📧 Responsible disclosure to protect developers

Start Monitoring Free →