Free.ai API

One API key. Every AI tool. Simple token billing.

How It Works

1. Get an API Key: sign up free. Your key starts with sk-free-.
2. Call Any Endpoint: chat, images, TTS, STT, music, translation, all through one API.
3. Pay in Tokens: one balance; every tool costs tokens. Simple.

Quick Start

cURL

# Chat with AI
curl -X POST https://api.free.ai/v1/chat/ \
  -H "Authorization: Bearer sk-free-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "model": "qwen7b"
  }'

# Generate an image
curl -X POST https://api.free.ai/v1/image/generate/ \
  -H "Authorization: Bearer sk-free-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A sunset over mountains", "model": "flux-schnell"}'

# Text to speech
curl -X POST https://api.free.ai/v1/tts/ \
  -H "Authorization: Bearer sk-free-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello world", "voice": "default", "model": "kokoro"}'

# Translate text
curl -X POST https://api.free.ai/v1/translate/ \
  -H "Authorization: Bearer sk-free-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello world", "target": "es"}'
Python

import requests

API_KEY = "sk-free-YOUR_KEY"
BASE = "https://api.free.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Chat
r = requests.post(f"{BASE}/v1/chat/", headers=HEADERS, json={
    "messages": [{"role": "user", "content": "Hello!"}],
    "model": "qwen7b"  # or "openai/gpt-4o", "anthropic/claude-sonnet-4", etc.
})
print(r.json()["choices"][0]["message"]["content"])

# Generate image
r = requests.post(f"{BASE}/v1/image/generate/", headers=HEADERS, json={
    "prompt": "A sunset over mountains",
    "model": "flux-schnell",
    "aspect_ratio": "16:9"
})
print(r.json()["image_url"])

# Text to speech
r = requests.post(f"{BASE}/v1/tts/", headers=HEADERS, json={
    "text": "Hello world",
    "model": "kokoro",
    "voice": "af_heart"
})
print(r.json()["audio_url"])

# Transcribe audio
r = requests.post(f"{BASE}/v1/stt/transcribe/", headers=HEADERS, json={
    "url": "https://example.com/audio.mp3",
    "model": "whisper"
})
print(r.json()["text"])
JavaScript

const API_KEY = "sk-free-YOUR_KEY";
const BASE = "https://api.free.ai";

// Chat
const chat = await fetch(`${BASE}/v1/chat/`, {
  method: "POST",
  headers: { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Hello!" }],
    model: "qwen7b"
  })
});
const data = await chat.json();
console.log(data.choices[0].message.content);

// Generate image
const img = await fetch(`${BASE}/v1/image/generate/`, {
  method: "POST",
  headers: { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "A sunset over mountains", model: "flux-schnell" })
});
console.log((await img.json()).image_url);

Token Pricing

Everything costs tokens. One balance for all tools. Same pricing whether you use the API or the website.

Self-Hosted Models (cheapest)

| Model | Type | Token Cost | License |
|---|---|---|---|
| Qwen 2.5 7B | Chat/Write/Code | Actual tokens used (input + output) | Apache 2.0 |
| FLUX.1 Schnell | Image Generation | 1,000 tokens/image | Apache 2.0 |
| Kokoro | Text to Speech | 1 token per 4 chars | Apache 2.0 |
| faster-whisper | Speech to Text | 4 tokens/second of audio | MIT |
| AudioLDM 2 | Music Generation | 2,000 tokens/track | Apache 2.0 |
| MadLAD-400 | Translation (450+ langs) | Actual tokens used | Apache 2.0 |
| Real-ESRGAN | Image Upscaling | 500 tokens/image | BSD |
| BRIA RMBG | Background Removal | 500 tokens/image | Apache 2.0 |
| CogVideoX | Video Generation | 5,000 tokens/video | Apache 2.0 |
| Demucs | Vocal Separation | 500 tokens/track | MIT |
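As a quick sanity check on the per-unit rates above, here is a small cost-estimator sketch. The rates come from the table; the rounding-up behavior and the helper names are assumptions, not part of the documented API.

```python
import math

def tts_cost(text: str) -> int:
    """Kokoro TTS: 1 token per 4 characters (rounding up is an assumption)."""
    return math.ceil(len(text) / 4)

def stt_cost(seconds: float) -> int:
    """faster-whisper STT: 4 tokens per second of audio."""
    return math.ceil(seconds * 4)

print(tts_cost("Hello world"))  # 11 chars -> 3 tokens
print(stt_cost(90))             # 90 s of audio -> 360 tokens
```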
External Models (346+)

Access GPT-4, Claude, Gemini, Llama, DeepSeek, and 340+ more models. Token cost = the OpenRouter price converted to our tokens with a 30% markup.

| Model | Provider | ~Tokens per message | Notes |
|---|---|---|---|
| GPT-4o Mini | OpenAI | ~20 | Cheap, fast |
| Gemini 2.0 Flash | Google | ~15 | Very fast |
| Mistral Nemo | Mistral | ~10 | Great value |
| DeepSeek V3 | DeepSeek | ~30 | Strong reasoning |
| Llama 3.3 70B | Meta | ~25 | Open weights |
| Claude Sonnet 4 | Anthropic | ~400 | Premium quality |
| GPT-4o | OpenAI | ~325 | Premium quality |
| Qwen 2.5 72B | Alibaba | ~40 | Large, capable |

Full list of 346+ models at /apps/. All use the same /v1/chat/ endpoint — just change the model parameter.

Token Formula

Self-hosted models: You pay the exact tokens used. No markup.

OpenRouter models: our_tokens = openrouter_usd_cost × 100,000 × 1.30

Example: GPT-4o costs $0.0025 per 1K prompt tokens on OpenRouter. For 1,000 tokens: $0.0025 × 100,000 × 1.30 = 325 tokens from your balance.
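The formula above can be sketched as a one-line converter. The constants come straight from the Token Formula section; the function name and the nearest-token rounding are assumptions for illustration.

```python
def openrouter_to_tokens(usd_cost: float, markup: float = 1.30) -> int:
    """Convert an OpenRouter USD cost to Free.ai tokens.

    Documented formula: USD x 100,000 x 1.30 markup.
    Rounding to the nearest whole token is an assumption.
    """
    return round(usd_cost * 100_000 * markup)

# The worked example from the docs: 1,000 GPT-4o prompt tokens
# at $0.0025 per 1K on OpenRouter.
print(openrouter_to_tokens(0.0025))  # 325
```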

All Endpoints

Chat / LLM
POST /v1/chat/ - Chat with any model (self-hosted or external). Streaming supported.

Image
POST /v1/image/generate/ - Text to image (FLUX, SDXL)
POST /v1/image/edit/ - Inpaint, outpaint, style transfer
POST /v1/image/enhance/ - Upscale 2x/4x (Real-ESRGAN)
POST /v1/image/remove-bg/ - Remove background (BRIA RMBG)

Video
POST /v1/video/generate/ - Text/image to video (CogVideoX)

Text to Speech
POST /v1/tts/ - Generate speech (Kokoro, Piper, MeloTTS, Chatterbox)
POST /v1/tts/stream/ - Streaming TTS (real-time audio chunks)

Speech to Text
POST /v1/stt/transcribe/ - Transcribe audio/video (faster-whisper, 99 languages)

Music & Audio
POST /v1/music/generate/ - Generate music from a text description
POST /v1/music/separate/ - Separate vocals/stems (Demucs)

Text Tools
POST /v1/write/ - Generate content (essay, email, story, etc.)
POST /v1/code/generate/ - Generate code in any language
POST /v1/summarize/ - Summarize text
POST /v1/humanize/ - Make AI text sound human
POST /v1/detect/ - Detect AI-generated content

Translation & OCR
POST /v1/translate/ - Translate text (MadLAD-400, 450+ languages)
POST /v1/ocr/ - Extract text from images

Utility
GET /v1/models - List all available models (self-hosted + external)
GET /v1/status/{job_id}/ - Check async job status
GET /health - API health check
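For async jobs (video, music), a typical pattern is to poll GET /v1/status/{job_id}/ until the job settles. This is a sketch only: the "status" field name and its "completed"/"failed" values are assumptions about the response schema, and the fetcher is injected so the loop can be shown without a live HTTP call.

```python
import time

def wait_for_job(fetch_status, job_id: str, interval: float = 1.0,
                 timeout: float = 60.0) -> dict:
    """Poll a status endpoint until the job finishes or times out.

    fetch_status is any callable returning the parsed JSON status body
    (e.g. wrapping requests.get on /v1/status/{job_id}/).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = fetch_status(job_id)
        if body.get("status") in ("completed", "failed"):
            return body
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Fake fetcher standing in for a real HTTP call:
responses = iter([{"status": "pending"}, {"status": "completed", "result": "ok"}])
print(wait_for_job(lambda _id: next(responses), "job-123", interval=0.0))
```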

Authentication

Include your API key in the Authorization header:

Authorization: Bearer sk-free-YOUR_API_KEY

Every response includes a free_ai_usage block showing tokens used:

{
  "choices": [...],
  "free_ai_usage": {
    "tokens_used": 142,      // actual tokens processed
    "tokens_charged": 142,   // tokens deducted from your balance
    "source": "self_hosted", // or "openrouter"
    "model": "qwen7b"
  }
}
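If you want to track spend in code, a small helper can pull the relevant fields out of the free_ai_usage block shown above. The helper name is our own; the field names match the documented response.

```python
def extract_usage(response_json: dict) -> tuple:
    """Pull tokens_charged and source from a Free.ai response body."""
    usage = response_json["free_ai_usage"]
    return usage["tokens_charged"], usage["source"]

# Using the sample response body from the docs above:
sample = {
    "choices": [],
    "free_ai_usage": {
        "tokens_used": 142,
        "tokens_charged": 142,
        "source": "self_hosted",
        "model": "qwen7b",
    },
}
print(extract_usage(sample))  # (142, 'self_hosted')
```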

Rate Limits & Plans

Same token pricing on the website and API. No separate API pricing.

| Plan | Tokens/Month | API Requests/Min | Price |
|---|---|---|---|
| Free | 50K/day (pool) | 10 | $0 |
| Basic | 200K | 30 | $5/mo |
| Pro | 1M | 60 | $19/mo |
| Business | 5M | 120 | $49/mo |
| Enterprise | Custom | Custom | Contact |

Token packs available: 200K/$5, 1M/$15, 5M/$40. Tokens never expire.

Python SDK & CLI

Python SDK

Access every AI tool from your Python code.

pip install free-dot-ai
from freeai import FreeAI

ai = FreeAI(api_key="sk-free-xxx")

# Chat
response = ai.chat("What is Python?")
print(response.text)

# Image generation
image = ai.image("A sunset over mountains")
image.save("sunset.png")

# Text to speech
audio = ai.tts("Hello world", voice="af_heart")
audio.save("hello.mp3")

# Translation
result = ai.translate("Hello", to="es")
print(result.text)  # "Hola"
CLI Coding Assistant

Free, open-source alternative to Claude Code, Cursor, and GitHub Copilot.

pip install free-dot-ai-code
# Start a coding session
cd your-project/
free-code

# Ask about your codebase
free-code ask "How does auth work?"

# Execute a task
free-code run "Add unit tests for User model"

50K free tokens/day. BYOK supported. 346+ models. Session sync with Web IDE.


BYOK (Bring Your Own Key)

Use your own API keys from any provider. Zero markup, zero fees. Free.ai just proxies the request.

| Provider | Key Format | Models | Markup |
|---|---|---|---|
| OpenAI | sk-proj-xxx | GPT-4o, GPT-4o Mini, o1, o3, etc. | $0 |
| Anthropic | sk-ant-xxx | Claude Sonnet 4, Opus 4, Haiku, etc. | $0 |
| Google | AIzaSyxxx | Gemini 2.5 Pro, Flash, etc. | $0 |
| OpenRouter | sk-or-xxx | 346+ models from all providers | $0 |
# Python SDK with BYOK
from freeai import FreeAI

ai = FreeAI(provider="openai", api_key="sk-proj-xxx")
response = ai.chat("Hello", model="gpt-4o")

# CLI with BYOK
# free-code config set provider openai
# free-code config set api_key sk-proj-xxx

Your key, your usage, your bill. No logging. No token deductions from your Free.ai balance.

API FAQ

Is there a free tier?

Yes! Free accounts get 50K tokens/day. That's enough for hundreds of API calls. Paid plans offer more tokens and higher rate limits.

Is API pricing different from website pricing?

No! Same tokens, same pricing. Your token balance is shared between the website and the API. Use either, pay the same.

How do I use GPT-4, Claude, or other external models?

Same endpoint, just change the model parameter. For example: "model": "openai/gpt-4o" or "model": "anthropic/claude-sonnet-4". Full list at /apps/ or GET /v1/models.

Is the API OpenAI-compatible?

Yes! The /v1/chat/ endpoint follows the OpenAI chat completions format. You can use any OpenAI-compatible SDK — just change the base URL to https://api.free.ai and use your Free.ai API key.
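A minimal sketch using the official openai Python SDK, under the compatibility claim above. That the compatible surface lives under https://api.free.ai/v1 is an assumption; adjust the base_url if your account docs say otherwise.

```python
# Assumes: pip install openai, and an OpenAI-compatible surface
# exposed at https://api.free.ai/v1 (an assumption).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.free.ai/v1",
    api_key="sk-free-YOUR_KEY",
)
resp = client.chat.completions.create(
    model="qwen7b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```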

Does the API support streaming?

Yes! Set "stream": true in your chat request. Responses are delivered via Server-Sent Events (SSE).
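SSE streams arrive line by line; a tiny parser sketch is shown below. The b"data: {...}" framing and the b"data: [DONE]" terminator are assumptions based on the OpenAI-style format the API claims to follow.

```python
import json

def parse_sse_line(line: bytes):
    """Parse one Server-Sent Events line from a streaming chat response.

    Returns the decoded JSON chunk, or None for keep-alives and the
    [DONE] terminator (framing assumed OpenAI-style).
    """
    if not line.startswith(b"data: "):
        return None  # comments / blank keep-alive lines
    payload = line[len(b"data: "):]
    if payload.strip() == b"[DONE]":
        return None
    return json.loads(payload)

chunk = parse_sse_line(b'data: {"choices": [{"delta": {"content": "Hi"}}]}')
print(chunk["choices"][0]["delta"]["content"])  # Hi
```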

What happens when I run out of tokens?

You'll get a 402 response with an error message. Buy more tokens at /pricing/ or wait for your daily free pool to reset. Self-hosted models are always available within daily limits.

Can I use generated content commercially?

Yes! All self-hosted models are MIT/Apache 2.0 licensed. Generated content is yours for commercial use.

What is the difference between self-hosted and OpenRouter models?

Self-hosted models run on our GPUs: cheapest, fastest, most private. OpenRouter models are proxied to external providers, giving you access to GPT-4, Claude, Gemini, etc.; they cost more tokens due to external API fees.

How do I track my token usage?

Visit your account page at /account/ or check the free_ai_usage.tokens_charged field in each API response.

Is there an official SDK?

Yes! Install our Python SDK: pip install free-dot-ai. It wraps every endpoint with typed responses. For coding assistance, run pip install free-dot-ai-code. The API also follows OpenAI's format, so you can use the openai Python/Node SDK with our base URL.

How reliable is the API?

We aim for 99.9% uptime. Enterprise plans include SLA guarantees. Check /health for real-time status.

How do I get support?

Email hello@free.ai or visit /contact/. Pro+ plans get priority support. Error responses include an error_id for debugging.
