Free AI Hosting | Free.ai

Host AI models for free. GPU access, API hosting, and cloud deployment.

Cloud Hosted

Use the Free.ai infrastructure. Zero setup, zero maintenance. All models come pre-loaded and ready to use via the API or the web UI.

Available now

Docker Self-Hosted

Run our open-source AI models on your own hardware. Docker images ship with GPU support and are optimized for inference.

Self-service

Managed Private

Dedicated GPU servers managed by us, deployed in your preferred cloud region. Complete data isolation and a custom SLA.

Enterprise

Self-Hosted Deployment

All our models are open-source (Apache 2.0 / MIT). You can run them on your own GPU infrastructure:

# Pull and run a model with Docker
docker pull ghcr.io/free-ai/inference:latest
docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest \
  --model qwen2.5-72b --quantization awq

Minimum requirements
  • NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100)
  • CUDA 12.0+ and Docker with the NVIDIA Container Toolkit
  • 16GB+ system RAM, 100GB+ storage per model
  • For 72B-parameter models: an 80GB VRAM setup (A100) or multi-GPU (see the rough sizing sketch below)
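
As a rough illustration of why 72B-parameter models need the larger setup, the sketch below estimates weight memory from parameter count and quantization width. This is back-of-envelope arithmetic under stated assumptions (weights only; real deployments also need KV-cache and activation headroom), not official sizing guidance.

# Back-of-envelope VRAM estimate for model weights only.
# ASSUMPTION: 1 GB = 10^9 bytes; KV cache and activations add
# further overhead (often 20-50%), which is why a 4-bit 72B model
# still calls for an 80GB card or multiple GPUs.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    # params * (bits/8) bytes per param, expressed directly in GB
    return params_billions * bits_per_param / 8

for name, params, bits in [
    ("qwen2.5-72b, FP16", 72, 16),      # ~144 GB: multi-GPU territory
    ("qwen2.5-72b, AWQ 4-bit", 72, 4),  # ~36 GB: 80GB-class GPU with headroom
    ("7B model, AWQ 4-bit", 7, 4),      # ~3.5 GB: fits a 24GB consumer GPU
]:
    print(f"{name}: ~{weight_gb(params, bits):.1f} GB for weights alone")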

Why self-host?

  • Data privacy -- Your data never leaves your servers
  • No rate limits -- Unlimited inference on your hardware
  • Compliance -- Meet data residency requirements
  • Customization -- Fine-tune models on your own data
  • Cost control -- Fixed hardware costs, no per-token fees
  • Air-gapped -- Runs fully offline

FAQ

What hosting options does Free.ai offer?

Three options: Cloud Hosted (use our infrastructure, zero setup), Docker Self-Hosted (run models on your own GPU hardware), and Managed Private (dedicated GPU servers managed by us in your preferred region).

What hardware do I need to self-host?

You need an NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100), CUDA 12.0+, Docker with the NVIDIA Container Toolkit, 16GB+ system RAM, and 100GB+ storage per model. For 72B parameter models, you need 80GB VRAM or a multi-GPU setup.

Can the models run fully offline?

Yes. Self-hosted deployments run fully offline once the Docker images and model weights are downloaded. This is ideal for air-gapped environments and sensitive data processing.

How do I get started with self-hosting?

Pull our Docker image and run it with GPU support. The command is: docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest --model qwen2.5-72b --quantization awq. The container handles model loading and serves an API endpoint.
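
As a minimal sketch of calling the local endpoint once the container is running: the example below assumes the server exposes an OpenAI-compatible /v1/chat/completions route on port 8000. The route and payload shape are assumptions; confirm the actual API format in the Free.ai documentation.

# Minimal sketch: query the self-hosted container from Python.
# ASSUMPTION: an OpenAI-compatible chat-completions route; verify
# the real path and schema against the Free.ai docs.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "qwen2.5-72b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])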

What licenses do the models use?

All self-hosted models use permissive open-source licenses -- Apache 2.0, MIT, or BSD. You can use them commercially without restrictions. We deliberately exclude models with restrictive licenses like Meta's Llama license.

What is managed private hosting?

Managed private hosting gives you dedicated GPU servers in your preferred cloud region, fully managed by our team. We handle setup, patching, model updates, and monitoring. You get full data isolation with an enterprise SLA.

Can I fine-tune the models?

Yes. Since all models are open-source, you can fine-tune them on your own data using standard training frameworks like Hugging Face Transformers. Our Docker images are compatible with popular fine-tuning tools.
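
As a hedged illustration of that workflow, the sketch below runs a LoRA fine-tune with Hugging Face Transformers and PEFT. The model name, dataset file, and hyperparameters are placeholders to adapt; nothing here is specific to Free.ai's images.

# Minimal LoRA fine-tuning sketch (Hugging Face Transformers + PEFT).
# ASSUMPTIONS: 'Qwen/Qwen2.5-0.5B' is a small stand-in model and
# 'my_corpus.txt' a placeholder plain-text dataset; swap in your own.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(model_name),
    LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = load_dataset(
    "text", data_files={"train": "my_corpus.txt"}
)["train"].map(tokenize, batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("lora-out")  # writes only the small adapter weights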

Is there a trial for managed private hosting?

Contact our sales team to discuss a trial period. We typically offer a short evaluation period for enterprise prospects to test managed private hosting before committing to a long-term plan.

How is pricing structured?

Cloud hosting uses standard token-based pricing. Self-hosted is free -- you only pay for your own hardware and electricity. Managed private hosting is priced based on GPU allocation, region, and SLA level.

Can I combine cloud and self-hosted deployments?

Yes. You can self-host specific models for high-volume or sensitive workloads while using the Free.ai cloud for everything else. The API format is identical, making it easy to route requests between your infrastructure and ours.
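
Since the two API formats are identical, routing can reduce to choosing a base URL per request. The sketch below illustrates that pattern; the cloud base URL, route, and sensitivity rule are placeholders, not documented values.

# Minimal routing sketch: sensitive traffic stays on the self-hosted
# endpoint, everything else goes to the Free.ai cloud.
# ASSUMPTIONS: base URLs, route, and payload shape are placeholders.
import requests

SELF_HOSTED = "http://localhost:8000"
CLOUD = "https://api.free.ai"  # placeholder; use the documented base URL

def complete(prompt: str, sensitive: bool) -> str:
    base = SELF_HOSTED if sensitive else CLOUD
    resp = requests.post(
        f"{base}/v1/chat/completions",
        json={"model": "qwen2.5-72b",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(complete("Summarize this internal memo: ...", sensitive=True))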

What support do you provide?

We provide documentation, Docker images, and community support for self-hosted deployments. Managed private hosting includes full technical support, monitoring, and a dedicated account manager.

Which hosting option should I choose?

Cloud hosted is best for teams that want zero maintenance. Self-hosted is ideal for data privacy, compliance, or unlimited usage on your own hardware. Managed private is the best of both worlds -- full data isolation with no operational burden.
