Free AI Hosting | Free.ai
Host AI models for free. GPU access, API hosting, and cloud deployment.
Cloud Hosted
Use Free.ai's infrastructure. Zero setup, zero maintenance. All models come preloaded and ready to use via API or Web UI.
Docker Self-Hosted (available now)
Run our open-source AI models on your own hardware. Docker images with GPU support, optimized for inference.
Managed Private
Dedicated GPU servers managed by our team, deployed in your preferred cloud region, with full data isolation and a custom SLA.
Self-Hosted Deployment
All of our models are open source (Apache 2.0/MIT). You can run them on your own GPU infrastructure:
# Pull and run a model with Docker
docker pull ghcr.io/free-ai/inference:latest
docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest \
--model qwen2.5-72b --quantization awq
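Once the container is up, you can smoke-test it from the host. This is a minimal sketch: the /health route and the OpenAI-style /v1/chat/completions path are assumptions about the inference image, not documented endpoints, so adjust them to whatever the container actually exposes.
# Assumed health check and chat request -- endpoint paths are illustrative
curl http://localhost:8000/health
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-72b", "messages": [{"role": "user", "content": "Hello"}]}'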
Minimum Requirements
- NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100)
- CUDA 12.0+ and Docker with the NVIDIA Container Toolkit (a quick check is sketched after this list)
- 16GB+ system RAM, 100GB+ storage per model
- 72B-parameter models: 80GB VRAM (A100) or a multi-GPU setup
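Before pulling the inference image, it is worth confirming that Docker can actually reach the GPU. A minimal sketch, assuming the NVIDIA driver and the NVIDIA Container Toolkit are already installed and registered with Docker:
# Check the driver and GPU on the host
nvidia-smi
# Confirm containers get GPU access through the NVIDIA Container Toolkit
docker run --rm --gpus all ubuntu nvidia-smi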
Why Self-Host?
- Data privacy -- your data never leaves your servers
- No rate limits -- unlimited inference on your hardware
- Compliance -- meet data residency requirements
- Customization -- fine-tune models on your data
- Cost control -- fixed hardware costs, no per-token fees
- Air-gapped -- runs fully offline
FAQ
What hosting options does Free.ai offer?
Three options: Cloud Hosted (use our infrastructure, zero setup), Docker Self-Hosted (run models on your own GPU hardware), and Managed Private (dedicated GPU servers managed by us in your preferred region).
What hardware do I need to self-host?
You need an NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100), CUDA 12.0+, Docker with the NVIDIA Container Toolkit, 16GB+ system RAM, and 100GB+ storage per model. For 72B parameter models, you need 80GB VRAM or a multi-GPU setup.
Can self-hosted deployments run offline?
Yes. Self-hosted deployments run fully offline once the Docker images and model weights are downloaded. This is ideal for air-gapped environments and sensitive data processing.
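For a truly air-gapped host, the image can be pulled on a connected machine and carried over on removable media. A minimal sketch using standard Docker commands; the archive filename is just an example, and model weights need to be transferred the same way (the download mechanism for weights is not specified here).
# On a machine with internet access: pull the image and export it to a tar archive
docker pull ghcr.io/free-ai/inference:latest
docker save ghcr.io/free-ai/inference:latest -o free-ai-inference.tar
# On the air-gapped host: load the archive, then run as usual
docker load -i free-ai-inference.tar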
How do I deploy a model with Docker?
Pull our Docker image and run it with GPU support. The command is: docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest --model qwen2.5-72b --quantization awq. The container handles model loading and serves an API endpoint.
Can I use self-hosted models commercially?
All self-hosted models use permissive open-source licenses -- Apache 2.0, MIT, or BSD. You can use them commercially without restrictions. We deliberately exclude models with restrictive licenses like Meta's Llama license.
What does managed private hosting include?
Managed private hosting gives you dedicated GPU servers in your preferred cloud region, fully managed by our team. We handle setup, patching, model updates, and monitoring. You get full data isolation with an enterprise SLA.
Can I fine-tune the models?
Yes. Since all models are open-source, you can fine-tune them on your own data using standard training frameworks like Hugging Face Transformers. Our Docker images are compatible with popular fine-tuning tools.
Is there a trial for managed private hosting?
Contact our sales team to discuss a trial period. We typically offer a short evaluation period for enterprise prospects to test managed private hosting before committing to a long-term plan.
How is each option priced?
Cloud hosting uses standard token-based pricing. Self-hosted is free -- you only pay for your own hardware and electricity. Managed private hosting is priced based on GPU allocation, region, and SLA level.
Can I combine self-hosting with the Free.ai cloud?
Yes. You can self-host specific models for high-volume or sensitive workloads while using the Free.ai cloud for everything else. The API format is identical, making it easy to route requests between your infrastructure and ours.
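Because the request format is the same, routing can be as simple as swapping the base URL. A sketch under stated assumptions: the OpenAI-style path, the bearer-token header, and the api.free.ai hostname are illustrative placeholders, not confirmed values.
# Same request body, two base URLs -- paths, hostname, and auth header are assumptions
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-72b", "messages": [{"role": "user", "content": "Ping"}]}'
curl https://api.free.ai/v1/chat/completions \
  -H "Authorization: Bearer $FREE_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-72b", "messages": [{"role": "user", "content": "Ping"}]}'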
What support is available?
We provide documentation, Docker images, and community support for self-hosted deployments. Managed private hosting includes full technical support, monitoring, and a dedicated account manager.
Which option should I choose?
Cloud hosted is best for teams that want zero maintenance. Self-hosted is ideal for data privacy, compliance, or unlimited usage on your own hardware. Managed private is the best of both worlds -- full data isolation with no operational burden.