Free AI Hosting | Free.ai
Host AI models for free. GPU access, API hosting, and cloud deployment.
Cloud Hosted
Use Free.ai's infrastructure. No installation or maintenance. All models come preloaded and ready to use through the API or web UI.
Available Now
Docker Self-Hosted
Run open-source AI models on your own hardware. Docker images with GPU support, optimized for inference.
Self-Service
Managed Private
We manage dedicated GPU servers deployed in your preferred cloud region. Full data isolation and a custom SLA.
Enterprise
Self-Hosted Deployment
All models are open source (Apache 2.0 / MIT). You can run them on your own GPU infrastructure.
# Pull and run a model with Docker
docker pull ghcr.io/free-ai/inference:latest
docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest \
--model qwen2.5-72b --quantization awq
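Once the container is up, it serves an API endpoint on port 8000. As a rough sketch, assuming an OpenAI-style chat route (the exact path and payload shape are assumptions; check the image's documentation):

# Hypothetical request against the local endpoint started above;
# the /v1/chat/completions route is an assumption, not documented here
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-72b", "messages": [{"role": "user", "content": "Hello"}]}'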
Minimum Requirements
- NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100)
- CUDA 12.0+ and Docker with the NVIDIA Container Toolkit (see the check below)
- 16GB+ system RAM and 100GB+ storage per model
- 72B-parameter models: 80GB VRAM (A100) or a multi-GPU setup
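Before pulling the inference image, it is worth confirming that the driver and the NVIDIA Container Toolkit are working. A minimal check using standard tooling (the CUDA image tag is illustrative; substitute any current CUDA 12 base tag):

# Confirm the driver sees the GPU on the host
nvidia-smi
# Confirm Docker can pass the GPU through to a container
docker run --rm --gpus all nvidia/cuda:12.0.1-base-ubuntu22.04 nvidia-smi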
Why Self-Host?
- Data privacy — Your data never leaves your servers
- No rate limits — Unlimited inference on your hardware
- Compliance — Meet data residency requirements
- Customization — Fine-tune models on your data
- Cost control — Fixed hardware costs, no per-token fees
- Air-gapped — Runs fully offline (see the transfer sketch below)
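For air-gapped hosts, the image (and model weights) must be moved across the gap manually. A minimal sketch using standard Docker commands; where the image expects weights on disk is an assumption to verify against the image's documentation:

# On a connected machine: pull and export the image
docker pull ghcr.io/free-ai/inference:latest
docker save ghcr.io/free-ai/inference:latest -o free-ai-inference.tar
# Copy the tarball (plus any downloaded model weights) to the offline host, then:
docker load -i free-ai-inference.tar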
FAQ
What hosting options does Free.ai offer?
Three options: Cloud Hosted (use our infrastructure, zero setup), Docker Self-Hosted (run models on your own GPU hardware), and Managed Private (dedicated GPU servers managed by us in your preferred region).
What hardware do I need to self-host?
You need an NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100), CUDA 12.0+, Docker with the NVIDIA Container Toolkit, 16GB+ system RAM, and 100GB+ storage per model. For 72B-parameter models, you need 80GB VRAM or a multi-GPU setup.
Can self-hosted models run offline?
Yes. Self-hosted deployments run fully offline once the Docker images and model weights are downloaded. This is ideal for air-gapped environments and sensitive data processing.
How do I get started with Docker self-hosting?
Pull our Docker image and run it with GPU support: docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest --model qwen2.5-72b --quantization awq. The container handles model loading and serves an API endpoint.
Can I use the models commercially?
All self-hosted models use permissive open-source licenses -- Apache 2.0, MIT, or BSD. You can use them commercially without restrictions. We deliberately exclude models with restrictive licenses like Meta's Llama license.
What does managed private hosting include?
Managed private hosting gives you dedicated GPU servers in your preferred cloud region, fully managed by our team. We handle setup, patching, model updates, and monitoring. You get full data isolation with an enterprise SLA.
Can I fine-tune the models?
Yes. Since all models are open source, you can fine-tune them on your own data using standard training frameworks like Hugging Face Transformers. Our Docker images are compatible with popular fine-tuning tools.
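One way a fine-tuned checkpoint might then be served is by mounting it into the container. Note that passing a local path to --model is an assumption about the image, not something documented above:

# Hypothetical: mount a fine-tuned checkpoint and point --model at it
docker run --gpus all -p 8000:8000 \
  -v /data/my-finetune:/models/my-finetune \
  ghcr.io/free-ai/inference:latest \
  --model /models/my-finetune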
Is there a trial for managed private hosting?
Contact our sales team to discuss a trial period. We typically offer a short evaluation period for enterprise prospects to test managed private hosting before committing to a long-term plan.
How is each option priced?
Cloud hosting uses standard token-based pricing. Self-hosting is free -- you only pay for your own hardware and electricity. Managed private hosting is priced based on GPU allocation, region, and SLA level.
Can I mix self-hosting with the Free.ai cloud?
Yes. You can self-host specific models for high-volume or sensitive workloads while using the Free.ai cloud for everything else. The API format is identical, making it easy to route requests between your infrastructure and ours.
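Because the request shape is the same, routing can be as simple as swapping the base URL. A sketch in which the cloud hostname, route, and auth header are all assumptions:

# Same request body against both deployments (request.json is a placeholder file)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" -d @request.json
curl https://api.free.ai/v1/chat/completions \
  -H "Authorization: Bearer $FREE_AI_API_KEY" \
  -H "Content-Type: application/json" -d @request.json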
What support do you provide?
We provide documentation, Docker images, and community support for self-hosted deployments. Managed private hosting includes full technical support, monitoring, and a dedicated account manager.
Which option should I choose?
Cloud Hosted is best for teams that want zero maintenance. Self-hosted is ideal for data privacy, compliance, or unlimited usage on your own hardware. Managed Private is the best of both worlds -- full data isolation with no operational burden.