Free AI Hosting | Free.ai
Host AI models for free. GPU access, API hosting, and cloud deployment.
Managed Cloud
Runs on Free.ai's infrastructure. No setup, no maintenance. All models come pre-deployed and are ready to use through the API or the web UI.
Self-Hosted, Docker-Ready
Run our open-source AI models on your own hardware. Docker images ship with GPU support and are built for production deployment.
Managed Private Cloud
Dedicated GPU clusters operated by us, deployed in your preferred cloud region. Full data isolation and a guaranteed SLA.
Open Source, Self-Hostable
All our models are open source (Apache 2.0 / MIT). You can run them on your own GPU infrastructure:
# Pull and run a model with Docker
docker pull ghcr.io/free-ai/inference:latest
docker run --gpus all -p 8000:8000 ghcr.io/free-ai/inference:latest \
--model qwen2.5-72b --quantization awq
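Once the container is running, the server can be queried over HTTP. A minimal sketch in Python, assuming the container exposes an OpenAI-compatible /v1/chat/completions endpoint on port 8000 (the endpoint path and payload shape are assumptions, not something the page confirms):

```python
import json
import urllib.request

# Hypothetical endpoint; assumes an OpenAI-compatible API on localhost:8000
URL = "http://localhost:8000/v1/chat/completions"

def build_payload(prompt: str, model: str = "qwen2.5-72b") -> dict:
    """Build a chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the Docker container from above to be running):
# print(ask("Summarize the benefits of self-hosting."))
```

The same request works from any OpenAI-compatible client by pointing its base URL at the local server.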
Minimum Requirements
- NVIDIA GPU with 24GB+ VRAM (RTX 4090, A5000, A100)
- CUDA 12.0+ with Docker and the NVIDIA Container Toolkit
- 16GB+ system RAM, 100GB+ storage per model
- For 72B-parameter models: 80GB VRAM (A100) or a multi-GPU setup
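As a rough sanity check on those numbers, weight memory is parameter count times bytes per parameter. A sketch (the 20% overhead multiplier for KV cache and activations is an assumption, not a figure from the page):

```python
def weight_vram_gb(params_billion: float, bits_per_param: float,
                   overhead: float = 1.2) -> float:
    """Estimate VRAM (GB) for model weights plus runtime overhead.

    overhead: assumed 20% extra for KV cache / activations (hypothetical).
    """
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes / 1e9 * overhead

# 72B model in FP16 (16 bits/param): far beyond a single 80GB A100
print(round(weight_vram_gb(72, 16), 1))  # 172.8 -> needs multi-GPU
# 72B model with AWQ 4-bit quantization: fits a single 80GB A100
print(round(weight_vram_gb(72, 4), 1))   # 43.2
```

This is why the `--quantization awq` flag in the Docker command above matters: 4-bit quantization is what brings a 72B model within reach of a single A100.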
Why Self-Host?
- Data sovereignty — Your data never leaves your servers
- No rate limits — Unlimited inference on your hardware
- Compliance — Meet data residency requirements
- Customization — Fine-tune models on your data
- Cost control — Fixed hardware costs, no per-token fees
- Air-gapped — Runs fully offline
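The cost-control point can be made concrete with a break-even estimate. A sketch with entirely hypothetical prices (substitute your own hardware quote and API pricing):

```python
def breakeven_million_tokens(hardware_cost_usd: float,
                             fee_per_million_tokens_usd: float) -> float:
    """Millions of tokens at which owned hardware beats per-token pricing.

    Both inputs are hypothetical examples, not Free.ai's actual prices.
    """
    return hardware_cost_usd / fee_per_million_tokens_usd

# Hypothetical: a $30,000 GPU server vs. $5 per million tokens hosted
print(breakeven_million_tokens(30_000, 5.0))  # 6000.0 -> ~6 billion tokens
```

Past the break-even volume, every additional token on owned hardware is effectively free aside from power, which is the economic case the list above is pointing at.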