AI Movie Generator

380+ models · No watermark · No sign-up required
Style:
+ GPT-5, Claude, Gemini
Type a story idea. An AI director writes a multi-scene script, renders each scene as a short video clip, then stitches them into one cohesive cut. Best for ~3-5 scene shorts (10-30 seconds total). CogVideoX is free; HunyuanVideo is sharper but takes longer.
2-4 sentences. Include the hook, the vibe, and any tone references.
~3-8 min depending on scene count + model
Want better results? Premium models (GPT-5, Claude, Gemini) deliver higher quality. View Plans

❤️ Love Free.ai? Share it with your friends!

Sign up to get your referral link and earn 25,000 tokens per friend.

Want more? Sign up free for 5K tokens/day + a 10K bonus
Create an account


Turn an idea into a multi-scene story video with free AI. The director agent writes a script, an AI renders each scene, then stitches them into a final cut.

How to use the AI Movie Generator

1
Enter your prompt

Type text, upload a file, or describe what you want. No sign-up required.

2
AI processes your request

Our AI processes your request using the best available open-source models.

3
Download & share

Preview, copy, or share the result. Free for personal or commercial use.

Use this tool via the API

Call this tool from your own code. The REST endpoint is OpenAI-compatible and token-authenticated; no extra SDK is required. Token pricing is the same as in the web UI.

curl -X POST https://api.free.ai/v1/video/generate/ \
  -H "Authorization: Bearer sk-free-..." \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A cat playing piano", "duration": 4}'

AI Movie Generator — FAQ

How does the AI Movie Generator work?

Type a story idea in plain English. An AI director (Qwen3-30B) writes a multi-scene script, then a video model (CogVideoX or HunyuanVideo) renders each scene as a short clip, and ffmpeg stitches them into one continuous movie. You get a single MP4 of 10-30 seconds covering 2-6 scenes — much closer to a real short film than a one-shot text-to-video clip.
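
The final stitching step described above can be sketched with ffmpeg's concat demuxer. This is an illustrative sketch, not the platform's actual code; the clip and output file names are hypothetical.

```python
from pathlib import Path

def build_concat_command(scene_clips, output="movie.mp4", list_file="scenes.txt"):
    """Write an ffmpeg concat list file and return the command that would
    join the scene clips into one continuous MP4 (hypothetical paths)."""
    Path(list_file).write_text(
        "".join(f"file '{clip}'\n" for clip in scene_clips)
    )
    # -c copy avoids re-encoding, which works because every scene clip
    # comes from the same model with identical codec and resolution
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output]

cmd = build_concat_command(["scene_1.mp4", "scene_2.mp4", "scene_3.mp4"])
# import subprocess; subprocess.run(cmd, check=True)  # run only with ffmpeg installed
```
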

How is this different from regular text-to-video?

One-shot text-to-video tools (CogVideoX, HunyuanVideo) only handle a single 4-6 second clip — characters and settings drift between separate generations because each clip has no memory of the others. The Movie Generator chains an LLM director on top of those models, plans a coherent scene-by-scene script, and concatenates the output. Better for stories. /video/generate/ is still better for a single shot.

How much does it cost?

Free up to your daily token pool (5,000 tokens/day for signed-in users, 2,500 for anonymous). A 3-scene movie at CogVideoX quality costs ~15,500 tokens, so it usually needs purchased credits — $5 buys 200K tokens, enough for ~13 movies. Premium HunyuanVideo doubles per-scene cost in exchange for sharper detail.
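
As a back-of-envelope check on the pricing above, using the per-scene figures quoted later in this FAQ (~5,000 tokens/scene on CogVideoX, ~10,000 on HunyuanVideo); the ~500-token director overhead is an assumption chosen to match the ~15,500 figure for a 3-scene movie.

```python
# Quoted per-scene render costs (tokens)
PER_SCENE = {"cogvideox": 5_000, "hunyuan-video": 10_000}
DIRECTOR_OVERHEAD = 500  # assumed LLM script-writing cost

def movie_cost(num_scenes, model="cogvideox"):
    """Estimated token cost of one movie."""
    return DIRECTOR_OVERHEAD + num_scenes * PER_SCENE[model]

def movies_per_pack(pack_tokens=200_000, num_scenes=3, model="cogvideox"):
    """Whole movies a $5 / 200K-token pack covers."""
    return pack_tokens // movie_cost(num_scenes, model)

print(movie_cost(3))        # 15500 tokens for a 3-scene CogVideoX movie
print(movies_per_pack())    # 12 full movies (the page rounds ~12.9 to ~13)
```
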

How long does a movie take to generate?

CogVideoX runs ~60-90 seconds per scene; HunyuanVideo runs ~120-180 seconds per scene. A 3-scene CogVideoX movie typically finishes in 3-5 minutes; a 5-scene HunyuanVideo movie in 12-15 minutes. The progress bar shows live which scene is rendering.

Will characters look consistent across scenes?

Partially. The director writes consistent character descriptions into every scene prompt ("a young woman with red hair, wearing a green coat"), which keeps faces and outfits in the same family. But the underlying video models do not have true character memory — small drifts in face shape and outfit details are still common. For pixel-perfect consistency you would need IP-Adapter conditioning, which is on our roadmap.
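
The consistency technique described above amounts to prompt templating: the same character sheet is prepended to every scene prompt so the video model re-reads an identical identity each time. A minimal sketch (function and variable names are illustrative, not the platform's internals):

```python
def scene_prompts(character_sheet, scene_actions, style="cinematic warm"):
    """Prepend the same style tag and character description to every
    scene prompt. No cross-scene memory is involved: consistency comes
    purely from repeating the description verbatim."""
    return [
        f"{style} style. {character_sheet} {action}"
        for action in scene_actions
    ]

prompts = scene_prompts(
    "A young woman with red hair, wearing a green coat.",
    ["She walks into a rainy street.", "She opens a glowing letter."],
)
```
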

What styles are available?

Cinematic warm, anime / Ghibli-inspired, documentary handheld, noir moody, 3D Pixar-style, vintage 35mm, sci-fi cyberpunk, fantasy ethereal. Each style biases the director's scene-prompt language and the video model's rendering toward that look. Custom styles are supported via direct API calls.

Can I edit the script before rendering?

Not in the UI yet — V1 generates the script and renders the scenes in one shot. The /v1/video/movie/script/ endpoint (script-only) is on the roadmap for users who want to iterate on the script before burning render time. For now the scene prompts are surfaced in the result page so you can see what was generated.

How long can the movies be?

10-30 seconds total in V1 (2-6 scenes × 4 seconds each). Longer films would chain proportionally more scenes — possible but expensive (~5,000 tokens per scene at CogVideoX, ~10,000 at HunyuanVideo). For 1-minute+ films we recommend rendering individual scenes at /video/generate/ and editing them yourself in DaVinci or CapCut.

Does the movie have sound?

V1 outputs silent video. For voiceover, run the script through /voice/tts/ with a narrator voice and add the audio track in DaVinci / CapCut / iMovie. For a soundtrack, /music/generate/ produces royalty-free instrumental tracks. Native synchronized audio generation is on the roadmap (depends on premium models like Veo or Sora-style audio-aware generation).

Can I share the finished movie?

Yes — every generation creates a one-shot share link at /share/<token>/ that lasts 7 days for paid users (24h for anonymous). The token is unguessable (22-char URL-safe), and the share page renders the video with optional captioning. Same share UX as the rest of the platform.
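
A 22-character URL-safe token is exactly what 16 random bytes produce under URL-safe Base64 with padding stripped (128 bits of entropy). A sketch of how such a token could be generated — not necessarily the platform's implementation:

```python
import secrets

def make_share_token():
    # 16 random bytes -> 22-character URL-safe string (Base64, padding
    # stripped): 128 bits of entropy, unguessable for practical purposes
    return secrets.token_urlsafe(16)

token = make_share_token()
print(len(token))  # 22
```
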

How does this compare to Runway, Sora, or Veo?

Runway Gen-3 ($15/month minimum, single-clip text-to-video, gorgeous output but no story planning), Sora (still invite-only, exceptional quality), Veo (Google, premium API only). They produce sharper individual clips. The Movie Generator is the script+stitch layer on top of free open-source models — better when you need a story arc, worse when you need a single perfect 10-second shot. Use both: write the script here, pay for one Sora clip if a specific shot needs to be magazine-quality.

Is there an API?

Yes — POST /v1/video/movie/ with {idea, style, num_scenes, video_model}. Returns {output_url, scenes, share_url, tokens}. Heavy queued endpoint, expect 3-15 min depending on scene count + model. Bearer auth via developer keys. See /api/ for full snippets and the /v1/video/movie/status/ progress endpoint.
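
The request described above can be built with nothing but the Python standard library. This is a sketch assuming the field names quoted in the answer; swap in a real key before actually sending it, and poll the status endpoint rather than blocking on this queued call.

```python
import json
import urllib.request

API_URL = "https://api.free.ai/v1/video/movie/"

def submit_movie(idea, style="cinematic warm", num_scenes=3,
                 video_model="cogvideox", api_key="sk-free-..."):
    """Build the POST request for the movie endpoint; sending it is
    left to the caller."""
    payload = json.dumps({
        "idea": idea,
        "style": style,
        "num_scenes": num_scenes,
        "video_model": video_model,
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = submit_movie("A lighthouse keeper befriends a storm")
# with urllib.request.urlopen(req) as resp:   # heavy queued endpoint —
#     result = json.load(resp)                 # prefer the status endpoint
```
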

Sign up free and get 10,000 tokens

Create a free account

No credit card required

How is this tool free?
