AI Movie Generator

Commercial use OK · 380+ models · No watermark · No signup required
Models: GPT-5, Claude, Gemini
Type a story idea. An AI director writes a multi-scene script, renders each scene as a short video clip, then stitches them into one cohesive cut. Best for ~3-5 scene shorts (10-30 seconds total). CogVideoX is free; HunyuanVideo is sharper but takes longer.
2-4 sentences. Include the hook, the vibe, and any tone references.
~3-15 min depending on scene count + model

Turn an idea into a multi-scene story video with free AI. The director agent writes a script, a video model renders each scene, and ffmpeg stitches the clips into a final cut.

How to Use the AI Movie Generator

1. Enter Your Input

Type text, upload a file, or describe what you want. No account needed.

2. Click Generate

Our AI processes your request using the best open-source models (a full movie takes a few minutes).

3. Download & Share

Download, copy, or share your result. Free for personal and commercial use.

Use This Tool via API

Automate this tool from your own code: an OpenAI-compatible REST endpoint with Bearer-token auth, no extra SDK required. Token costs match the web interface.

# Parameters per the FAQ below; the exact video_model string is illustrative.
curl -X POST https://api.free.ai/v1/video/movie/ \
  -H "Authorization: Bearer sk-free-..." \
  -H "Content-Type: application/json" \
  -d '{"idea": "A cat playing piano", "style": "cinematic warm", "num_scenes": 3, "video_model": "cogvideox"}'

AI Movie Generator — FAQ

How does it work?

Type a story idea in plain English. An AI director (Qwen3-30B) writes a multi-scene script, then a video model (CogVideoX or HunyuanVideo) renders each scene as a short clip, and ffmpeg stitches them into one continuous movie. You get a single MP4 of 10-30 seconds covering 2-6 scenes — much closer to a real short film than a one-shot text-to-video clip.
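
The stitch step is ordinary ffmpeg concatenation. Purely as an illustration (the filenames and flags below are hypothetical, not the service's actual pipeline code), the final cut is assembled roughly like this:

printf "file 'scene_1.mp4'\nfile 'scene_2.mp4'\nfile 'scene_3.mp4'\n" > scenes.txt
# The concat demuxer joins the clips without re-encoding (-c copy).
ffmpeg -f concat -safe 0 -i scenes.txt -c copy movie.mp4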

How is this different from one-shot text-to-video?

One-shot text-to-video tools (CogVideoX, HunyuanVideo) only handle a single 4-6 second clip — characters and settings drift between separate generations because each clip has no memory of the others. The Movie Generator chains an LLM director on top of those models, plans a coherent scene-by-scene script, and concatenates the output. It is better for stories; /video/generate/ is still better for a single shot.

Is it free?

Free up to your daily token pool (5,000 tokens/day for signed-in users, 2,500 for anonymous). A 3-scene movie at CogVideoX quality costs ~15,500 tokens, so it usually needs purchased credits — $5 buys 200K tokens, enough for ~13 movies. Premium HunyuanVideo doubles per-scene cost in exchange for sharper detail.
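
The arithmetic, using the per-scene costs quoted later in this FAQ (treating the ~500-token gap as director-script overhead):

# 3 scenes x ~5,000 tokens (CogVideoX) + script overhead ≈ 15,500 tokens/movie
awk 'BEGIN { printf "%.1f movies per $5 pack\n", 200000 / 15500 }'   # ≈ 12.9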

How long does a movie take?

CogVideoX runs ~60-90 seconds per scene; HunyuanVideo runs ~120-180 seconds per scene. A 3-scene CogVideoX movie typically finishes in 3-5 minutes; a 5-scene HunyuanVideo movie in 12-15 minutes. The progress bar shows in real time which scene is rendering.

Do characters stay consistent across scenes?

Partially. The director writes consistent character descriptions into every scene prompt ("a young woman with red hair, wearing a green coat"), which keeps faces and outfits in the same family. But the underlying video models do not have true character memory — small drifts in face shape and outfit details are still common. For pixel-perfect consistency you would need IP-Adapter conditioning, which is on our roadmap.

What visual styles are available?

Cinematic warm, anime / Ghibli-inspired, documentary handheld, noir moody, 3D Pixar-style, vintage 35mm, sci-fi cyberpunk, fantasy ethereal. Each style biases the director's scene-prompt language and the video model's rendering toward that look. Custom styles are supported via direct API calls.

Can I edit the script before rendering?

Not in the UI yet — V1 generates the script and renders the scenes in one shot. The /v1/video/movie/script/ endpoint (script-only) is on the roadmap for users who want to iterate on the script before burning render time. For now the scene prompts are surfaced on the result page so you can see what was generated.

How long can the movie be?

10-30 seconds total in V1 (2-6 scenes × 4 seconds each). Longer films would chain proportionally more scenes — possible but expensive (~5,000 tokens per scene at CogVideoX, ~10,000 at HunyuanVideo). For 1-minute+ films we recommend rendering individual scenes at /video/generate/ and editing them yourself in DaVinci or CapCut.

Does it include audio?

V1 outputs silent video. For voiceover, run the script through /voice/tts/ with a narrator voice and add the audio track in DaVinci / CapCut / iMovie. For a soundtrack, /music/generate/ produces royalty-free instrumental tracks. Native synchronized audio generation is on the roadmap (it depends on premium models like Veo or Sora-style audio-aware generation).
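
If you would rather stay on the command line than open DaVinci or CapCut, the same mux can be done with ffmpeg (our suggested editor workflow swapped for a one-liner; filenames are placeholders):

# Copy the video stream, encode narration to AAC, stop at the shorter input.
ffmpeg -i movie.mp4 -i narration.mp3 -c:v copy -c:a aac -shortest movie_voiced.mp4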

Can I share the result?

Yes — every generation creates a share link at /share/<token>/ that lasts 7 days for paid users (24h for anonymous). The token is unguessable (22-char URL-safe), and the share page renders the video with optional captioning. Same share UX as the rest of the platform.

How does it compare to Runway, Sora, and Veo?

Runway Gen-3 ($15/month minimum, single-clip text-to-video, gorgeous output but no story planning), Sora (still invite-only, exceptional quality), and Veo (Google, premium API only) all produce sharper individual clips. The Movie Generator is the script+stitch layer on top of free open-source models — better when you need a story arc, worse when you need a single perfect 10-second shot. Use both: write the script here, and pay for one Sora clip if a specific shot needs to be magazine-quality.

Is there an API?

Yes — POST /v1/video/movie/ with {idea, style, num_scenes, video_model}. Returns {output_url, scenes, share_url, tokens}. It is a heavy queued endpoint; expect 3-15 min depending on scene count and model. Bearer auth via developer keys. See /api/ for full snippets and the /v1/video/movie/status/ progress endpoint.
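
For long renders you can watch progress from code as well. The FAQ names the /v1/video/movie/status/ endpoint but not its request shape, so the job-id query parameter below is an assumption for illustration, not documented API:

# Hypothetical request shape: poll until the job reports completion.
curl -s "https://api.free.ai/v1/video/movie/status/?id=JOB_ID" \
  -H "Authorization: Bearer sk-free-..."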
