AI Motion Capture
Commercial use OK
380+ models
No watermark
No signup required
Models:
+ GPT-5, Claude, Gemini
Upload any video with a person in it — AI tracks 33 body keypoints per frame and gives you a skeleton overlay video plus a JSON of joint positions for every frame. No mocap suit, no markers, no calibration. Single-camera markerless motion capture via MediaPipe.
Advanced Options
Results
Running low on tokens?
Get More Tokens
❤️ Love this tool? Share it!
Sign up (/signup/) to get a referral link and earn 25,000 tokens for each friend you refer.
Want more?
Sign up free for 5K tokens daily + a 10K bonus
Sign Up Free
Upload a video and the AI extracts per-frame body pose (with relative depth) using MediaPipe. Get back a skeleton overlay video plus a per-frame keypoints JSON for animation, sports analysis, or biomechanics. Free, no markers, no mocap suit.
How to Use AI Motion Capture
1
Enter Your Input
Type text, upload a file, or describe what you want. No account needed.
2
Click Generate
Our AI processes your request in seconds using the best open-source models.
3
Download & Share
Download, copy, or share your results. Free for personal and commercial use.
AI Motion Capture — FAQ
What does AI Motion Capture do?
Drop in a video with a person in frame and the AI tracks 33 body joints — head, shoulders, elbows, wrists, hips, knees, ankles, plus hands and feet — in every frame. You get back a skeleton overlay video plus a JSON file with the per-frame joint coordinates. No mocap suit, no markers, no calibration step; a single camera works fine.
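To give a feel for the output, here is a minimal Python sketch that walks the per-frame keypoints JSON. The field names (`frames`, `landmarks`, `frame_index`) are illustrative assumptions; check your downloaded file for the actual schema.

```python
import json

# Hypothetical schema: {"frames": [{"frame_index": 0, "landmarks": [{...} x33]}, ...]}
with open("keypoints.json") as f:
    frames = json.load(f)["frames"]

for frame in frames:
    # Index 16 is the right wrist in MediaPipe's 33-landmark layout
    wrist = frame["landmarks"][16]
    print(frame["frame_index"], wrist["x"], wrist["y"], wrist["z"], wrist["visibility"])
```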
What can I use the results for?
Drive 3D character animations in Blender / Unity / Unreal (retarget to a rigged armature), do sports / dance / martial-arts technique analysis, build form-correction overlays, train ML models on movement data, or just visualize movement patterns over time.
What model powers it?
MediaPipe Pose Landmarker (Google, Apache 2.0). It outputs 33 body keypoints per frame in normalized 2D coords, plus an estimated Z (relative depth from the camera) and per-keypoint visibility scores. It runs entirely on CPU, so the GPU stays free for your other generations.
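Since the model is open source, you can reproduce the core of the pipeline yourself. A minimal sketch using MediaPipe's Python pose solution (our server pipeline adds batching and overlay rendering, so treat this as an approximation rather than the exact implementation):

```python
import cv2
import mediapipe as mp

cap = cv2.VideoCapture("input.mp4")
with mp.solutions.pose.Pose(model_complexity=1) as pose:
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV decodes to BGR
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            for i, lm in enumerate(results.pose_landmarks.landmark):
                # x/y are normalized [0, 1]; z is relative depth;
                # visibility scores how confidently the joint was seen
                print(i, lm.x, lm.y, lm.z, lm.visibility)
cap.release()
```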
Is this true 3D motion capture?
It's 2.5D — true 2D plus an estimated relative Z from a single camera. Real 3D motion capture needs multiple synchronized cameras for triangulation (the FreeMoCap / OptiTrack / Vicon approach). For TikTok dances, sports analysis, animation reference, or any single-camera workflow, MediaPipe's output is excellent. We'll add a multi-camera tool later for users who need true 3D.
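Relative Z is still useful within a single frame: a smaller Z means the joint is closer to the camera. A small sketch of a depth-ordering check, reusing the hypothetical landmark layout from the sketch above:

```python
# MediaPipe Pose landmark indices (fixed in the 33-landmark layout)
LEFT_WRIST, RIGHT_WRIST = 15, 16

def closer_wrist(landmarks: list[dict]) -> str:
    """Which wrist is nearer the camera in this frame? Smaller z = closer."""
    lw, rw = landmarks[LEFT_WRIST], landmarks[RIGHT_WRIST]
    return "left" if lw["z"] < rw["z"] else "right"
```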
How much does it cost?
200 tokens per second of input video, with a 500-token minimum per job. A 10-second clip costs 2,000 tokens; a 60-second clip costs 12,000. Daily-pool free tokens cover a few short clips per day; signed-in users get 5K/day.
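In code, the pricing rule works out to the sketch below; rounding fractional seconds up is an assumption on our part:

```python
import math

def mocap_cost(duration_s: float) -> int:
    """200 tokens per second of input video, 500-token minimum per job."""
    return max(500, math.ceil(duration_s) * 200)

assert mocap_cost(10) == 2_000   # 10-second clip
assert mocap_cost(60) == 12_000  # 60-second clip
assert mocap_cost(1) == 500      # the minimum applies to very short clips
```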
How fast is it?
Near real-time: roughly 30-50 frames per second on our hardware. A 1-minute 30 fps video processes in 30-60 seconds end-to-end, including upload and render. Longer videos take proportionally longer.
What video formats are supported?
MP4, MOV, WebM, AVI, MKV, and most common video formats — anything ffmpeg can decode. Max upload is 100 MB. Resolution doesn't matter much; the pose model internally downsamples for speed.
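If you want to preflight a file before uploading, a quick local check with ffprobe (which ships with ffmpeg) covers both constraints; this is a convenience sketch, not part of the tool:

```python
import json
import os
import subprocess

def check_video(path: str) -> dict:
    """Raise if over the 100 MB limit; return the first video stream's info."""
    if os.path.getsize(path) > 100 * 1024 * 1024:
        raise ValueError("file exceeds the 100 MB upload limit")
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,width,height",
         "-of", "json", path],
        capture_output=True, text=True, check=True,  # raises if ffprobe can't read it
    )
    return json.loads(out.stdout)["streams"][0]
```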
Can it track more than one person?
The MediaPipe Pose Landmarker tracks one person per frame (the most prominent one). Multi-person tracking would need a different model (RTMPose, YOLOv8-pose). If your use case is multi-person, file an idea via /contact/ — we'd be happy to add it as a separate tool.
How accurate is the tracking?
Visible joints are tracked to within ~5-10 px on a 720p frame; occluded joints (hand behind back, foot out of frame) are filled in with low visibility scores so you can filter them out. Smoothing across frames in your downstream pipeline (Kalman or Savitzky-Golay) cleans up the rest.
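A sketch of that downstream cleanup, assuming you've loaded one coordinate channel into a (frames x 33) NumPy float array with matching visibility scores; the window length and linear gap-fill are assumptions to tune for your footage:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_trajectories(xs: np.ndarray, vis: np.ndarray,
                        min_vis: float = 0.5) -> np.ndarray:
    """Mask low-visibility joints, fill gaps, then Savitzky-Golay smooth."""
    xs = xs.copy()
    xs[vis < min_vis] = np.nan               # drop occluded / off-frame joints
    for j in range(xs.shape[1]):             # linear gap-fill per joint
        col = xs[:, j]
        nans = np.isnan(col)
        if nans.any() and not nans.all():
            col[nans] = np.interp(np.flatnonzero(nans),
                                  np.flatnonzero(~nans), col[~nans])
    return savgol_filter(xs, window_length=9, polyorder=2, axis=0)
```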
Can I export BVH or FBX?
Not directly from AI Motion Capture today — the JSON is the raw keypoint data. You can convert offline using libraries like `aniposelib` or `pose-format`. We're considering shipping a "Mocap → BVH/FBX" follow-on tool; upvote it via /contact/ if you want it.
What happens to my uploaded video?
It's processed immediately: the keypoints are extracted, then the input video is deleted. The skeleton-overlay output and JSON are kept for the standard share-link expiry (24 h anonymous / 7 d paid). Your footage is never used for training. See /privacy/ for the full policy.
Is there an API?
Yes — POST a multipart `video` file to /v1/video/motion-capture/. It returns {video_url, json_url, duration_s, tokens, share_url}. Bearer auth (sk-free-…) gives you 10,000 tokens/month free. Curl example at /api/.
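The same call from Python, as a sketch; the base URL is a placeholder to replace with the actual host, and the key is your own:

```python
import requests

API_KEY = "sk-free-..."  # your bearer token

with open("input.mp4", "rb") as f:
    resp = requests.post(
        "https://YOUR-HOST/v1/video/motion-capture/",  # placeholder host
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"video": f},          # multipart field name from the docs above
        timeout=600,                 # long videos take a while end-to-end
    )
resp.raise_for_status()
job = resp.json()
print(job["video_url"], job["json_url"], job["tokens"], job["share_url"])
```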
How would you rate this tool?