If you haven’t set up the SDK yet, follow the iOS setup or Android setup directions first. You’ll need to add the Core library to your app before using a feature-specific API or a custom model.
Use Pose Estimation to track the position of people in images and video. Build an AI-powered fitness coach, immersive AR experiences, and more.
Detect 17 Body Parts
Coordinates for 17 keypoints (body parts) are provided for each person (skeleton) detected.
Our mobile-friendly model was trained on COCO, a large-scale dataset for object detection and keypoint annotation. It predicts keypoints such as:
- Face: nose, eyes, ears
- Upper body: shoulders, elbows, wrists
- Legs: hips, knees, ankles
- View all COCO keypoints
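For reference, the 17 COCO keypoints can be written out as follows. The names and ordering follow the standard COCO keypoint annotation format; the constant name is illustrative, not part of the SDK:

```python
# The 17 keypoints defined by the COCO keypoint annotation standard,
# in the standard COCO ordering. Grouping mirrors the list above.
COCO_KEYPOINTS = [
    "nose",                             # face
    "left_eye", "right_eye",
    "left_ear", "right_ear",
    "left_shoulder", "right_shoulder",  # upper body
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "left_hip", "right_hip",            # lower body
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]

assert len(COCO_KEYPOINTS) == 17
```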
All predictions and model inferences are made entirely on-device. No internet connection is required to process images or video, so inference incurs no network latency and works offline.
Live Video Performance (iOS Only)
Pose estimation runs on live video at interactive frame rates. Exact FPS varies by device, but modern mobile devices can run this feature on a live camera feed.
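Since frame rates vary by device, it can be useful to benchmark throughput yourself. The sketch below times an inference callable over a batch of frames; `infer` is a hypothetical stand-in for the SDK's per-frame prediction call, which differs by platform:

```python
import time

def measure_fps(infer, frames):
    """Return average frames per second for a per-frame inference call.

    `infer` is a placeholder for the SDK's prediction function
    (hypothetical; the real API name depends on the platform).
    """
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

For a stable estimate, run a few warm-up frames first, since the first inference often includes one-time model loading and compilation costs.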