If you haven’t set up the SDK yet, follow the iOS setup or Android setup directions first. You’ll need to add the Core library to your app before using any specific feature API or custom model.
With the Object Detection feature, you can identify objects of interest in an image or in each frame of live video. Each prediction returns a set of detected objects, each with a label, a bounding box, and a confidence score.
If you only need to know what an image contains, not where the objects are located, consider using Image Labeling instead.
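The shape of a prediction described above can be sketched in a language-agnostic way. The type and field names below (`DetectedObject`, `BoundingBox`, `filter_predictions`) are hypothetical illustrations, not the SDK's actual API; a common post-processing step is to keep only detections above a confidence threshold:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox:
    # Normalized coordinates in [0, 1], relative to the image size.
    x: float
    y: float
    width: float
    height: float

@dataclass
class DetectedObject:
    label: str            # e.g. "person" or "dog"
    box: BoundingBox      # where the object appears in the image
    confidence: float     # model confidence in [0, 1]

def filter_predictions(objects: List[DetectedObject],
                       threshold: float = 0.6) -> List[DetectedObject]:
    """Keep only detections the model is reasonably sure about."""
    return [obj for obj in objects if obj.confidence >= threshold]

predictions = [
    DetectedObject("person", BoundingBox(0.1, 0.2, 0.3, 0.6), 0.92),
    DetectedObject("dog",    BoundingBox(0.5, 0.5, 0.2, 0.3), 0.41),
]
confident = filter_predictions(predictions)
print([obj.label for obj in confident])  # → ['person']
```

The right threshold depends on your use case: a higher value trades recall for fewer false positives.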
The Object Detection feature makes predictions entirely on-device; no internet connection is required to interpret images or video. Because there is no network round trip, predictions return with very low latency.
Live Video Performance
Object Detection is designed to run on live video with a fast frame rate. Exact FPS performance varies depending on device, but it should be possible to run this feature on live video on modern mobile devices.
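To keep a live preview smooth when the model occasionally takes longer than one frame interval, a common pattern is to drop incoming camera frames while a prediction is still in flight rather than queueing them. The `FrameGate` class below is a hypothetical, language-agnostic sketch of that pattern, not part of the SDK:

```python
class FrameGate:
    """Drops incoming camera frames while a prediction is still running,
    so the video preview never stalls behind the model."""

    def __init__(self):
        self.busy = False

    def try_acquire(self) -> bool:
        # True means: run the model on this frame.
        # False means: drop the frame and show the preview as-is.
        if self.busy:
            return False
        self.busy = True
        return True

    def release(self):
        # Call when the prediction for the current frame completes.
        self.busy = False

gate = FrameGate()
processed = dropped = 0
for frame in range(5):
    if gate.try_acquire():
        processed += 1
        # run_model(frame) would go here; simulate a slow prediction
        # that finishes before the next frame only for even frames.
        if frame % 2 == 0:
            gate.release()
    else:
        dropped += 1
print(processed, dropped)  # → 2 3
```

In a real app the model would run on a background queue and call `release()` from its completion handler, so the camera callback itself never blocks.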
The model recognizes 90 labels from the COCO dataset.