If you haven’t set up the SDK yet, follow the iOS setup or Android setup directions first. You’ll need to add the Core library to your app before using any feature-specific API or custom model.
Ready-to-use APIs. Choose from our ready-to-use computer vision features with ML models baked right in.
Local, private, fast. Every ML model runs directly on-device, which means it works offline, runs at high frame rates, and keeps sensitive data on the device.
Cross-platform. Everything we offer is available for both iOS and Android, with the same SDK and the same easy-to-use APIs.
Running models on-device involves different resource constraints than running models in the cloud. To help with this, we provide multiple model variants so developers can choose the one that best fits their use case.
Optimized for speed. Fast models are best for applications that require real-time processing of video streams or that run on older devices. As a result, models optimized for speed may operate at a lower resolution than the accurate variants.
Models optimized for speed generally use a smaller input size when processing images. This helps reduce the amount of computation needed, but the results may be less precise than models optimized for accuracy.
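To see why input size matters so much, note that the cost of running a vision model scales roughly with the number of input pixels. A minimal sketch (the input sizes below are hypothetical examples, not the SDK's actual model dimensions):

```python
# Illustrative only: convolutional inference cost scales roughly with the
# number of input pixels, so shrinking the input side length cuts the work
# quadratically. The sizes below are made-up examples.

def relative_cost(width: int, height: int, base_width: int, base_height: int) -> float:
    """Approximate compute cost of one inference, relative to a base input size."""
    return (width * height) / (base_width * base_height)

# A hypothetical Fast variant at 256x256 vs. an Accurate variant at 512x512:
print(relative_cost(256, 256, 512, 512))  # 0.25 -- a quarter of the work per frame
```

Halving the input side length quarters the computation per frame, which is what lets the Fast variants keep up with live video.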
Optimized for accuracy. Accurate models are great for applications that process still images, or for background video processing where prediction quality matters more than speed.
While Accurate models may run fast enough on the latest iOS devices for a smooth real-time experience, older or less powerful devices will show significant lag. In those cases, consider the Fast variant instead.
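One practical way to decide between the two variants is to measure inference time on the target device and compare it against the frame budget. A minimal sketch, where the function name and the 30 fps target are assumptions for illustration rather than anything in the SDK:

```python
# Hypothetical helper: pick a model variant from a measured inference time.
# Measure on your actual target devices rather than hard-coding numbers.

def choose_variant(inference_ms: float, target_fps: float = 30.0) -> str:
    """Return 'accurate' if one inference fits within a single frame budget,
    otherwise fall back to the faster variant."""
    frame_budget_ms = 1000.0 / target_fps  # ~33.3 ms at 30 fps
    return "accurate" if inference_ms <= frame_budget_ms else "fast"

print(choose_variant(25.0))  # fits the 33 ms budget -> accurate
print(choose_variant(80.0))  # too slow for real time -> fast
```

For still images, where there is no frame budget at all, the Accurate variant is usually the right default.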
Optimized for size. Small models keep your application's bundle size low and conserve bandwidth when models are downloaded over the air. They will generally be slightly less accurate than their larger counterparts.
Small models use weight quantization to shrink the model file. As a result, a quantized model has different performance characteristics than its un-quantized counterpart; in particular, quantized models may not be able to run on the GPU.
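The idea behind weight quantization can be sketched in a few lines: each 32-bit float weight is mapped onto one of 256 integer levels spanning the weight range, cutting storage roughly 4x at the cost of a small rounding error per weight. This is an illustrative sketch of the general technique, not the SDK's actual quantization scheme:

```python
# Illustrative 8-bit linear quantization: store one byte per weight plus a
# scale and offset, instead of a 32-bit float per weight (~4x smaller).

def quantize(weights: list[float]) -> tuple[list[int], float, float]:
    """Map float weights onto 256 integer levels spanning their range."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against a zero range
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q: list[int], scale: float, lo: float) -> list[float]:
    """Recover approximate float weights from the 8-bit representation."""
    return [v * scale + lo for v in q]

weights = [-0.8, -0.1, 0.0, 0.3, 0.9]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# The worst-case rounding error is half a quantization step (scale / 2):
print(max(abs(w - r) for w, r in zip(weights, restored)))
```

The rounding error is what makes quantized models slightly less accurate, and the integer representation is why they may take a different execution path (for example, CPU instead of GPU) at inference time.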