Frequently asked questions and troubleshooting for Fritz AI. If you don’t find what you’re looking for, get in touch through our Help Center.
My app won’t build because bundle identifier / application ID does not match API Key.¶
When setting up the Fritz SDK, you should have created a new iOS or Android application in the webapp and specified a package identifier. This identifier must match the bundle ID (on iOS) or applicationId (on Android) that you build your project with.
If you’re unsure which package ID you entered into Fritz, you can find it by logging into the webapp, selecting Project Settings from the left navigation menu, and inspecting the Package Name column. If there is an error in your package name, you can delete your app with the drop-down menu on the right and create a new one.
Does Fritz Vision work with images from my camera roll, an SD card, or a live camera preview?¶
Yes. Fritz Vision for iOS and Android is designed to work with any image whether it comes from a saved photo in a user’s camera roll, a storage device like an SD card, or a live preview from a built-in camera.
Even if you don’t see a tutorial or example project using the exact image type you had in mind, chances are Fritz Vision will still work for your use case.
Can I use pose estimation to track people in 3D space?¶
Why is my model slow?¶
Neural networks are computationally intensive, and older devices may not be powerful enough to provide completely smooth experiences. However, if your model is running slower than you think it should, there are a few things to check. When running prediction on an image, the image is processed to prepare it for the model input. Larger images take longer to process, so double-check the size of the image you’re passing into the predict method. To get the best performance for video, make sure the preview size for each frame is set to a lower resolution (we recommend setting the preview size to 480x640 for the camera input).
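As a rough illustration of the downscaling step, here is a plain-Java sketch of the resize math. The helper name and the 480x640 cap are just this example’s choices; on Android you would then pass the computed dimensions to Bitmap.createScaledBitmap before calling predict.

```java
public class PreviewSize {
    // Compute the largest width/height that fits inside maxW x maxH
    // while preserving the source aspect ratio.
    static int[] fitWithin(int srcW, int srcH, int maxW, int maxH) {
        double scale = Math.min((double) maxW / srcW, (double) maxH / srcH);
        if (scale >= 1.0) {
            // Already small enough; never upscale.
            return new int[] { srcW, srcH };
        }
        return new int[] { (int) Math.round(srcW * scale), (int) Math.round(srcH * scale) };
    }

    public static void main(String[] args) {
        // A 12 MP photo (3024x4032) scaled to fit the recommended 480x640 preview.
        int[] size = fitWithin(3024, 4032, 480, 640);
        System.out.println(size[0] + "x" + size[1]); // 480x640
    }
}
```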
For Android, run your app in debug mode and check your logs for “Image Processing Time(ms)” and “Inference Time”.
- Image Processing Time(ms) - The time to prepare your image for the model
- Inference Time - The time for the model to run
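If you want to collect the same measurements yourself, a minimal timing sketch looks like the following. The empty lambda bodies are placeholders for your own pre-processing and predict calls; the log format simply mirrors the lines above.

```java
public class TimedPrediction {
    // Measure how long a piece of work takes, in milliseconds.
    static long timeMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000; // ns -> ms
    }

    public static void main(String[] args) {
        // Placeholders for your own image pre-processing and model call.
        long processing = timeMillis(() -> { /* resize / rotate the input image */ });
        long inference  = timeMillis(() -> { /* predictor.predict(visionImage) */ });

        System.out.println("Image Processing Time(ms): " + processing);
        System.out.println("Inference Time: " + inference);
    }
}
```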
Does Fritz require an internet connection?¶
No. All models run directly on-device and can be used without an internet connection. The Fritz SDK collects runtime performance information that is batched and periodically sent to the cloud, but transfers are typically very small and a lack of connectivity will not stop models from working.
Why is cgImage nil on a mask?¶
Mask UIImage objects returned by Fritz Vision are backed by a ciImage rather than a cgImage, so the cgImage property on a mask is nil. Knowing which backing an image has helps you decide when and how to render results from the model.
Swift version compiler errors.¶
If you’re trying to build your Xcode project and are getting an error like: Module compiled with Swift 5.0.1 cannot be imported by the Swift 5.1 compiler
This error means that you’re currently trying to build your app using a different version of Swift than the Fritz SDK was compiled with. This frequently happens when new versions of iOS and Xcode are released. Make sure you’re using the latest version of Xcode and targeting the proper Swift compiler version.
Starting with version 4.1.0, the Fritz SDK for iOS is fully compatible with Xcode 11, iOS 13, and should target version 5.1 of Swift.
How do I update a segmentation mask manually to fix errors or mask additional content?¶
Occasionally the segmentation result may not provide a completely accurate mask or you may want to allow a user to update the mask manually. You can achieve this by modifying the alpha mask directly.
- From the SegmentationResult, export the alpha mask to a bitmap:
- Add or erase pixels in the alpha mask: set a pixel’s alpha to 0 to remove it, or to 1 to add it. You’ll have to implement this editing logic yourself.
- Use the alpha mask to cut out from the original image:
Bitmap finalResult = visionImage.mask(yourModifiedBitmap);
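As an illustration of the editing step, here is a plain-Java sketch that edits the alpha channel of an ARGB pixel buffer. On Android you would pull the pixels out of the mask Bitmap with getPixels, edit them like this, and write them back with setPixels; the helper names here are this example’s own.

```java
public class MaskEditor {
    // ARGB pixels: the alpha channel lives in the top 8 bits.
    static final int OPAQUE = 0xFF;

    // Replace the alpha of one pixel: 0 erases it from the mask, 255 adds it.
    static int withAlpha(int pixel, int alpha) {
        return (alpha << 24) | (pixel & 0x00FFFFFF);
    }

    // Apply an alpha edit to a rectangular region of a row-major pixel buffer.
    static void setRegionAlpha(int[] pixels, int width, int x0, int y0, int x1, int y1, int alpha) {
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                pixels[y * width + x] = withAlpha(pixels[y * width + x], alpha);
            }
        }
    }

    public static void main(String[] args) {
        int[] mask = new int[4 * 4]; // a 4x4 fully transparent mask
        setRegionAlpha(mask, 4, 0, 0, 2, 2, OPAQUE); // "add" the top-left 2x2 block
        System.out.println(Integer.toHexString(mask[0] >>> 24)); // ff
    }
}
```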
How do I use GPU-enabled models?¶
To enable the model to run on the GPU, you must explicitly set it in the predictor options:
FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
options.useGPU = true;
SegmentationOnDeviceModel onDeviceModel = FritzVisionModels.getPetSegmentationOnDeviceModel(ModelVariant.FAST);
FritzVisionSegmentationPredictor predictor = FritzVision.ImageSegmentation.getPredictor(onDeviceModel, options);
In order for the model to run, the following conditions must be met:
- The predictor must be used on the same thread it was initialized on. For example, calls to predictor.predict(visionImage) must happen on the thread that created the predictor.
- The OpenGL context must be initialized before loading a GPU-backed predictor.
Running models on the GPU is currently not supported with over-the-air (OTA) model updates.
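One way to satisfy the same-thread requirement is to confine both initialization and prediction to a single-threaded executor. This is a sketch of the pattern only: the Predictor class below is a stand-in for the Fritz predictor, and the SameThreadPredictor wrapper is this example’s own invention.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SameThreadPredictor {
    // Stand-in for a GPU-backed predictor; it records its creation thread.
    static class Predictor {
        final Thread home = Thread.currentThread();
        String predict(String image) {
            if (Thread.currentThread() != home) {
                throw new IllegalStateException("predict called off the creation thread");
            }
            return "mask for " + image;
        }
    }

    final ExecutorService gpuThread = Executors.newSingleThreadExecutor();
    Predictor predictor;

    Future<?> init() {
        // Create the predictor on the dedicated thread...
        return gpuThread.submit(() -> predictor = new Predictor());
    }

    Future<String> predict(String image) {
        // ...and route every prediction through that same thread.
        return gpuThread.submit(() -> predictor.predict(image));
    }

    public static void main(String[] args) throws Exception {
        SameThreadPredictor p = new SameThreadPredictor();
        p.init().get();
        System.out.println(p.predict("frame-1").get()); // mask for frame-1
        p.gpuThread.shutdown();
    }
}
```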
Still have a question? Get in touch through our Help Center.