Image Segmentation

Image Segmentation allows developers to partition a video or image into multiple segments that represent everyday things. As an example, image segmentation can help identify the outline of people walking in the street or discern the shapes of everyday things in your living room like couches and chairs.

Custom Training Models for Image Segmentation

You can train a custom model that is compatible with the Image Segmentation API by using Fritz AI Studio.

Pre-Trained Models

Include our models directly in your app and use them with the API.

Name | Description | Classes | Mask Resolution
People | A model that detects people masks. | person | 384x384
Pet | A model that detects pet masks. | pet | 224x224
Hair | A model that detects hair masks. | hair | 224x224
Sky | A model that detects sky masks. | sky | 224x224
Outdoor | A model that detects object masks in an outdoor scene. | Building, Sky, Tree, Sidewalk, Ground, Car, Water, House, Fence, Fencing, Signboard, Sign, Skyscraper, Bridge, Span, River, Bus, Truck, Van, Motorbike, Bicycle, Traffic Light, Person | 384x384
Living Room | A model that detects object masks in a living room scene. | Chair, Wall, Coffee Table, Ceiling, Floor, Bed, Lamp, Sofa, Window, Pillow | 384x384

Technical Specifications

Architecture | Format(s) | Model Size | Input | Output | Benchmarks
MobileNet and ICNet variants | Core ML (iOS), TensorFlow Lite (Android) | ~25 MB | 224x224-pixel image | Height and width of the mask, number of classes the model predicts, probability that each pixel belongs to a class | 30 FPS on iPhone X, 10 FPS on Pixel 2

iOS

You can use a Fritz Image Segmentation Model to partition an image into multiple segments that recognize everyday objects. These instructions will help you get Image Segmentation running in your app in no time.

1. Build the Segmentation Model

To build the segmentation model, you can either include the model in your app bundle or download it over the air once the user installs your app.

Include Pre-Trained Image Segmentation Models (Optional)

Choose between fast, accurate, or small variants. Model variants make sure you get the right model for your use case. For more information on the tradeoffs of each variant, see Choosing a Model Variant.

// For the fast variant
pod 'Fritz/VisionSegmentationModel/People/Fast'
// For accurate variant
pod 'Fritz/VisionSegmentationModel/People/Accurate'
// For small variant
pod 'Fritz/VisionSegmentationModel/People/Small'
Make sure to install the added pod
# Update the podspec repo with the latest version of the Fritz SDK
pod repo update
pod install

If you've built the app with just the core Fritz pod and then add a new submodule for the model, you may encounter the error "Cannot invoke initializer for type". To fix this, run pod update and clean your Xcode build.

Define People Segmentation Model

Choose which segmentation model you want to use. There should only be one instance of the model that is shared across all predictions. Here is an example using the FritzVisionPeopleSegmentationModelFast:

import Fritz
let peopleModel = FritzVisionPeopleSegmentationModelFast()
Model initialization

It's important to initialize a single instance of the model so you are not loading the entire model into memory on every prediction. Usually this is a property on a ViewController. When loading the model in a ViewController, the following approaches are recommended:

Lazy-load the model

By lazy-loading the model, you won't load it until the first prediction. This has the benefit of not loading the model prematurely, but it may make the first prediction take slightly longer.

class MyViewController: UIViewController {
  lazy var model = FritzVisionPeopleSegmentationModelFast()
}

Load model in viewDidLoad

By loading the model in viewDidLoad, you'll ensure that you're not loading the model before the view controller is loaded. The model will be ready to go for the first prediction.

class MyViewController: UIViewController {
  var model: FritzVisionPeopleSegmentationModelFast!

  override func viewDidLoad() {
    super.viewDidLoad()
    model = FritzVisionPeopleSegmentationModelFast()
  }
}

Alternatively, you can initialize the model property directly. However, if the ViewController is instantiated by a Storyboard and is the Initial View Controller, its properties will be initialized before the AppDelegate's launch method runs. This can cause the app to crash if the model is loaded before FritzCore.configure() is called.
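As a minimal sketch (assuming a standard UIKit AppDelegate), make sure FritzCore.configure() runs before any view controller initializes a model property:

import UIKit
import Fritz

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

  var window: UIWindow?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Configure the Fritz SDK before any model properties are initialized.
    FritzCore.configure()
    return true
  }
}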

Download the model over the air

Only available on Growth plans

For more information on plans and pricing, visit our website.

Include Fritz/Vision in your Podfile.

pod 'Fritz/Vision'

Make sure to run a pod install with the latest changes.

pod install

Add code to download the model

import Fritz
var peopleModel: FritzVisionPeopleSegmentationModelFast?
FritzVisionPeopleSegmentationModelFast.fetchModel { model, error in
  guard let downloadedModel = model, error == nil else { return }
  peopleModel = downloadedModel
}
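Because the model is downloaded asynchronously, peopleModel stays nil until fetchModel completes. Here is a minimal sketch (reusing the FritzVisionImage and predict calls from steps 2 and 3) of unwrapping it before predicting:

// Only run a prediction once the download has completed.
if let model = peopleModel,
  let result = try? model.predict(image) {
  // Use the segmentation result here.
}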

2. Create FritzVisionImage

FritzVisionImage supports different image formats.

Using a CMSampleBuffer

If you are using a CMSampleBuffer from the built-in camera, first create the FritzVisionImage instance:

// Swift
let image = FritzVisionImage(buffer: sampleBuffer)

// Objective-C
FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithBuffer: sampleBuffer];
// or
FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithImage: uiImage];

The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation of an image. By default, if you attach a FritzVisionImageMetadata to the image, the orientation will be .right:

// Swift
image.metadata = FritzVisionImageMetadata()
image.metadata?.orientation = .left

// Objective-C
visionImage.metadata = [FritzVisionImageMetadata new];
visionImage.metadata.orientation = FritzImageOrientationLeft;
Setting the Orientation from the Camera

Data passed in from the camera will generally need the orientation set. When using a CMSampleBuffer to create a FritzVisionImage the orientation will change depending on which camera and device orientation you are using.

When using the back camera in the portrait Device Orientation, the orientation should be .right (the default if you specify FritzVisionImageMetadata on the image). When using the front facing camera in portrait Device Orientation, the orientation should be .left.

You can initialize the FritzImageOrientation with the AVCaptureConnection to infer orientation (if the Device Orientation is portrait):

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
let image = FritzVisionImage(sampleBuffer: sampleBuffer, connection: connection)
...
}

Using a UIImage

If you are using a UIImage, create the FritzVisionImage instance:

let image = FritzVisionImage(image: uiImage)

The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation of an image:

image.metadata = FritzVisionImageMetadata()
image.metadata?.orientation = .right
Set the image orientation

UIImage can have associated UIImageOrientation data (for example, when capturing a photo from the camera). To make sure the model correctly handles the orientation data, initialize the FritzImageOrientation with the image's orientation:

image.metadata?.orientation = FritzImageOrientation(image.imageOrientation)

3. Run image segmentation model

Run image segmentation on input image:

guard let result = try? peopleModel.predict(image) else { return }

Configure Image Segmentation Model

Before running image segmentation, you can configure the prediction with a FritzVisionSegmentationModelOptions object.

Settings

Option | Description
imageCropAndScaleOption | Crop and scale option for how to resize and crop the image for the model. Default: .scaleFit

For example, you can configure the prediction to center-crop the image to the model's input size (384x384):

let options = FritzVisionSegmentationModelOptions()
options.imageCropAndScaleOption = .centerCrop
guard let result = try? peopleModel.predict(image, options: options) else { return }

4. Create masks from image segmentation result

Create image with all identified classes

let result: FritzVisionSegmentationResult = // ... Result from prediction above
let peopleMask = result.buildMultiClassMask()

Pass in a minimum threshold so the mask only includes pixels with a confidence score at or above withMinimumAcceptedScore:

let result: FritzVisionSegmentationResult = // ... Result from prediction above
let peopleMask = result.buildMultiClassMask(withMinimumAcceptedScore: 0.7)

Mask a specific class

You can create a mask for a specific ModelSegmentationClass:

let result: FritzVisionSegmentationResult = // ... Result from prediction above
let peopleMask = result.buildSingleClassMask(forClass: FritzVisionPeopleClass.person)

Additionally, you can call buildSingleClassMask with clippingScoresAbove and zeroingScoresBelow arguments. This is helpful for dealing with the model's uncertainty.

  • When clippingScoresAbove is set, any pixels with a confidence score above that threshold will have an alpha value of 255 (completely opaque).
  • When zeroingScoresBelow is set, any pixels with a confidence score below that threshold will not appear in the mask.
  • Any scores between clippingScoresAbove and zeroingScoresBelow will have an alpha value of classProbability * 255. This is useful for creating a soft edge around predictions that may still contain the desired class.
let result: FritzVisionSegmentationResult = // ... Result from prediction above
let peopleMask = result.buildSingleClassMask(
  forClass: FritzVisionPeopleClass.person,
  clippingScoresAbove: 0.7,
  zeroingScoresBelow: 0.3)

5. Display the Mask

Blur the edges of the mask

For fainter edges, you can add a blur to the mask.

Blur the mask edges

Left: Original Image | Middle: Blurred mask | Right: Mask overlay

guard let mask = result.buildSingleClassMask(
  forClass: FritzVisionHairSegmentationClass.hair,
  maxAlpha: 180,
  color: .cyan,
  blurRadius: 15) else { return }

You can specify a radius for the blur. In this example, the blur has a radius of 15. The default radius is 0, which results in no blur.

Cut out the mask from the original image

To create a cut out from the original image using the mask, use the masked method on the FritzVisionImage object.

Create a cut out mask

Left: Original Image | Middle: Pet Segmentation mask | Right: Mask cut out

let image = FritzVisionImage(...) // Image from above
let peopleMask = result.buildSingleClassMask(...) // Mask from above
let clippedMaskImage = image.masked(withAlphaMask: peopleMask)

Blend the mask color with the original image:

To change hair color with Hair Segmentation, you can blend the masked color with the original image.

Blend the mask colors

Left: Original Image | Middle: Hair Segmentation mask | Right: Blended bitmap with the mask

guard let mask = result.buildSingleClassMask(
  forClass: FritzVisionHairSegmentationClass.hair,
  clippingScoresAbove: 0.6,
  zeroingScoresBelow: 0.2,
  resize: false,
  color: .red) else { return }

let blended = image.blend(
  withMask: mask,
  blendKernel: .softLight,
  opacity: 0.7
)

Here you can change the sensitivity of the hair mask. In this example, any pixels with a confidence score above 0.6 will be completely opaque (alpha of 255) in the resulting mask. Values below 0.2 will be completely transparent.

Customize

Imagine you are training an animal segmentation model on 4 different classes: None (matching no animals), Bird, Dog, and Horse. In your trained model, these correspond to indices 0, 1, 2, and 3, respectively.

After you've finished training your model, follow these steps to add it to your iOS app.

1. Create a custom model for your trained model in the webapp and add it to your Xcode project.

For instructions on how to do this, see Integrating a Custom Core ML Model.

2. Conform your model

For the following instructions, assume your model is named CustomAnimalSegmentationModel.mlmodel. Create a new file called CustomAnimalSegmentationModel+Fritz.swift and conform your class like so:

import Fritz
extension CustomAnimalSegmentationModel: SwiftIdentifiedModel {
  static let modelIdentifier = "model-id-abcde"
  static let packagedModelVersion = 1
}

3. Define the AnimalSegmentation Model

First define the classes used in your model. We recommend you create a class to hold all individual classes. Each ModelSegmentationClass is created with a name and a color used for the mask.

import Fritz
public class AnimalClass: NSObject {
  public static let none = ModelSegmentationClass(label: "None", index: 0, color: (0, 0, 0, 0))
  public static let bird = ModelSegmentationClass(label: "Bird", index: 1, color: (0, 0, 0, 255))
  public static let dog = ModelSegmentationClass(label: "Dog", index: 2, color: (0, 0, 128, 255))
  public static let horse = ModelSegmentationClass(label: "Horse", index: 3, color: (230, 25, 75, 255))

  public static let allClasses: [ModelSegmentationClass] = [.none, .bird, .dog, .horse]
}

Next, create the image segmentation model:

let animalSegmentationModel = FritzVisionSegmentationPredictor(
  model: CustomAnimalSegmentationModel(),
  name: "AnimalSegmentationModel",
  classes: AnimalClass.allClasses)

4. Run prediction

Follow steps 3 (Run image segmentation model) and 4 (Create masks from image segmentation result) above to use your custom segmentation model.
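For example, a prediction with the custom predictor and a mask for one of the custom classes might look like the following sketch, which reuses the predict and buildSingleClassMask calls shown earlier:

let image = FritzVisionImage(image: uiImage)
guard let result = try? animalSegmentationModel.predict(image) else { return }

// Build a mask containing only pixels classified as "Dog".
let dogMask = result.buildSingleClassMask(forClass: AnimalClass.dog)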

5. Use the record method on the predictor to collect data

The FritzVisionSegmentationPredictor used to make predictions has a record method allowing you to send an image, a model-predicted annotation, and a user-generated annotation back to your Fritz AI account.

note

Thus far, the term Segmentation Mask has been used to refer to the various UIImage objects used for processing images and creating filters such as color overlays or background removal.

Here, Segmentation Mask refers to data denoting whether or not an individual pixel belongs to a particular class. The mask objects are arrays of 0 or 1 values and not images.

// For predicted segmentations, confidenceThreshold and areaThreshold values
// can be used to control which pixels are included in each segmentation mask.
guard let results = try? segmentationModel.predict(image, options: options),
  let predictedSegmentations = results.segmentationMasks(confidenceThreshold: 0.5, areaThreshold: 0.1)
  else { return }

// Implement your own custom UX for users to annotate segmentation masks.
segmentationModel.record(image, predicted: predictedSegmentations, modified: modifiedMasks)

Additional Resources

Want more resources to get started? Check out the following:


Android

Fritz AI provides an Android API that you can use to partition an image into multiple segments that recognize everyday objects. Follow these simple instructions in order to bring image segmentation to your app in no time.

1. Add the dependencies via Gradle

Add our repository in order to download the Vision API:

repositories {
maven { url "https://fritz.mycloudrepo.io/public/repositories/android" }
}

Add renderscript support and include the vision dependency in app/build.gradle. Renderscript is used in order to improve image processing performance. You'll also need to specify aaptOptions in order to prevent compressing TensorFlow Lite models.

android {
  defaultConfig {
    renderscriptTargetApi 21
    renderscriptSupportModeEnabled true
  }

  // Don't compress included TensorFlow Lite models on build.
  aaptOptions {
    noCompress "tflite"
  }
}

dependencies {
  implementation 'ai.fritz:vision:+'
}

(Optional: include the model in your app) To include an Image Segmentation model with your build, add the dependency as shown below. Note: this bundles the model with your app when you publish it to the Play Store and will increase your app size.

To identify people (segments that represent people are marked in cyan), choose one of the following variants.

Fast

  • Resolution: 384x384
  • Model Size: 26.8MB
  • Model Speed: 250ms (on Pixel)
dependencies {
implementation 'ai.fritz:vision-people-segmentation-model-fast:{ANDROID_MODEL_VERSION}'
}

Accurate

  • Resolution: 768x768
  • Model Size: 26.8MB
  • Model Speed: 700ms (on Pixel)
dependencies {
implementation 'ai.fritz:vision-people-segmentation-model-accurate:{ANDROID_MODEL_VERSION}'
}

Small

  • Resolution: 224x224
  • Model Size: 4MB
  • Model Speed: 250ms (on Pixel)
dependencies {
implementation 'ai.fritz:vision-people-segmentation-model-small:{ANDROID_MODEL_VERSION}'
}
note

Behind the scenes, People Segmentation uses a TensorFlow Lite model. In order to include this with your app, you'll need to make sure that the model is not compressed in the APK by setting aaptOptions.

Now you're ready to segment images with the Image Segmentation API.

2. Get a Segmentation Predictor

In order to use the predictor, the on-device model must first be loaded.

Pre-trained Models

There are 3 different model variants for each segmentation model: fast, accurate, and small.

If you followed the Optional step above and included the model, you can get a predictor to use immediately:

// For fast
SegmentationOnDeviceModel onDeviceModel = FritzVisionModels.getPeopleSegmentationOnDeviceModel(ModelVariant.FAST);
FritzVisionSegmentationPredictor predictor = FritzVision.ImageSegmentation.getPredictor(onDeviceModel);

If you did not include the on-device model, you'll have to load the model before you can get a predictor. To do that, you'll call FritzVision.ImageSegmentation.loadPredictor to start the model download.

FritzVisionSegmentationPredictor predictor;

SegmentationManagedModel managedModel = FritzVisionModels.getPeopleSegmentationManagedModel(ModelVariant.FAST);
FritzVision.ImageSegmentation.loadPredictor(managedModel, new PredictorStatusListener<FritzVisionSegmentationPredictor>() {
  @Override
  public void onPredictorReady(FritzVisionSegmentationPredictor segmentationPredictor) {
    Log.d(TAG, "Segment predictor is ready");
    predictor = segmentationPredictor;
  }
});
note

For other image segmentation models, the FritzVisionModels method to fetch a pretrained model is defined with the following convention:

"get<ModelType>SegmentationOnDeviceModel(ModelVariant modelVariant)" "get<ModelType>SegmentationManagedModel(ModelVariant modelVariant)"

e.g FritzVisionModels.getSkySegmentationOnDeviceModel(ModelVariant.ACCURATE); e.g FritzVisionModels.getSkySegmentationManagedModel(ModelVariant.ACCURATE);

  • Model Types = Hair, People, LivingRoom, Outdoor, Pet, Sky
  • Model Variants = ModelVariant.SMALL, ModelVariant.ACCURATE, ModelVariant.FAST
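
For example, loading the accurate Hair segmentation model over the air might look like the following sketch (the getter name assumes the convention above holds for the Hair model):

SegmentationManagedModel hairManagedModel = FritzVisionModels.getHairSegmentationManagedModel(ModelVariant.ACCURATE);
FritzVision.ImageSegmentation.loadPredictor(hairManagedModel, new PredictorStatusListener<FritzVisionSegmentationPredictor>() {
  @Override
  public void onPredictorReady(FritzVisionSegmentationPredictor hairPredictor) {
    // Keep a reference to the predictor for later predictions.
  }
});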

Custom Models

If you've trained a custom model using Fritz AI Studio, you'll need to construct a SegmentationOnDeviceModel.

This requires your model file, model ID, and information about the classes your model predicts.

MaskClass[] maskClasses = {
  // The first class must be "None"
  new MaskClass("None", Color.TRANSPARENT),
  new MaskClass("Class 1", Color.RED),
  new MaskClass("Class 2", Color.BLUE),
  // ...
};

SegmentationOnDeviceModel onDeviceModel = new SegmentationOnDeviceModel(
  "file:///android_asset/YourModelFile.tflite",
  "your-model-id",
  yourModelVersionNumber,
  maskClasses
);

3. Create a FritzVisionImage from an image or a video stream

To create a FritzVisionImage from a Bitmap:

FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap);

To create a FritzVisionImage from a media.Image object when capturing the result from a camera, first determine the orientation of the image. This will rotate the image to account for device rotation and the orientation of the camera sensor.

// Get the system service for the camera manager
final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);

// Get the first camera id
String cameraId = manager.getCameraIdList()[0];

// Determine the rotation for the FritzVisionImage from the camera orientation and the device rotation.
// "this" refers to the calling Context (Application, Activity, etc.)
ImageRotation imageRotationFromCamera = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);

Finally, create the FritzVisionImage object with the rotation

FritzVisionImage visionImage = FritzVisionImage.fromMediaImage(image, imageRotationFromCamera);

4. Run prediction on FritzVisionImage

Pass your FritzVisionImage into the predictor to create masks on the original image.

FritzVisionSegmentationResult segmentationResult = predictor.predict(visionImage);

Running predict on the image returns a FritzVisionSegmentationResult object with the following methods.

FritzVisionSegmentationResult methods

Method | Description
float[][] getConfidenceScores() | Get the raw confidence scores of the output. This matrix will be the same size as the model output.
MaskClass[][] getMaskClassifications() | Gets a grid of MaskClass objects that represent the output classification of the model (e.g. MaskClass.PERSON, MaskClass.BUS, MaskClass.NONE). The matrix is the same size as the model output.
Bitmap buildMultiClassMask() | Create a mask of the classes detected. The resulting bitmap will be the same size as the model output.
Bitmap buildMultiClassMask(int maxAlpha, float clippingScoresAbove, float zeroingScoresBelow) | Create a mask of the classes detected using the given options. The resulting bitmap will be the same size as the model output.
Bitmap buildMultiClassMask(int maxAlpha, float clippingScoresAbove, float zeroingScoresBelow, float blurRadius) | Create a mask of the classes detected using the given options. Blurs the edges of the mask.
Bitmap buildSingleClassMask(MaskClass maskClass) | Create a mask for a given MaskClass.
Bitmap buildSingleClassMask(MaskClass maskClass, int maxAlpha, float clippingScoresAbove, float zeroingScoresBelow) | Create a mask for a given MaskClass using the given options.
Bitmap buildSingleClassMask(MaskClass maskClass, int maxAlpha, float clippingScoresAbove, float zeroingScoresBelow, int maskColor) | Create a mask for a given MaskClass using the given options. Overrides the color for the MaskClass.
Bitmap buildSingleClassMask(MaskClass maskClass, int maxAlpha, float clippingScoresAbove, float zeroingScoresBelow, int maskColor, float blurRadius) | Create a mask for a given MaskClass using the given options. Overrides the color for the MaskClass. Blurs the edges of the mask.

Calling buildSingleClassMask with clippingScoresAbove and zeroingScoresBelow arguments helps deal with the model's uncertainty.

  • When clippingScoresAbove is set, any pixels with a confidence score above that threshold will have an alpha value of 255 (completely opaque).
  • When zeroingScoresBelow is set, any pixels with a confidence score below that threshold will not appear in the mask.
  • Any scores between clippingScoresAbove and zeroingScoresBelow will have an alpha value of classProbability * 255. This is useful for creating a soft edge around predictions that may still contain the desired class.
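
As a rough sketch of working with the raw output (this assumes MaskClass constants can be compared by reference, which is not confirmed here), you could count how many cells of the output grid were classified as a person:

MaskClass[][] classifications = segmentationResult.getMaskClassifications();
int personCells = 0;
for (MaskClass[] row : classifications) {
  for (MaskClass cell : row) {
    // MaskClass.PERSON is the constant used elsewhere in this documentation.
    if (cell == MaskClass.PERSON) {
      personCells++;
    }
  }
}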

5. Displaying the result

Overlay the mask on top of the original image

To view the mask result, use the overlay method on the visionImage passed into the predict method.

Overlay the mask

Left: Original Image | Middle: People Segmentation mask | Right: Mask overlay

// Create a mask
Bitmap personMask = segmentationResult.buildSingleClassMask(MaskClass.PERSON);
Bitmap imageWithMask = visionImage.overlay(personMask);

Blur the edges of the mask

For fainter edges, you can add a blur to the mask. You can specify a radius for the blur (0.0-25.0).

Blur the mask edges

Left: Original Image | Middle: Blurred mask | Right: Mask overlay

// Create a mask (maxAlpha 180, clippingScoresAbove 1f, zeroingScoresBelow .3f, maskColor Color.CYAN, blurRadius 15f)
Bitmap personMask = segmentationResult.buildSingleClassMask(MaskClass.PERSON, 180, 1f, .3f, Color.CYAN, 15f);
Bitmap imageWithMask = visionImage.overlay(personMask);

Cut out the mask from the original image

To create a cut out from the original image using the mask, use the mask method on the visionImage passed into the predict method.

Create a cut out mask

Left: Original Image | Middle: Pet Segmentation mask | Right: Mask cut out

// Create a mask (maxAlpha 255, clippingScoresAbove .5f, zeroingScoresBelow .5f)
Bitmap personMask = segmentationResult.buildSingleClassMask(MaskClass.PERSON, 255, .5f, .5f);

// This image will have the same dimensions as visionImage
Bitmap imageWithMask = visionImage.mask(personMask);

// To trim the transparent pixels, set the optional trim parameter to true
Bitmap trimmedImageWithMask = visionImage.mask(personMask, true);

Blend the mask color with the original image:

For use cases such as changing hair color with Hair Segmentation, you can blend the mask color with the original image. Choose one of the following BlendModes. You may specify an alpha to apply to the mask before blending (0-255).

Blend the mask colors

Left: Original Image | Middle: Hair Segmentation mask | Right: Blended bitmap with the mask

// Hue Blend
BlendMode hueBlend = BlendMode.HUE;
// Color Blend
BlendMode colorBlend = BlendMode.COLOR;
// Soft Light Blend
BlendMode softLightBlend = BlendMode.SOFT_LIGHT;

Set the color for the mask and then create a blended bitmap from the result.

FritzVisionSegmentationResult hairResult = hairPredictor.predict(visionImage);
Bitmap maskBitmap = hairResult.buildSingleClassMask(MaskClass.HAIR, 180, .5f, .5f, Color.BLUE);
// For example, use the soft light blend mode defined above
Bitmap blendedBitmap = visionImage.blend(maskBitmap, softLightBlend);

The blendedBitmap object will have the same size as visionImage.

6. Use the record method on the predictor to collect data

The FritzVisionSegmentationPredictor used to make predictions has a record method allowing you to send an image, a model-predicted annotation, and a user-generated annotation back to your Fritz AI account.

FritzVisionSegmentationResult predictedResults = visionPredictor.predict(visionImage);
// Implement your own custom UX for users to annotate an image and store
// that as a FritzVisionSegmentationResult.
visionPredictor.record(visionImage, predictedResults.toAnnotations(), modifiedResults.toAnnotations());
// Optionally, you can use confidenceThreshold and areaThreshold to create annotations
// only when the model is confident enough and the objects are large enough.
visionPredictor.record(visionImage, predictedResults.toAnnotations(0.5f, 0.1f), null);

Advanced Options

Configuring the Predictor

You can configure the predictor with FritzVisionSegmentationPredictorOptions to return specific results that match the options given:

FritzVisionSegmentationPredictorOptions methods

Option | Default | Description
confidenceThreshold | .3 | Return labels above the confidence threshold.
targetClasses | All classes defined by the segmentation model | The set of classes the model will look for.

To adjust model performance for different devices, you can also set the underlying TensorFlow Lite Interpreter options.

FritzVisionPredictorOptions methods

Option | Default | Description
useGPU | false | Use the GPU for running model inference. Please note, this is an experimental option and should not be used in production apps.
useNNAPI | false | Use the NNAPI for running model inference. Please note, this is an experimental option and should not be used in production apps.
numThreads | 2 | For CPU only, run model inference using the specified number of threads.

For more details, please visit the Official TensorFlow Lite documentation.
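
For example, a sketch of running CPU inference on more threads (this assumes the TensorFlow Lite options are exposed as public fields on the options object, like the segmentation options shown below):

FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
// CPU only: run model inference using 4 threads.
options.numThreads = 4;
predictor = FritzVision.ImageSegmentation.getPredictor(onDeviceModel, options);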

Example:

To target only specific classes (e.g. walls and windows), create a FritzVisionSegmentationPredictorOptions object to pass into the predictor.

Initialize the options when getting the segmentation predictor:

// List the segments to target
List<MaskClass> targetMasks = new ArrayList<>();
targetMasks.add(MaskClass.WALL);
targetMasks.add(MaskClass.WINDOW);

// Create predictor options with a confidence threshold.
// If a pixel is below the confidence threshold, the segment will be marked
// as MaskClass.NONE.
FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
options.targetClasses = targetMasks;
options.confidenceThreshold = .3f;

predictor = FritzVision.ImageSegmentation.getPredictor(onDeviceModel, options);

The resulting list will contain 3 classes: Wall, Window, and None.