Android

Fritz provides an Android API that you can use to partition an image into segments, each labeled with the everyday object it represents. Follow these instructions to bring image segmentation to your app in no time.

Note

If you haven’t set up the SDK yet, make sure to go through the SDK setup instructions first. You’ll need to add the Core library to the app before using any of the feature or custom model libraries.

1. Add FritzVisionSegmentationModel via Gradle

Assuming that you’ve already included the Core SDK and the fritz repository, you’ll also want to include the image segmentation dependency. You may choose from the following options:

In app/build.gradle:

To identify people (segments that represent people are marked in Cyan):

dependencies {
    implementation 'ai.fritz:core:1.3.2'
    implementation 'ai.fritz:vision-people-segmentation:1.3.2'
}

To identify the following objects in your living room (with the colors that represent each segment in the final result):

  • Chair (Sandy Brown)
  • Wall (White)
  • Coffee Table (Brown)
  • Ceiling (Light Gray)
  • Floor (Dark Gray)
  • Bed (Light Blue)
  • Lamp (Yellow)
  • Sofa (Red)
  • Window (Cyan)
  • Pillow (Beige)

dependencies {
    implementation 'ai.fritz:core:1.3.2'
    implementation 'ai.fritz:vision-living-room-segmentation:1.3.2'
}

To identify the following objects outside (with the colors that represent each segment in the final result):

  • Building / Edifice (Gray)
  • Sky (Very Light Blue)
  • Tree (Green)
  • Sidewalk / Pavement (Dark Gray)
  • Earth / Ground (Dark Green)
  • Car (Light Orange)
  • Water (Blue)
  • House (Purple)
  • Fence, Fencing (White)
  • Signboard, Sign (Pink)
  • Skyscraper (Light Gray)
  • Bridge, Span (Orange)
  • River (Light Blue)
  • Bus (Dark Orange)
  • Truck / Motortruck (Dark Brown)
  • Van (Light Orange)
  • Minibike / Motorbike (Black)
  • Bicycle (Dark Blue)
  • Traffic Light (Yellow)
  • Person (Cyan)

dependencies {
    implementation 'ai.fritz:core:1.3.2'
    implementation 'ai.fritz:vision-outdoor-segmentation:1.3.2'
}

2. Get a Segmentation Predictor

Depending on the library you’ve included above, create the corresponding predictor.

For the People Segment Library:

// "this" refers to the calling Context (Application, Activity, etc)
FritzVisionSegmentPredictor segmentPredictor = new FritzVisionPeopleSegmentPredictor(this);

For the Living Room Segment Library:

// "this" refers to the calling Context (Application, Activity, etc)
FritzVisionSegmentPredictor segmentPredictor = new FritzVisionLivingRoomSegmentPredictor(this);

For the Outdoor Segment Library:

// "this" refers to the calling Context (Application, Activity, etc)
FritzVisionSegmentPredictor segmentPredictor = new FritzVisionOutdoorSegmentPredictor(this);
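
Creating a predictor can involve loading the model, so it’s usually best to create it once and reuse it for every frame rather than constructing a new one per prediction. A minimal sketch, assuming a plain Activity (the class and field names are illustrative, not part of the Fritz API):

import android.app.Activity;
import android.os.Bundle;

public class SegmentationActivity extends Activity {

    // Keep a single predictor instance and reuse it across frames
    private FritzVisionSegmentPredictor segmentPredictor;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // "this" is the calling Context, as in the snippets above
        segmentPredictor = new FritzVisionPeopleSegmentPredictor(this);
    }
}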

3. Create a FritzVisionImage from an image or a video stream

  • To create a FritzVisionImage from a Bitmap:

    FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap);
    
  • To create a FritzVisionImage from a media.Image object captured from the camera, first determine the rotation of the image. This rotation is used to correct the image for the device rotation and the orientation of the camera sensor.

    // Get the system service for the camera manager
    final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    
    // Get the first camera id (note: getCameraIdList() can throw CameraAccessException)
    String cameraId = manager.getCameraIdList()[0];
    
    // Determine the rotation for the FritzVisionImage from the camera orientation and the device rotation.
    // "this" refers to the calling Context (Application, Activity, etc)
    int imageRotationFromCamera = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);
    

    Finally, create the FritzVisionImage object with the rotation:

    FritzVisionImage visionImage = FritzVisionImage.fromMediaImage(image, imageRotationFromCamera);
    
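    If you’re capturing frames with the Camera2 API, the media.Image typically comes from an ImageReader callback. A minimal sketch under that assumption (the reader setup is standard Android code, not part of the Fritz SDK, and imageRotationFromCamera is assumed to be accessible, e.g. stored in a field):

    import android.media.Image;
    import android.media.ImageReader;

    ImageReader.OnImageAvailableListener listener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            // Grab the most recent frame; this can be null if no frame is ready yet
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
            FritzVisionImage visionImage = FritzVisionImage.fromMediaImage(image, imageRotationFromCamera);
            // ... run prediction on visionImage (see the next step) ...
            // Close the Image once you're done with it so the reader can deliver new frames
            image.close();
        }
    };
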

4. Run prediction on FritzVisionImage

Pass your FritzVisionImage into the predictor to segment the image. Running predict on the image returns a list of FritzVisionSegment objects. Each FritzVisionSegment contains the location of a segment on the original image and the predicted classification for that segment.

List<FritzVisionSegment> segments = segmentPredictor.predict(visionImage);
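
Model inference can take longer than a single frame, so you’ll often want to run prediction off the UI thread. Below is a minimal sketch using a plain ExecutorService; the executor is ordinary Java rather than part of the Fritz API, and it assumes segmentPredictor and visionImage are accessible (e.g. stored as fields):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A single background thread dedicated to running inference
ExecutorService inferenceExecutor = Executors.newSingleThreadExecutor();

inferenceExecutor.execute(new Runnable() {
    @Override
    public void run() {
        // Run prediction off the UI thread
        List<FritzVisionSegment> segments = segmentPredictor.predict(visionImage);
        // ... draw the segments (see below) and post the result back to the UI thread ...
    }
});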

Overlay the list of FritzVisionSegment objects on a Canvas to see the output.

// Back the canvas with a mutable bitmap so the result can be displayed later
Bitmap outputBitmap = Bitmap.createBitmap(
        visionImage.getBitmap().getWidth(),
        visionImage.getBitmap().getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(outputBitmap);

// Draw the original image to the canvas
FritzVisionImage.drawOnCanvas(visionImage.getBitmap(), canvas);

// Draw each segment to the canvas using default colors
for (FritzVisionSegment segment : segments) {
    segment.drawOnCanvas(canvas);
}
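
To show the result on screen, hand the backing bitmap to an ImageView. A minimal sketch, assuming your layout contains an ImageView with the id segmentation_result (the view id is an assumption for illustration):

import android.widget.ImageView;

// Display the segmented output (R.id.segmentation_result is a hypothetical id,
// not part of the Fritz SDK)
ImageView resultView = (ImageView) findViewById(R.id.segmentation_result);
resultView.setImageBitmap(outputBitmap);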

Note

To target only specific classes (e.g. Wall / Window), create a FritzVisionSegmentPredictorOptions object to pass into the predictor.

Initialize the options when creating the segment predictor:

// List the segments to target
List<SegmentClass> targetSegmentClasses = new ArrayList<>();
targetSegmentClasses.add(SegmentClass.WALL);
targetSegmentClasses.add(SegmentClass.WINDOW);

// Create predictor options with a confidence threshold.
// If it's below the confidence threshold, the segment will be marked
// as SegmentClass.NONE.
FritzVisionSegmentPredictorOptions options = new FritzVisionSegmentPredictorOptions.Builder()
        .targetSegmentClasses(targetSegmentClasses)
        .targetConfidenceThreshold(.3f)
        .build();

FritzVisionSegmentPredictor segmentPredictor = new FritzVisionLivingRoomSegmentPredictor(this, options);

List<FritzVisionSegment> segments = segmentPredictor.predict(visionImage);

The resulting list of segments will contain 3 classes: Wall, Window, and None.