Style Transfer

note

If you haven't set up the SDK yet, follow the iOS setup or Android setup directions first. You'll need to add the Core library to the app before using a specific feature API or a custom model.

Style Transfer In Action

Use Style Transfer to bring real-time artistic style transfer to your apps. Transform photos with beautiful patterns or turn them into masterpieces painted by history's greatest artists. Models run in real time on device, fast enough to stylize live video.

Custom Training Models for Style Transfer

You can train a custom model that is compatible with the Style Transfer API using Fritz AI Studio.

Pre-trained Models

Choose from 25 different pre-trained styles. These styles are trained on famous paintings and patterns.

Paintings

  • Bicentennial Print from America: The Third Century by Roy Lichtenstein
  • Les Femmes d'Alger by Picasso
  • Head of a Clown
  • Horses on the Seashore by Giorgio de Chirico
  • The Poppy Field by Claude Monet
  • Ritmo Plastico by Gino Severini
  • Starry Night by Vincent van Gogh
  • The Scream by Edvard Munch
  • The Trial by Sidney Nolan

Patterns

  • Blue Arrow
  • Comic
  • Filament
  • Green Blocks
  • Kaleidoscope
  • Lamp Post
  • Mosaic
  • Notre Dame
  • Pink and Blue Rhombuses
  • Shades
  • Sketch
  • Snowflake
  • Sprinkles
  • Swirl
  • Tile
  • Vector

Technical Specifications

| Architecture | Format(s) | Model Size | Input | Output | Benchmarks |
| --- | --- | --- | --- | --- | --- |
| Fast Style Transfer | Core ML (iOS), TensorFlow Lite (Android) | 17 KB (8-bit quantization); 467 KB (stabilized) | Arbitrary-size images (iOS 12); 640x480-pixel image (Android) | Stylized image | 28 FPS on iPhone X, 2 FPS on Pixel 2 |

Stable Style Transfer

For video, visually stabilized style transfer creates beautiful continuity between frames.

Android

Fritz AI provides an Android API that you can use to transform images or live video into beautiful works of art. You can stylize an image from the photo gallery or transform live video captured by the device's camera. Follow the directions below to apply one of the pre-trained styles in your app.

1. Add the dependencies via Gradle

Add our repository in order to download the Vision API:

repositories {
    maven { url "https://fritz.mycloudrepo.io/public/repositories/android" }
}

Add RenderScript support and include the vision dependency in app/build.gradle. RenderScript improves image processing performance. You'll also need to specify aaptOptions to prevent TensorFlow Lite models from being compressed.

android {
    defaultConfig {
        renderscriptTargetApi 21
        renderscriptSupportModeEnabled true
    }

    // Don't compress included TensorFlow Lite models on build.
    aaptOptions {
        noCompress "tflite"
    }
}

dependencies {
    implementation 'ai.fritz:vision:+'
}

(Optional: include the model in your app) To include the style transfer models with your build, add the dependency as shown below. Note: this packages the models with your app when you publish it to the Play Store and will increase your app size.

dependencies {
    implementation 'ai.fritz:vision-style-painting-models:{ANDROID_MODEL_VERSION}'
}

Now you're ready to transform images with the Style Transfer API.

2. Get a style predictor

In order to use the predictor, the on-device model must first be loaded. If you followed the optional step above and included the ai.fritz:vision-style-painting-models dependency, you can get a predictor to use immediately:

// For painting
PaintingStyleModels paintingModels = FritzVisionModels.getPaintingStyleModels();
// For patterns
PatternStyleModels patternModels = FritzVisionModels.getPatternStyleModels();
FritzOnDeviceModel styleOnDeviceModel = paintingModels.getStarryNight();
FritzVisionStylePredictor predictor = FritzVision.StyleTransfer.getPredictor(styleOnDeviceModel);

If you did not include the on-device model with your app, you'll have to load the model before you can get a predictor. To do that, you'll use a managed model and call FritzVision.StyleTransfer.loadPredictor to start the model download.

note

Use one of the managed models for the style you'd like to use:

// For Painting models to download
PaintingManagedModels.BICENTENNIAL_PRINT_MANAGED_MODEL
PaintingManagedModels.FEMMES_MANAGED_MODEL
PaintingManagedModels.HEAD_OF_CLOWN_MANAGED_MODEL
PaintingManagedModels.HORSES_ON_SEASHORE_MANAGED_MODEL
PaintingManagedModels.KALEIDOSCOPE_MANAGED_MODEL
PaintingManagedModels.PINK_BLUE_RHOMBUS_MANAGED_MODEL
PaintingManagedModels.POPPY_FIELD_MANAGED_MODEL
PaintingManagedModels.RITMO_PLASTICO_MANAGED_MODEL
PaintingManagedModels.STARRY_NIGHT_MANAGED_MODEL
PaintingManagedModels.THE_SCREAM_MANAGED_MODEL
PaintingManagedModels.THE_TRAIL_MANAGED_MODEL
// For Pattern models to download
PatternManagedModels.BLUE_ARROW_MANAGED_MODEL
PatternManagedModels.CHRISTMAS_LIGHTS_MANAGED_MODEL
PatternManagedModels.COMIC_MANAGED_MODEL
PatternManagedModels.FILAMENT_MANAGED_MODEL
PatternManagedModels.LAMP_POST_MANAGED_MODEL
PatternManagedModels.MOSAIC_MANAGED_MODEL
PatternManagedModels.NOTRE_DAME_MANAGED_MODEL
PatternManagedModels.SHADES_MANAGED_MODEL
PatternManagedModels.SKETCH_MANAGED_MODEL
PatternManagedModels.SNOWFLAKE_MANAGED_MODEL
PatternManagedModels.SPRINKLES_MANAGED_MODEL
PatternManagedModels.SWIRL_MANAGED_MODEL
PatternManagedModels.TILE_MANAGED_MODEL
PatternManagedModels.VECTOR_MANAGED_MODEL
FritzVisionStylePredictor predictor;

FritzManagedModel managedModel = PaintingManagedModels.STARRY_NIGHT_MANAGED_MODEL;
FritzVision.StyleTransfer.loadPredictor(managedModel, new PredictorStatusListener<FritzVisionStylePredictor>() {
    @Override
    public void onPredictorReady(FritzVisionStylePredictor stylePredictor) {
        Log.d(TAG, "Style Transfer predictor is ready");
        predictor = stylePredictor;
    }
});

3. Create a FritzVisionImage from an image or a video stream

To create a FritzVisionImage from a Bitmap:

FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap);

To create a FritzVisionImage from a media.Image object when capturing the result from a camera, first determine the orientation of the image. This will rotate the image to account for device rotation and the orientation of the camera sensor.

// Get the system service for the camera manager
final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);

// Get the first camera id (getCameraIdList returns a String[])
String cameraId = manager.getCameraIdList()[0];

// Determine the rotation for the FritzVisionImage from the camera orientation and the device rotation.
// "this" refers to the calling Context (Application, Activity, etc.)
ImageRotation imageRotationFromCamera = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);

Finally, create the FritzVisionImage object with the rotation:

FritzVisionImage visionImage = FritzVisionImage.fromMediaImage(image, imageRotationFromCamera);

4. Run prediction on FritzVisionImage

Apply a style to your FritzVisionImage to get a FritzVisionStyleResult object.

FritzVisionStyleResult styleResult = predictor.predict(visionImage);

The predict method returns a FritzVisionStyleResult object with the following methods:

FritzVisionStyleResult methods

| Method | Description |
| --- | --- |
| Bitmap toBitmap() | Get the stylized image as a bitmap with the same dimensions as the model input resolution. |
| Bitmap toBitmap(Size scaleToSize) | Get the stylized image as a bitmap scaled to the specified size. |
| void drawToCanvas(Canvas canvas) | Draw the styled image to the canvas. |
| void drawToCanvas(Canvas canvas, Size canvasSize) | Draw the styled image and scale it up to the specified size. |

5. Access the Style Result

FritzVisionStyleResult contains several convenience methods to help draw the image.

Drawing the styled image to a canvas

// Draw the result to the canvas (same size as the input image)
styleResult.drawToCanvas(canvas);

Accessing the bitmap

// Access the styled bitmap
Bitmap styledBitmap = styleResult.toBitmap();

Access a scaled bitmap

// Target canvas size
Size targetSize = new Size(2048, 2048);
Bitmap bitmap = styleResult.toBitmap(targetSize);
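To show the result on screen, set the bitmap on a view. A minimal sketch (imageView here is a hypothetical ImageView from your layout, and prediction is assumed to run off the main thread):

// Post the styled bitmap back to the UI thread for display.
runOnUiThread(() -> imageView.setImageBitmap(styledBitmap));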

Advanced Options

Configuring the Predictor

You can configure the predictor with FritzVisionStylePredictorOptions to return specific results that match the options given:

FritzVisionStylePredictorOptions methods

| Option | Default | Description |
| --- | --- | --- |
| confidenceThreshold | .3f | Return labels above the confidence threshold |
| targetClasses | Defined by the segmentation model | The set of classes the model will look for |

To tune model performance for different devices, you can also set options on the underlying TensorFlow Lite Interpreter.

FritzVisionPredictorOptions methods

| Option | Default | Description |
| --- | --- | --- |
| useGPU | false | Use the GPU for running model inference. Please note, this is an experimental option and should not be used in production apps. |
| useNNAPI | false | Use the NNAPI for running model inference. Please note, this is an experimental option and should not be used in production apps. |
| numThreads | 2 | For CPU only, run model inference using the specified number of threads. |

For more details, please visit the Official TensorFlow Lite documentation.
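As a minimal sketch of setting these options (assuming the option names above are exposed as public fields on FritzVisionStylePredictorOptions and that getPredictor accepts an options argument; check the SDK reference for the exact API):

// Hedged sketch: option names match the table above, but the exact
// setter API (fields vs. builder methods) may differ by SDK version.
FritzVisionStylePredictorOptions options = new FritzVisionStylePredictorOptions();
options.numThreads = 4;   // CPU-only inference threads
options.useNNAPI = false; // experimental; keep off in production
FritzVisionStylePredictor predictor = FritzVision.StyleTransfer.getPredictor(styleOnDeviceModel, options);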

Example (Crop and scale option):

By setting FritzVisionCropAndScale on FritzVisionStylePredictorOptions, you can control preprocessing on the image.

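As a hedged sketch (the cropAndScaleOption field and the CENTER_CROP constant are assumptions; check FritzVisionCropAndScale in the SDK for the exact names), center-cropping the input might look like:

// Hypothetical constant name; FritzVisionCropAndScale defines the available strategies.
FritzVisionStylePredictorOptions options = new FritzVisionStylePredictorOptions();
options.cropAndScaleOption = FritzVisionCropAndScale.CENTER_CROP;
FritzVisionStylePredictor predictor = FritzVision.StyleTransfer.getPredictor(styleOnDeviceModel, options);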

Customize

If you want to train and use your own custom style transfer model, follow the [open source training template on Google Colab](https://colab.research.google.com/drive/1nDkxLKBgZGFscGoF0tfyPMGqW03xITl0).

For a full tutorial on training your custom model, take a look at this blog post on Heartbeat.

After you've finished training your model, follow these steps to add it to your Android app.

  1. Add your optimized TensorFlow Lite model (.tflite) to your app's assets folder.
  2. Upload your custom model to Fritz's webapp.
  3. Use the model id from Step 2 and initialize a predictor to use in your app:

FritzOnDeviceModel onDeviceModel = new FritzOnDeviceModel(
        "file:///android_asset/custom_style_transfer_graph.tflite",
        "<Your model id from step 2>",
        1);
predictor = FritzVision.StyleTransfer.getPredictor(onDeviceModel);

  4. Use your predictor the same way as described above with predictor.predict(visionImage).

iOS

1. Build the FritzVisionStylePredictor

To create the style model, you can either include the model in your bundle or download it over the air once the user installs your app.

Include styles in your app bundle

Fritz AI provides packs of gorgeous pre-trained style models. Choose from styles of famous paintings, intricate patterns, or both.

Add the models to your Podfile

To use these style models, include the Fritz/VisionStyleModel/Paintings pod in your Podfile.

pod 'Fritz/VisionStyleModel/Paintings'

Then install the newly added pod:

pod install
note

If you've built the app with just the core Fritz pod and then add a new submodule for the model, you may encounter a "Cannot invoke initializer for type" error. To fix this, run pod update and clean your Xcode build.

Define FritzVisionStylePredictor

Choose which FritzVisionStylePredictor you want to use. You can choose a specific style or access a list of all models. There should only be one instance of the model that is shared across all predictions:

Using a specific style:

import Fritz
// Painting model
lazy var paintingModel = PaintingStyleModel.Style.starryNight.build()
// Pattern model
lazy var patternModel = PatternStyleModel.Style.filament.build()

Using all styles:

import Fritz
// For painting models
let paintingModels = FritzVisionStylePredictor.allPaintingModels()
// For pattern models
let patternModels = FritzVisionStylePredictor.allPatternModels()
Model initialization

It's important to initialize one instance of the model so you are not loading the entire model into memory on each model execution. Usually this is a property on a ViewController. When loading the model in a ViewController, the following approaches are recommended:

Lazy-load the model

By lazy-loading the model, you won't load the model until the first prediction. This has the benefit of not prematurely loading the model, but it may make the first prediction take slightly longer.

class MyViewController: UIViewController {
    lazy var model = PaintingStyleModel.Style.starryNight.build()
}

Load model in viewDidLoad

By loading the model in viewDidLoad, you'll ensure that you're not loading the model before the view controller is loaded. The model will be ready to go for the first prediction.

class MyViewController: UIViewController {
    var model: FritzVisionStylePredictor!

    override func viewDidLoad() {
        super.viewDidLoad()
        model = PaintingStyleModel.Style.starryNight.build()
    }
}

Alternatively, you can initialize the model property directly. However, if the ViewController is instantiated by a Storyboard and is the Initial View Controller, the properties will be initialized before the appDelegate function is called. This can cause the app to crash if the model is loaded before FritzCore.configure() is called.
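For reference, a minimal sketch of calling FritzCore.configure() early in the app delegate (the surrounding delegate code is the standard UIKit template, not Fritz-specific API):

import UIKit
import Fritz

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Configure Fritz before any view controller loads a model.
        FritzCore.configure()
        return true
    }
}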

Add FritzVision to your Podfile

To use a custom style model, include the Fritz/Vision pod in your Podfile.

pod 'Fritz/Vision'

Then install the newly added pod:

pod install

Define FritzVisionStylePredictor

Assume you have a custom style model named CustomStyleModel:

import Fritz
let styleModel = FritzVisionStylePredictor(model: CustomStyleModel())

2. Create FritzVisionImage

FritzVisionImage supports different image formats.

Using a CMSampleBuffer

If you are using a CMSampleBuffer from the built-in camera, first create the FritzVisionImage instance:

Swift:

let image = FritzVisionImage(buffer: sampleBuffer)

Objective-C:

FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithBuffer:sampleBuffer];
// or
FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithImage:uiImage];

The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation for an image. By default, if you specify FritzVisionImageMetadata, the orientation will be .right:

Swift:

image.metadata = FritzVisionImageMetadata()
image.metadata?.orientation = .left

Objective-C:

// Add metadata
visionImage.metadata = [FritzVisionImageMetadata new];
visionImage.metadata.orientation = FritzImageOrientationLeft;

Setting the Orientation from the Camera

Data passed in from the camera will generally need the orientation set. When using a CMSampleBuffer to create a FritzVisionImage the orientation will change depending on which camera and device orientation you are using.

When using the back camera in the portrait Device Orientation, the orientation should be .right (the default if you specify FritzVisionImageMetadata on the image). When using the front facing camera in portrait Device Orientation, the orientation should be .left.

You can initialize the FritzImageOrientation with the AVCaptureConnection to infer orientation (if the Device Orientation is portrait):

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let image = FritzVisionImage(sampleBuffer: sampleBuffer, connection: connection)
    ...
}

Using a UIImage

If you are using a UIImage, create the FritzVisionImage instance:

let image = FritzVisionImage(image: uiImage)

The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation for an image:

image.metadata = FritzVisionImageMetadata()
image.metadata?.orientation = .right
Set the image orientation

UIImage can have associated UIImageOrientation data (for example when capturing a photo from the camera). To make sure the model is correctly handling the orientation data, initialize the FritzImageOrientation with the image's image orientation:

image.metadata?.orientation = FritzImageOrientation(image.imageOrientation)

3. Run style transfer

guard let stylizedBuffer = try? styleModel.predict(image) else { return }
// Code to work with stylized image here.
// If you're not sure how to use the output image,
// check out the public Fritz AI Studio project.
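If you need a UIImage for display, one approach is to convert the returned buffer with Core Image. A minimal sketch, assuming the prediction output is a CVPixelBuffer and imageView is a hypothetical UIImageView in your view controller:

import CoreImage
import UIKit

// Convert the stylized pixel buffer into a UIImage for display.
let ciImage = CIImage(cvPixelBuffer: stylizedBuffer)
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    imageView.image = UIImage(cgImage: cgImage) // imageView: hypothetical outlet
}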

Configure style transfer model

Before running style transfer, you can configure the prediction with a FritzVisionStyleModelOptions object.

Settings

| Option | Default | Description |
| --- | --- | --- |
| imageCropAndScaleOption | .scaleFit | Crop and scale option for how to resize and crop the image for the model. |
| resizeOutputToInputDimensions | false | If true, resizes the output image to the dimensions of the input image. |
| flexibleModelDimensions | 1280x720 | The input dimensions for a flexible model. |

For example, you can configure the model to center-crop the image to the input size (640x480):

let options = FritzVisionStyleModelOptions()
options.imageCropAndScaleOption = .centerCrop
// Use the options in model prediction.
guard let stylizedBuffer = try? styleModel.predict(image, options: options) else { return }
note

Style models can be classified as inflexible or flexible.

  • Inflexible models are trained with input of a fixed size, and as such, can only output results at specific sizes.
  • Flexible models are trained with input of various sizes and are able to output results at a range of sizes. Lowering the output size results in greater performance at the cost of reduced quality.

You can specify output dimensions when using a flexible model by adding it to the prediction options.

let options = FritzVisionStyleModelOptions()
options.flexibleModelDimensions = FlexibleModelDimensions(width: 1080, height: 2560)

When building your model, you can make it flexible by training on inputs of different sizes.

Customize

Train your own style model using the Fritz Style Training Template

If you want to train and use your own custom style transfer model, follow the open source training template on Google Colab. For a full tutorial on training your custom model, take a look at this blog post on Heartbeat. After you've finished training your model, follow these steps to add it to your iOS app.

1. Create a custom model in the webapp and add to your Xcode project.

For instructions on how to do this, see the directions for adding a custom Core ML model.

2. Update your Podfile:

pod 'Fritz/Vision'
pod install
3. Conform your model

For the following instructions, assume your model is named CustomStyleModel.mlmodel.

CustomStyleModel+Fritz.swift

import Fritz

extension CustomStyleModel: SwiftIdentifiedModel {
    static let modelIdentifier = "model-id-abcde"
    static let packagedModelVersion = 1
}

Fetch custom style models by tag

Only available on Growth plans

For more information on plans and pricing, visit our website.

When loading style models by tag, you do not need to include them in your bundle:

import Fritz

FritzVisionStylePredictor.fetchStyleModelsForTags(tags: ["my-custom-style-models", "impressionists"]) { models, error in
    guard let styleModels = models, error == nil else {
        return
    }
    // Instantiated models live here.
}
note

If you have many Style Models for a given tag query, loading all models into memory could cause memory pressure. In this case, you can load the models on demand.

  1. Get FritzManagedModels from ModelTagManager

For detailed instructions on how to get the list of FritzManagedModel objects, refer to Querying Tags in the iOS SDK.

  2. Instantiate a FritzVisionStylePredictor from a FritzManagedModel

You should get a FritzManagedModel list containing all models matching your tag query. From those models, you can instantiate a single instance of the style model as you need.

import Fritz

var styleModel: FritzVisionStylePredictor?

// Get managed models for tags that have been previously loaded.
let tagManager = ModelTagManager(["style-transfer", "custom-models"])
let managedModels = tagManager.getManagedModelsforTags()

for managedModel in managedModels {
    managedModel.fetchModel { mlmodel, error in
        guard let fritzMLModel = mlmodel, error == nil else {
            return
        }
        do {
            styleModel = try FritzVisionStylePredictor(model: fritzMLModel)
        } catch {
            // Errors may occur if the style model does not contain the appropriate inputs and outputs.
        }
    }
}

In this example, the single style model is replaced on each iteration of the loop, but you could instead let a user choose from the list of models returned by the tag query.

Access model metadata

Metadata is easily accessible on the FritzVisionStylePredictor object. To add metadata to models, read Adding Metadata to Models:

import Fritz
// Assume you have an instantiated Style Transfer model
let styleModel = FritzVisionStylePredictor(model: ...)
let metadata: [String:String]? = styleModel.metadata

Additional Resources

For a full walkthrough on how to implement style transfer, follow this tutorial on Heartbeat.