Style Transfer on iOS

Note

If you haven’t set up the SDK yet, follow the iOS setup or Android setup directions first. You’ll need to add the Core library to the app before using a specific feature API or a custom model.

You can use the FritzVisionStylePredictor to stylize images. Fritz AI provides a variety of options to configure predictions. These instructions will help you get Style Transfer running in your app in no time.

1. Build the FritzVisionStylePredictor

To create the style model, you can either include the model in your bundle or download it over the air once the user installs your app.

Include styles in your app bundle

Fritz AI provides packs of gorgeous pre-trained style models. Choose from styles of famous paintings, intricate patterns, or both.

Add the models to your Podfile

Use styles of famous paintings. For previews, visit the About page.

  • Bicentennial Print from America: The Third Century by Roy Lichtenstein
  • Les Femmes d’Alger by Picasso
  • Head of a Clown
  • Horses on the Seashore by Giorgio de Chirico
  • Kaleidoscope
  • Pink and Blue Rhombuses
  • The Poppy Field by Claude Monet
  • Ritmo Plastico by Gino Severini
  • Starry Night by Vincent Van Gogh
  • The Scream by Edvard Munch
  • The Trial by Sidney Nolan

To use these style models, include the Fritz/VisionStyleModel/Paintings pod in your Podfile.

pod 'Fritz/VisionStyleModel/Paintings'

Make sure to install the newly added pod.

pod install

Note

If you’ve built the app with just the core Fritz pod and later add a submodule for a model, you may encounter the error “Cannot invoke initializer for type”. Run pod update and clean your Xcode build to resolve the issue.

Use styles of intricate patterns. For previews, visit the About page.

  • Blue Arrow
  • Comic
  • Filament
  • Green Blocks
  • Lamp Post
  • Mosaic
  • Notre Dame
  • Shades
  • Sketch
  • Snowflake
  • Sprinkles
  • Swirl
  • Tile
  • Vector

To use these style models, include the Fritz/VisionStyleModel/Patterns pod in your Podfile.

pod 'Fritz/VisionStyleModel/Patterns'

Make sure to install the newly added pod.

pod install

Note

If you’ve built the app with just the core Fritz pod and later add a submodule for a model, you may encounter the error “Cannot invoke initializer for type”. Run pod update and clean your Xcode build to resolve the issue.

Define FritzVisionStylePredictor

Choose which FritzVisionStylePredictor you want to use. You can pick a specific style or access a list of all models. Create only one instance of the model and share it across all predictions:

Using a specific style (Swift):

import Fritz
// Painting model
lazy var paintingModel = PaintingStyleModel.Style.starryNight.build()

// Pattern model
lazy var patternModel = PatternStyleModel.Style.filament.build()

Using all styles (Swift):

import Fritz
// For painting models
let paintingModels = FritzVisionStylePredictor.allPaintingModels()

// For pattern models
let patternModels = FritzVisionStylePredictor.allPatternModels()

Using a specific style (Objective-C):

@import Fritz;
FritzVisionStylePredictor *paintingModel = [PaintingStyleModel buildForPainting:starryNight];

FritzVisionStylePredictor *patternModel = [PatternStyleModel buildForPattern:filament];

Using all styles (Objective-C):

@import Fritz;
NSArray *paintingModels = [FritzVisionStylePredictor allPaintingModels];

NSArray *patternModels = [FritzVisionStylePredictor allPatternModels];

Note

Model initialization

It’s important to initialize only one instance of the model so you are not loading the entire model into memory on each prediction. Usually this is a property on a ViewController. When loading the model in a ViewController, the following approaches are recommended:

Lazy-load the model

By lazy-loading the model, you won’t load it until the first prediction. This has the benefit of not prematurely loading the model, but it may make the first prediction take slightly longer.

class MyViewController: UIViewController {
  lazy var model = FritzVisionHumanPoseModelFast()
}

Load model in viewDidLoad

By loading the model in viewDidLoad, you’ll ensure that you’re not loading the model before the view controller is loaded. The model will be ready to go for the first prediction.

class MyViewController: UIViewController {
  var model: FritzVisionHumanPoseModelFast!

  override func viewDidLoad() {
    super.viewDidLoad()
    model = FritzVisionHumanPoseModelFast()
  }
}

Alternatively, you can initialize the model property directly. However, if the ViewController is instantiated by a Storyboard and is the Initial View Controller, its properties will be initialized before the application delegate has run. This can cause the app to crash if the model is loaded before FritzCore.configure() is called.
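To avoid this, configure Fritz as early as possible in the launch sequence. A minimal sketch, assuming the standard setup from the iOS setup directions:

import UIKit
import Fritz

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Configure Fritz before any view controller loads a model.
    FritzCore.configure()
    return true
  }
}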

Using custom style models

You can also create custom style models. See Create and manage custom style models for how to create your own style model.

Add FritzVision to your Podfile

To use custom style models, include the Fritz/Vision pod in your Podfile.

pod 'Fritz/Vision'

Make sure to install the newly added pod.

pod install

Define FritzVisionStylePredictor

Assume you have a custom style model named CustomStyleModel:

Swift:

import Fritz

let styleModel = FritzVisionStylePredictor(model: CustomStyleModel())

Objective-C:

@import Fritz;

FritzVisionStylePredictor *styleModel = [[FritzVisionStylePredictor alloc] initWithIdentifiedModel:[CustomStyleModel new]];

2. Create FritzVisionImage

FritzVisionImage supports different image formats.

  • Using a CMSampleBuffer

    If you are using a CMSampleBuffer from the built-in camera, first create the FritzVisionImage instance:

    Swift:

    let image = FritzVisionImage(buffer: sampleBuffer)

    Objective-C:

    FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithBuffer: sampleBuffer];
    // or
    FritzVisionImage *visionImage = [[FritzVisionImage alloc] initWithImage: uiImage];
    

    The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation for an image. By default, if you assign a FritzVisionImageMetadata, the orientation will be .right:

    Swift:

    image.metadata = FritzVisionImageMetadata()
    image.metadata?.orientation = .left

    Objective-C:

    // Add metadata
    visionImage.metadata = [FritzVisionImageMetadata new];
    visionImage.metadata.orientation = FritzImageOrientationLeft;
    

    Note

    Data passed in from the camera will generally need the orientation set. When using a CMSampleBuffer to create a FritzVisionImage the orientation will change depending on which camera and device orientation you are using.

    When using the back camera in the portrait Device Orientation, the orientation should be .right (the default if you specify FritzVisionImageMetadata on the image). When using the front facing camera in portrait Device Orientation, the orientation should be .left.

    You can initialize the FritzImageOrientation with the AVCaptureConnection to infer orientation (if the Device Orientation is portrait):

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let image = FritzVisionImage(sampleBuffer: sampleBuffer, connection: connection)
        ...
    }
    
  • Using a UIImage

    If you are using a UIImage, create the FritzVisionImage instance:

    let image = FritzVisionImage(image: uiImage)
    

    The image orientation data needs to be properly set for predictions to work. Use FritzVisionImageMetadata to customize the orientation for an image:

    image.metadata = FritzVisionImageMetadata()
    image.metadata?.orientation = .right
    

    Note

    UIImage can have associated UIImageOrientation data (for example when capturing a photo from the camera). To make sure the model is correctly handling the orientation data, initialize the FritzImageOrientation with the image’s image orientation:

    image.metadata?.orientation = FritzImageOrientation(image.imageOrientation)
    

3. Run style transfer

Stylize images

Swift:

guard let stylizedBuffer = try? styleModel.predict(image) else { return }

// Code to work with stylized image here.
// If you're not sure how to use the output image,
// check out the public Fritz AI Studio project.

Objective-C:

[styleModel predict:image completion:^(CVPixelBufferRef result, NSError *error) {
  // Code to work with stylized image here.
  // If you're not sure how to use the output image,
  // check out the public Fritz AI Studio project.
}];
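If you just want to put the result on screen, a minimal sketch (assuming you need a UIImage for a UIImageView; the conversion goes through Core Image and is not Fritz-specific):

import UIKit
import CoreImage

// Convert the stylized CVPixelBuffer into a UIImage for display.
let ciImage = CIImage(cvPixelBuffer: stylizedBuffer)
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
  let stylizedImage = UIImage(cgImage: cgImage)
  // e.g. imageView.image = stylizedImage
}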

Configure style transfer model

Before running style transfer, you can configure the prediction with a FritzVisionStyleModelOptions object.

Settings

  • imageCropAndScaleOption — .scaleFit (default). Crop-and-scale option controlling how the image is resized and cropped for the model.

  • resizeOutputToInputDimensions — false (default). If true, resizes the output image to the dimensions of the input image.

  • flexibleModelDimensions — 1280x720 (default). The input dimensions for a flexible model.

For example, you can configure the prediction to center-crop the image to the model’s input size (640x480):

Swift:

let options = FritzVisionStyleModelOptions()
options.imageCropAndScaleOption = .centerCrop

// Use the options in model prediction.
guard let stylizedBuffer = try? styleModel.predict(image, options: options) else { return }

Objective-C:

FritzVisionStyleModelOptions *options = [FritzVisionStyleModelOptions new];
options.imageCropAndScaleOption = FritzVisionCropAndScaleCenterCrop;
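Likewise, if you want the stylized output to come back at the same dimensions as the image you passed in, set resizeOutputToInputDimensions. A short sketch using the same options object:

let options = FritzVisionStyleModelOptions()
options.resizeOutputToInputDimensions = true

// The output buffer will match the input image's dimensions.
guard let stylizedBuffer = try? styleModel.predict(image, options: options) else { return }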

Note

Style models can be classified as inflexible or flexible.

Inflexible models are trained with input of a fixed size, and as such, can only output results at specific sizes.

Flexible models are trained with input of various sizes and are able to output results at a range of sizes. Lowering the output size results in greater performance at the cost of reduced quality.

You can specify output dimensions when using a flexible model by adding it to the prediction options.

Swift:

let options = FritzVisionStyleModelOptions()
options.flexibleModelDimensions = FlexibleModelDimensions(width: 1080, height: 2560)

Objective-C:

FritzVisionStyleModelOptions *options = [FritzVisionStyleModelOptions new];
options.flexibleModelDimensions = [[FlexibleModelDimensions alloc] initWithWidth:1080 height:2560];

When building your own model, you can make it flexible by training it to accept inputs of varying sizes. For help with this step, please feel free to contact us.

Create and manage custom style models

Train your own style model using the Fritz Style Training Template

If you want to train and use your own custom style transfer model, follow the open source training template on Google Colab.

For a full tutorial on training your custom model, take a look at this blog post on Heartbeat.

After you’ve finished training your model, follow these steps to add it to your iOS app.

1. Create a custom model in the webapp and add to your Xcode project.

For instructions on how to do this, see Core ML.

2. Update your Podfile:

pod 'Fritz/Vision'
pod install

3. Conform your model

For the following instructions, assume your model is named CustomStyleModel.mlmodel.

Create a new file called CustomStyleModel+Fritz.swift and conform your class like so:

import Fritz

extension CustomStyleModel: SwiftIdentifiedModel {

    static let modelIdentifier = "model-id-abcde"

    static let packagedModelVersion = 1
}

For Objective-C, create a category called CustomStyleModel+Fritz and conform your class like so:

// CustomStyleModel+Fritz.h
@import Fritz;

@interface CustomStyleModel(Fritz) <FritzObjcIdentifiedModel>
@end

// CustomStyleModel+Fritz.m
#import "CustomStyleModel+Fritz.h"

@implementation CustomStyleModel(Fritz)
+ (NSString * _Nonnull)fritzModelIdentifier {
    return @"model-id-abcde";
}

+ (NSInteger)fritzPackagedModelVersion {
    return 1;
}
@end

Fetch custom style models by tag

If you create your own styles using the Fritz Style Training Template, you can Add Tags to those models and dynamically load them in the SDK.

When loading style models by tag, you do not need to include them in your bundle:

import Fritz

FritzVisionStylePredictor.fetchStyleModelsForTags(tags: ["my-custom-style-models", "impressionists"]) { models, error in
    guard let styleModels = models, error == nil else {
        return
    }
    // Instantiated models live here.
}

Note

If you have many Style Models for a given tag query, loading all models into memory could cause memory pressure. In this case, you can load the models on demand.

1. Get FritzManagedModels from ModelTagManager

For detailed instructions on how to get the list of FritzManagedModel, refer to Querying Tags in the iOS SDK.

2. Instantiate FritzStyleModel from FritzManagedModel

You should get a FritzManagedModel list containing all models matching your tag query. From those models, you can instantiate a single instance of the style model as you need.

import Fritz

var styleModel: FritzVisionStylePredictor?

// Get managed models for tags that have been previously loaded.
let tagManager = ModelTagManager(["style-transfer", "custom-models"])
let managedModels = tagManager.getManagedModelsforTags()

for managedModel in managedModels {
    managedModel.fetchModel { mlmodel, error in
        guard let fritzMLModel = mlmodel, error == nil else {
            return
        }
        do {
            styleModel = try FritzVisionStylePredictor(model: fritzMLModel)
        } catch {
            // Errors may occur if the style model does not contain the appropriate inputs and outputs.
        }
    }
}

In this example, styleModel is overwritten by each model as the loop iterates; in practice, you could instead have a user choose from the list of models returned by the tag query, as sketched below.
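A minimal sketch of that idea, keeping the fetched predictors around and selecting one by index (the selectStyle(at:) helper is illustrative, not part of the SDK):

import Fritz

var availableStyles: [FritzVisionStylePredictor] = []

FritzVisionStylePredictor.fetchStyleModelsForTags(tags: ["style-transfer", "custom-models"]) { models, error in
    guard let models = models, error == nil else { return }
    availableStyles = models
}

// Illustrative helper: called when the user picks a style from a list in your UI.
func selectStyle(at index: Int) -> FritzVisionStylePredictor? {
    guard availableStyles.indices.contains(index) else { return nil }
    return availableStyles[index]
}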

Access model metadata

Metadata is easily accessible on the FritzVisionStylePredictor object. To add metadata to models, read Adding Metadata to Models:

import Fritz

// Assume you have an instantiated Style Transfer model
let styleModel = FritzVisionStylePredictor(model: ...)
let metadata: [String:String]? = styleModel.metadata
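For example, if you attached a “name” entry to the model in the webapp (the key here is illustrative; use whichever keys you actually added), you could surface it in your UI:

// "name" is an assumed key added via Adding Metadata to Models.
let styleName = metadata?["name"] ?? "Untitled style"
print("Loaded style: \(styleName)")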

Additional style model resources