TensorFlow Lite

Currently we support TensorFlow Lite for devices running Android 5.0 (Lollipop, SDK version 21) and higher.

Note

If you haven’t set up the SDK yet, go through the setup directions first. You’ll need to add the Core library to the app before using a specific feature API or a custom model. Follow the iOS setup or Android setup directions.
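
For reference, the Core setup on Android comes down to initializing the SDK once in your Application class. The snippet below is only a sketch of that step; the exact Fritz.configure call and the "<your api key>" placeholder should be taken from the Android setup directions for your SDK version.

import android.app.Application;

import ai.fritz.core.Fritz;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Initialize the Fritz Core SDK with the API key from your account
        // (placeholder value -- copy the real call from the setup directions).
        Fritz.configure(this, "<your api key>");
    }
}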


1. Add Fritz’s TFLite Library

Edit your app/build.gradle file:

Allow libraries from our repository:

repositories {
    maven { url "https://raw.github.com/fritzlabs/fritz-repository/master" }
}

Add Fritz to your gradle dependencies and resync the project:

dependencies {
    implementation 'ai.fritz:core:3+'
    implementation 'ai.fritz:custom-model-tflite:3+'
}

2. Register the FritzCustomModelService in your Android Manifest

In order to monitor and update your models, we’ll need to register the JobService to listen for changes and handle over the air (OTA) updates.

To do this, open up your AndroidManifest.xml and add these lines:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="ai.fritz.fritzsdkapp">

    <!-- Fritz SDK internet permissions -->
    <uses-permission android:name="android.permission.INTERNET" />

    <application>
        ...

        <!-- Register FritzCustomModelService to handle over the air (OTA) model updates -->
        <service
            android:name="ai.fritz.core.FritzCustomModelService"
            android:exported="true"
            android:permission="android.permission.BIND_JOB_SERVICE" />
    </application>
</manifest>

3. Create an interpreter for your model

You can either include your model with the app or download it when the app runs (the latter is recommended, as it reduces the size of your APK).

To find your model id, go to Your Project > Project Settings > Your App > Show API Key in the webapp.

To include your model on-device:
  • Add your TensorFlow Lite model (.tflite) to your project’s assets folder (see the note on asset compression below).

  • Create a new FritzOnDeviceModel class linking to your included model

    FritzOnDeviceModel onDeviceModel = new FritzOnDeviceModel("file:///android_asset/mnist.tflite", "<your model id>", 1);
    FritzTFLiteInterpreter interpreter = new FritzTFLiteInterpreter(onDeviceModel);
    

    You may also define the model as a separate class (you can download this class from the webapp under Custom Model > Your Model > SDK Instructions):

    public class MnistCustomModel extends FritzOnDeviceModel {
        private static final String MODEL_PATH = "file:///android_asset/mnist.tflite";
        private static final String MODEL_ID = "<your model id>";
        private static final int MODEL_VERSION = 1;

        public MnistCustomModel() {
            super(MODEL_PATH, MODEL_ID, MODEL_VERSION);
        }
    }

    FritzTFLiteInterpreter interpreter = new FritzTFLiteInterpreter(new MnistCustomModel());
    
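    Note

    Depending on your Android Gradle Plugin version, .tflite files in the assets folder may be compressed at build time, which can prevent the model from being memory-mapped. A common TensorFlow Lite workaround (not specific to Fritz) is to keep these assets uncompressed in your app/build.gradle:

    android {
        aaptOptions {
            noCompress "tflite"
        }
    }
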
To download your model at runtime:
  • Create a FritzManagedModel with your model id, pass it to a FritzModelManager, and call loadModel to fetch the on-device model.

    FritzTFLiteInterpreter tflite;
    
    FritzManagedModel managedModel = new FritzManagedModel("<your model id>");
    FritzModelManager modelManager = new FritzModelManager(managedModel);
    modelManager.loadModel(new ModelReadyListener() {
        @Override
        public void onModelReady(FritzOnDeviceModel onDeviceModel) {
            tflite = new FritzTFLiteInterpreter(onDeviceModel);
            Log.d(TAG, "Interpreter is now ready to use");
        }
    });
    

    Note

    To download the model over Wi-Fi only, pass the useWifi parameter to loadModel (by default, the model can be downloaded over any network connection).

    modelManager.loadModel(new ModelReadyListener() {
        @Override
        public void onModelReady(FritzOnDeviceModel onDeviceModel) {
            tflite = new FritzTFLiteInterpreter(onDeviceModel);
            Log.d(TAG, "Interpreter is now ready to use");
        }
    }, true);
    

4. Run predictions with the interpreter

From here, you can run predictions with the interpreter once it’s ready for use.

// Run prediction with input / output buffers
tflite.run(inputBuffer, outputBuffer);
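
For the MNIST example above, preparing those buffers might look like the sketch below. The shapes used (a 28 x 28 grayscale float input and 10 output classes) are assumptions for illustration; match them to your own model.

// Assumed MNIST-style shapes: a 28x28 grayscale float input and 10 class scores out.
// Uses java.nio.ByteBuffer and java.nio.ByteOrder.
float[] pixels = new float[28 * 28];
// ... fill pixels with your preprocessed image data ...

ByteBuffer inputBuffer = ByteBuffer.allocateDirect(28 * 28 * 4);  // 4 bytes per float
inputBuffer.order(ByteOrder.nativeOrder());
for (float pixel : pixels) {
    inputBuffer.putFloat(pixel);
}

float[][] outputBuffer = new float[1][10];  // one row of 10 class scores

tflite.run(inputBuffer, outputBuffer);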

Note

Under the hood, FritzTFLiteInterpreter provides the same methods as TFLite’s Interpreter.
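
For example, if you are unsure which shapes your model expects, the standard Interpreter accessors can be used through the Fritz wrapper (assuming your TensorFlow Lite version exposes them):

// Inspect the tensor shapes the model expects (standard TFLite Interpreter methods).
int[] inputShape = tflite.getInputTensor(0).shape();    // e.g. [1, 28, 28, 1]
int[] outputShape = tflite.getOutputTensor(0).shape();  // e.g. [1, 10]
Log.d(TAG, "input shape: " + Arrays.toString(inputShape)
        + ", output shape: " + Arrays.toString(outputShape));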

5. Track the performance of your model

Run your app with the prediction code above to track model performance. Afterwards, the measurements appear in the webapp under Dashboard:

Average time / prediction (ms) chart