Benchmark Model Performance

The Benchmark utility analyzes your model and layer construction to estimate mobile performance.

Example Model Grade Report (a few layers not pictured)

Supported Platforms

Currently, benchmarking is only supported for Keras models. It provides a compatibility report and runtime prediction for Core ML. If you are looking to understand other frameworks, please :doc:`contact us </help-center/index>` and we will be happy to help.

Benchmark using the Fritz CLI

Using the Fritz CLI, you can easily benchmark a Keras Model.

If you have not done so, please :ref:`setup_python_library_ref` before continuing.

The report will summarize all model layers and give an estimated runtime:

$ fritz model benchmark <path to keras model.h5>

Fritz Model Grade Report
Core ML Compatible: True
Predicted Runtime (iPhone X): 31.4 ms (31.9 fps)
Total MFLOPS: 686.90
Total Parameters: 1,258,580
Fritz Version ID: <Version UID>

To get an existing grade report, specify the version UID of a previously uploaded model:

$ fritz model benchmark --version-uid <Version UID>
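The predicted frames-per-second figure in the report is just the inverse of the per-frame runtime. A quick arithmetic check in plain Python (no Fritz required; small rounding differences from the report are expected):

```python
# Convert a predicted per-frame runtime in milliseconds to frames per second.
runtime_ms = 31.4  # value from the example report above
fps = 1000.0 / runtime_ms
print(f"{fps:.1f} fps")  # → 31.8 fps
```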

Benchmark inside Python code

You can also benchmark models directly from your Python code.

If you have not done so, please :ref:`setup_python_library_ref` before continuing.

1. Load the Keras Model and create a KerasFile object

First, load the Keras model into memory and create a KerasFile object. KerasFile is a subclass of FrameworkFileBase, which provides a standard interface for serializing and deserializing models from various frameworks.

import keras
import fritz
model = ... # a loaded keras model
fritz_keras_file = fritz.frameworks.KerasFile(model)

2. Create a new ModelVersion

Next, upload the Keras model you wish to benchmark to Fritz. Uploading a new version triggers the benchmarking process.

filename = "<Name of saved keras file.h5>"
# NOTE: the keyword arguments below are illustrative; consult the Fritz
# Python library documentation for the exact ModelVersion.create signature.
model_version = fritz.ModelVersion.create(
    filename=filename,
    keras_file=fritz_keras_file,
)

3. Run Benchmark

Finally, run the benchmark to generate and view the grade report:

grade_report = model_version.benchmark()
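The report's Total Parameters figure can be sanity-checked against a manual count. For a standard Conv2D layer, the parameter count is (kernel_h × kernel_w × channels_in + 1) × channels_out, where the +1 accounts for the bias term. A quick check in plain Python, using hypothetical layer dimensions:

```python
# Parameter count of a single Conv2D layer: one weight per kernel
# position per input channel, plus one bias per output filter.
def conv2d_params(kernel_h, kernel_w, channels_in, channels_out):
    return (kernel_h * kernel_w * channels_in + 1) * channels_out

# Hypothetical example: a 3x3 convolution mapping 32 channels to 64.
print(conv2d_params(3, 3, 32, 64))  # → 18496
```

Summing this kind of per-layer count across the network should agree with the report's total for models built from standard layers.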