Dataset Collection System


The Dataset Collection System enables developers to capture images, model predictions, and user-generated annotations directly from their mobile app.

Core Concepts

Data is a critical component of every machine learning workflow, and the best data comes from the real world, where ML models actually make their predictions. For mobile machine learning, that means the device itself. The Dataset Collection System creates a data feedback loop: developers can work with users to see what their models predict, what those users expect the models to predict, and the difference between the two.

Below are some core concepts to help you get started.

User-generated Image

User-generated images are captured during real-world use of your application. Because they are collected from the end user, they are most representative of how your model is used in production, making them extremely valuable for monitoring model accuracy and retraining models over time.

Model-predicted Annotations

Model-predicted annotations are annotations (keypoints, bounding boxes, etc.) generated from predictions made with user-generated images. By pairing images with model predictions, developers and product managers gain full visibility into what users experience when using an app in production.

User-generated Annotations

User-generated annotations are annotations (keypoints, bounding boxes, etc.) submitted by a user. They represent the model output a user expected. User-generated annotations complement model-predicted annotations by allowing app makers to measure the gap between actual and expected behavior. User-generated annotations also function as ground-truth data for model retraining.
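To make the gap between actual and expected behavior concrete, here is a minimal sketch that scores how far a user-corrected annotation deviates from the model's prediction. The Keypoint struct and meanKeypointError helper are illustrative stand-ins, not Fritz SDK types.

import CoreGraphics

// Illustrative stand-in for a single keypoint annotation; not a Fritz SDK type.
struct Keypoint {
    let name: String
    let position: CGPoint
}

// Mean pixel distance between predicted and user-corrected keypoints:
// a simple per-image measure of the gap between actual and expected output.
func meanKeypointError(predicted: [Keypoint], corrected: [Keypoint]) -> CGFloat {
    guard !predicted.isEmpty, predicted.count == corrected.count else { return 0 }
    let total = zip(predicted, corrected).reduce(CGFloat(0)) { sum, pair in
        let dx = pair.0.position.x - pair.1.position.x
        let dy = pair.0.position.y - pair.1.position.y
        return sum + (dx * dx + dy * dy).squareRoot()
    }
    return total / CGFloat(predicted.count)
}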

User-generated Image Collection

A user-generated image collection stores all images, model predictions, and user-generated annotations associated with a single model.

Note

Privacy is important, and protecting it is one of the main benefits of on-device machine learning. When collecting data from users, always make sure you have their explicit permission.

Getting Started

1. Register your model with the Fritz SDK

Once you have created a Fritz AI account and been granted access, you’ll need to upload a model and register it with the SDK. Instructions for implementing a custom pose estimation model on iOS can be found in iOS Custom Pose Estimation.
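The registration details depend on your model, but initializing the SDK itself is a one-time call at app launch. Below is a minimal sketch of that setup using FritzCore.configure(); the AppDelegate placement is typical, but follow the iOS Custom Pose Estimation guide for your specific model.

import UIKit
import Fritz

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Reads credentials from the Fritz-Info.plist bundled with the app
        // and registers the app and its models with the Fritz SDK.
        FritzCore.configure()
        return true
    }
}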

2. Create a User-generated Image Collection

[Screenshot: creating a user-generated image collection]

From the Datasets tab in the webapp, select ADD A USER COLLECTION. You will be asked to provide a name and description and to select a model to collect data from. To ensure that all annotations match, each collection can be associated with only one model. The annotation configuration (e.g. the number of keypoints in pose estimation) is inferred automatically when data is first collected.

Note

If an annotation with a different configuration is sent to a user-generated image collection, any missing objects are added.

3. Use the record method on the predictor to collect data

The FritzVisionPosePredictor used to make predictions has a record method that lets you send an image, a model-predicted annotation, and a user-generated annotation back to your Fritz AI account.

// Run the model on a user-generated image to get a model-predicted pose.
guard let result = try? poseModel.predict(image, options: options),
  let pose = result.pose() else { return }

// ... code allowing users to create a modifiedPose ...

// Send the image, predicted pose, and user-corrected pose back to Fritz AI.
poseModel.record(image, predicted: pose, modified: modifiedPose)
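Because record sends user data off-device, it is worth gating the call behind the explicit permission mentioned in the note above. A minimal sketch, where userHasOptedIn is an app-level flag and not part of the Fritz SDK:

// Only send data back when the user has explicitly opted in.
// userHasOptedIn is an illustrative app-level flag, not an SDK API.
if userHasOptedIn {
    poseModel.record(image, predicted: pose, modified: modifiedPose)
}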

4. Inspect collected images and annotations in the browser

[Screenshot: browsing images and annotations in a user-generated image collection]

Images and annotations can be viewed in the browser from the user-generated image collection you created. Select an image to see additional details and to switch between model-predicted and user-generated annotations. Click the CREATE DATASET button to create a COCO-formatted export of your collection that can be used for measuring model accuracy or for retraining.
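Since the export uses the standard COCO layout, it is plain JSON that any tooling can read. As a minimal sketch, the snippet below decodes the top-level images and annotations arrays in Swift; the field names assume the default COCO schema rather than anything specific to this export.

import Foundation

// Minimal Codable models for the standard COCO layout. This is a sketch
// for inspecting an export, assuming the default COCO field names.
struct COCOExport: Decodable {
    struct Image: Decodable {
        let id: Int
        let file_name: String
        let width: Int
        let height: Int
    }
    struct Annotation: Decodable {
        let id: Int
        let image_id: Int
        let category_id: Int
        let keypoints: [Double]?   // [x, y, visibility, ...] triples for pose data
        let bbox: [Double]?        // [x, y, width, height] for bounding boxes
    }
    let images: [Image]
    let annotations: [Annotation]
}

let data = try Data(contentsOf: URL(fileURLWithPath: "export.json"))
let export = try JSONDecoder().decode(COCOExport.self, from: data)
print("Loaded \(export.images.count) images and \(export.annotations.count) annotations")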