The Fritz AI Dataset Generator creates robust, labeled image datasets from only a handful of seed images. Jumpstart your mobile machine learning project or improve performance of an existing model without collecting and labeling thousands of images by hand.
The Dataset Generator works by compositing a small set of seed images onto a large, diverse set of backgrounds, varying their positions and orientations and applying mobile-specific augmentations. This lets you generate a much larger dataset suitable for training machine learning models.
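The compositing step can be sketched in plain Python. This is a minimal illustration, not the actual Fritz AI implementation: images are nested lists of pixel tuples, the seed is RGBA (so transparent pixels are skipped), and each generated sample records the paste offset as a ground-truth label. All function names here are hypothetical.

```python
import random

def composite(seed, background, x, y):
    """Paste an RGBA seed image onto an RGB background at (x, y),
    blending by the seed's alpha channel (0 = transparent, 255 = opaque)."""
    out = [row[:] for row in background]
    for sy, row in enumerate(seed):
        for sx, (r, g, b, a) in enumerate(row):
            bx, by = x + sx, y + sy
            if a == 0 or by >= len(out) or bx >= len(out[0]):
                continue  # skip fully transparent or out-of-bounds pixels
            br, bg_, bb = out[by][bx]
            t = a / 255.0  # alpha blend seed over background
            out[by][bx] = (round(r * t + br * (1 - t)),
                           round(g * t + bg_ * (1 - t)),
                           round(b * t + bb * (1 - t)))
    return out

def generate(seed, backgrounds, n, rng=random.Random(0)):
    """Produce n composites, each on a randomly chosen background at a
    random position; the paste offset doubles as a training label."""
    h, w = len(seed), len(seed[0])
    samples = []
    for _ in range(n):
        bg = rng.choice(backgrounds)
        x = rng.randrange(len(bg[0]) - w + 1)
        y = rng.randrange(len(bg) - h + 1)
        samples.append((composite(seed, bg, x, y), (x, y)))
    return samples
```

A real pipeline would also rotate and scale the seed and apply photometric augmentations, but the core idea is the same: every pasted position is known exactly, so labels come for free.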
Below are some core concepts to help you get started with the Dataset Generator.
Seed images are the core input to the Dataset Generator. They function like stickers that will be pasted onto different backgrounds to generate large amounts of data for model training. Each seed image must be a transparent PNG, where the background has been masked out, leaving only the desired object visible.
For information on creating seed images, see here.
Seed Image Collection
Seed Image Collections consist of multiple seed images, a keypoint configuration, and annotations. A keypoint configuration is the set of points that can be annotated on each image. For example, a collection of hands might have five keypoints, one for each finger (thumb, index, middle, ring, and pinky). Annotations are then added to each seed image individually, marking the locations where you want a model to predict a keypoint. These annotations become the ground-truth data used during model training.
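The relationship between a keypoint configuration and per-image annotations can be sketched as simple data structures. This is a hypothetical layout for illustration only; the actual Fritz AI schema may differ. The validation helper shows the key constraint: every annotated point must be declared in the collection's configuration, while images need not annotate every keypoint.

```python
# Hypothetical keypoint configuration for a collection of hand images.
KEYPOINT_CONFIG = {"thumb", "index", "middle", "ring", "pinky"}

# Per-image annotations: keypoint name -> (x, y) pixel location.
# An image may annotate only the keypoints that are visible.
annotations = {
    "hand_01.png": {"thumb": (12, 88), "index": (40, 15)},
    "hand_02.png": {"pinky": (70, 60)},
}

def validate(annotations, config):
    """Return a list of (image, unknown_keypoints) pairs for any
    annotation that uses a name not in the keypoint configuration."""
    errors = []
    for image, points in annotations.items():
        unknown = set(points) - config
        if unknown:
            errors.append((image, sorted(unknown)))
    return errors
```

Keeping the configuration separate from the annotations means every generated training sample can be labeled consistently, even when different seed images annotate different subsets of the keypoints.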
Seed Image Collections are mutable: you can add new seed images or re-annotate existing ones at any time.
Want to get started? Sign up for Early Access.