Auto annotation matters to anyone building or using AI. The workflows you can build with auto annotation save you time and effort when labeling data, helping your business analyze, improve, and automate like never before.
What are annotations (aka labels) for anyway?
Annotations are extra information added to your data. There are a few important ways that businesses use this information:
- Train - Make your model smarter. Annotated (labeled) datasets can be fed into your model and used as training data.
- Analyze - Make the objects and events in your visual data easily quantifiable. Count, observe and predict trends.
- Filter - Reduce the noise in your dataset by filtering out irrelevant content.
- Classify - Group your data into useful categories, then search and sort your data based on these categories.
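To make the idea concrete, an annotation can be pictured as a small record attached to one input. The field names below are a hypothetical sketch, not any particular product's schema:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One label attached to one piece of data (hypothetical schema)."""
    input_id: str      # which image/video/text this label belongs to
    concept: str       # what the label says is present, e.g. "backpack"
    confidence: float  # how sure we are, from 0.0 to 1.0
    source: str        # "model" or "human"

# Records like this can feed training, analytics, filtering, or classification.
label = Annotation(input_id="img_001", concept="backpack",
                   confidence=0.94, source="model")
```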
With auto annotation, you can build a workflow that adds annotations to your data automatically.
Control which inputs get annotated
Classification models analyze your inputs and make predictions about the concepts that your model has been trained to recognize. Your model returns a list of concepts, plus a confidence score from 0-1 that tells you how confident the model is that a given concept is present.
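A prediction response can be pictured as a list of concept/score pairs. The structure below is a simplified illustration, not a specific API's response format:

```python
# Simplified sketch of a classification result: each concept the model
# was trained to recognize comes back with a confidence score from 0 to 1.
predictions = [
    {"concept": "backpack", "confidence": 0.94},
    {"concept": "bag",      "confidence": 0.71},
    {"concept": "zipper",   "confidence": 0.33},
]

# A higher score means the model is more confident the concept is present.
top = max(predictions, key=lambda p: p["confidence"])
```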
These confidence scores can now be used to help manage the data flowing through your app. By using a Concept Thresholder model you can filter and route your workflow data by using comparison operators (>, <, >=, <=) as gatekeepers.
As an example, take a look at some images that returned the concept "backpack" and the corresponding confidence scores.
If we set up a Concept Thresholder model to output concepts with confidence scores greater than 0.8, only the green backpack will pass this test.
A second Concept Thresholder can be set up to output concepts with confidence scores greater than 0.2. This will pass the image of the duffel bag along as an output, while the red purse will be ignored.
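The two-gate routing described above boils down to plain comparisons. The scores and filenames below are illustrative (in practice a Concept Thresholder is configured in your workflow rather than written by hand):

```python
# Illustrative confidence scores for the "backpack" concept.
scores = {
    "green_backpack.jpg": 0.92,
    "duffel_bag.jpg": 0.55,
    "red_purse.jpg": 0.11,
}

# First gate: scores > 0.8 pass straight through (high-confidence route).
high_confidence = [img for img, s in scores.items() if s > 0.8]

# Second gate: scores > 0.2 that missed the first gate are routed
# separately; anything at or below 0.2 is ignored entirely.
needs_review = [img for img, s in scores.items() if 0.2 < s <= 0.8]
```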
Automatically label your data
Not all predictions are created equal. Sometimes you will trust your model to make decisions on new data with little supervision; other times you will want to review that data closely.
In the final step, your annotation is automatically written to your dataset with an Annotation Writer. By routing your data with the Concept Thresholder from the previous step, you can send predictions with "greater than 0.8" confidence directly to an Annotation Writer and record the annotation as a "Success". You can search and train with this annotation immediately.
The duffel bag image can be sent to a different kind of Annotation Writer. This Annotation Writer uses the special "Pending Review" status so that a human can approve or reject the annotation. This "human in the loop" approach helps you annotate quickly and efficiently while ensuring quality standards.
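Putting the two steps together, the end-to-end logic amounts to writing each annotation with a status chosen by its confidence. The function and status strings below are an illustrative sketch, not a specific API:

```python
def write_annotation(image, concept, confidence):
    """Record an annotation with a status picked by confidence (sketch)."""
    if confidence > 0.8:
        status = "Success"         # trusted: searchable and trainable now
    elif confidence > 0.2:
        status = "Pending Review"  # a human approves or rejects it
    else:
        return None                # too low-confidence to record at all
    return {"image": image, "concept": concept, "status": status}

results = [write_annotation(img, "backpack", s) for img, s in
           [("green_backpack.jpg", 0.92),
            ("duffel_bag.jpg", 0.55),
            ("red_purse.jpg", 0.11)]]
```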
Auto annotation can play an important role in integrating AI within your business operations. Visit our documentation for more detailed walkthroughs of how to set up auto annotation via Portal or our API.