Last month, New York City hosted the 2019 O’Reilly AI Conference. Attended by professionals from across the industry and beyond, the event featured experts, including our CEO and founder Matt Zeiler, sharing their knowledge of how AI is changing the business landscape. Matt’s talk, “Closing the loop on AI: How to maintain quality long-term AI results,” focused on the problem of AI systems losing accuracy over time.
While many developers train their AI models on large sets of labeled data and concepts, over time, feeding these models new, unlabeled inputs can cause their performance to decline. Luckily, there is a way to limit this issue and keep your AI results accurate: feedback loops. Here at Clarifai, “closing the loop” ensures our models maintain, and even improve, their performance as we add new data.
Below, I'll define this process, look at how it works, and discuss why it is so valuable.
What is an AI feedback loop?
A feedback loop refers to the process by which an AI model’s predicted outputs are reused to train new versions of the model.
Okay, how does it work?
When we train a computer vision model, we must first feed it labeled samples, showing positive and negative examples of the concepts we want it to learn. Afterward, we can test the model on unlabeled data. Using deep learning and neural networks, the model then predicts whether the desired concepts appear in these unlabeled images. Lastly, each image is given a probability score, with a higher score indicating greater confidence in the prediction.
When the model gives an image a high probability score, the image is auto-labeled with the predicted concept. In some cases, however, such as for Clarifai’s enterprise customers, if the model returns a low probability score, the input is sent to a human moderator who verifies, and if necessary corrects, the result.
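This confidence-based routing can be sketched in a few lines. The names, threshold value, and record format below are illustrative assumptions, not Clarifai's actual API:

```python
# Sketch of confidence-based routing (hypothetical names/threshold):
# predictions above a cutoff are auto-labeled; the rest go to a
# human review queue.

AUTO_LABEL_THRESHOLD = 0.90  # assumed cutoff; tune per use case

def route_prediction(image_id, concept, score):
    """Return ("auto", record) or ("review", record) based on confidence."""
    record = {"image": image_id, "concept": concept, "score": score}
    if score >= AUTO_LABEL_THRESHOLD:
        return ("auto", record)
    return ("review", record)

predictions = [
    ("img_001", "dog", 0.97),  # confident: auto-label
    ("img_002", "dog", 0.41),  # uncertain: send to a moderator
]

auto_labeled, review_queue = [], []
for image_id, concept, score in predictions:
    bucket, record = route_prediction(image_id, concept, score)
    (auto_labeled if bucket == "auto" else review_queue).append(record)

print(len(auto_labeled), len(review_queue))  # → 1 1
```

In practice, the threshold is a business decision: a higher cutoff sends more inputs to moderators but keeps auto-labels cleaner.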
The feedback loop occurs when this labeled data, auto-labeled or human-verified, is fed back to the model as training data.
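The step above can be sketched as a small helper. Everything here is a stand-in for illustration: `close_the_loop` and the toy trainer are hypothetical names, and a real system would retrain a neural network rather than count examples:

```python
# Sketch of "closing the loop" (illustrative only): verified labels,
# whether auto-labeled or human-corrected, are appended to the
# training set and used to train the next version of the model.

def close_the_loop(training_data, verified_labels, train_fn):
    """Merge newly verified labels into the training set and retrain."""
    updated_data = training_data + verified_labels
    new_model = train_fn(updated_data)
    return new_model, updated_data

# Stand-in trainer: a real pipeline would fit a neural network here.
train_fn = lambda data: {"trained_on": len(data)}

model, data = close_the_loop(
    training_data=[("img_a", "dog"), ("img_b", "not_dog")],
    verified_labels=[("img_001", "dog")],
    train_fn=train_fn,
)
print(model["trained_on"])  # → 3
```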
Let’s look at an analogy:
Concept: A preschool class of children is learning to count.
Training: The teacher shows the students a certain number of fingers and tells them the corresponding number. So for the number “one” or “1,” they hold up one finger. For the number “two” or “2,” they hold up two digits, and so on.
Testing: The teacher gives the children a worksheet with several unlabeled images of hands holding up a certain number of fingers. The children are tasked with labeling each image with the correct figure.
Auto-labeling (with human verification):
While the students recognize the number of fingers shown in some images, they are less sure of others. So, when the worksheets are collected, the teacher marks the right answers and corrects the wrong ones.
Feedback loop: The teacher returns the corrected worksheets to each student for them to review for later lessons.
How do feedback loops help maintain quality AI results in the long term?
Machine learning techniques like deep learning allow computer vision models to take labeled training data and learn to recognize those concepts in subsequent images. While it is crucial to give your model fresh test data, feeding it data it has already predicted over reinforces its training.
Think about it. When a teacher grades a test paper and returns it with check marks for right answers and corrections for the wrong ones, the student can actually see where they went right or wrong. This, in turn, helps to drive home the lesson, so the student can do better on their next quiz.
As mentioned before, these models use neural networks that seek to mimic the human brain. With a feedback loop, you are giving your model the chance to go over what it already knows, so it can keep learning from this data and perform better in the future, much like a studying student.
Feedback loops ensure that AI results do not stagnate. They also carry a significant advantage: the data used to train new versions of the model comes from the same real-world distribution the customer cares about predicting over. Without them, AI will choose the path of least resistance, even when that path is wrong, and its performance will deteriorate. By incorporating a feedback loop, you can reinforce your models' training and keep them improving over time.