Since inception, Clarifai has been helping our customers moderate their image and video content. In fact, moderation continues to be one of our most popular use cases to date. Customers like Photobucket use our NSFW pre-built model to help them moderate out nudity, 9GAG uses our Moderation pre-built model to differentiate ‘explicit nudity’ from ‘suggestive nudity’ in their content, and Momio uses our NSFW pre-built model to keep their children’s social media platform safe.
We have worked closely with our customers over the years to understand their end-to-end workflow and have discovered a few friction points that still exist with machine learning-based moderation:
- Understanding Thresholds: Machine learning models return a ‘confidence value’ for each concept, and customers must identify the exact confidence value, or threshold, at which an action such as auto-accept or auto-reject should be taken. Getting this right often requires many rounds of testing and iteration.
- Human Review: Machine learning models are continuously learning and improving, which means they aren’t perfect just yet. Customers sometimes require human moderators to review content the models aren’t confident about.
- Moderation Interface: Human reviewers often require a web application that displays images in real-time and is optimized for the review workflow. Customers need to build and maintain this web application themselves, which can cost them valuable resources and doesn’t directly add value to their top line.
- Distributed Team Management: Outsourcing a human moderation team can be advantageous when the work requires a very specific skill set. Most of the time, though, the skill set required is generic, and managing the team becomes overhead that consumes valuable time and resources from the customer.
Today, we are excited to be launching a fully end-to-end Moderation Solution into public beta to help solve these pain points and to provide a solution that augments our existing world-class computer vision technology.
Clarifai’s Moderation Solution provides customers with a UI and the tools required to manage moderation tasks, along with the option to route images to Clarifai-provided moderators who help screen for NSFW, explicit, violent, and other kinds of unwanted content.
Customers can take advantage of new features for moderation such as:
- Customer Interface: With our end-to-end Moderation Solution, customers can use a new interface to better understand their visual data and set the appropriate thresholds for machine learning models. Incoming images can be grouped into “auto approve”, “auto reject”, and “further review” buckets by setting thresholds directly within the interface.
- Moderator Interface: Images that fall into the “further review” range can be viewed in a new moderator interface, where a human moderator can decide whether or not the image moves onto a customer’s platform.
- Human Moderation as a Service: Don’t want to hire your own human moderation team? We’ve got you covered! Clarifai can augment your workforce with our own moderation team, so you don’t have to take on the overhead costs or resources.
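To make the thresholding idea above concrete, here is a minimal sketch of how a confidence score could be routed into the three buckets the interface exposes. The function name, parameter names, and threshold values are illustrative assumptions, not part of Clarifai’s API; in practice, the thresholds would be tuned in the customer interface.

```python
def route_image(nsfw_confidence, reject_threshold=0.85, approve_threshold=0.15):
    """Route an image by a model's NSFW confidence score (0.0 to 1.0).

    Hypothetical example: names and threshold values are illustrative only.
    """
    if nsfw_confidence >= reject_threshold:
        return "auto reject"      # model is confident the image is unwanted
    if nsfw_confidence <= approve_threshold:
        return "auto approve"     # model is confident the image is safe
    return "further review"       # ambiguous: send to a human moderator


# Example: sort a batch of scored images into moderation queues
scores = {"img_1.jpg": 0.02, "img_2.jpg": 0.95, "img_3.jpg": 0.40}
queues = {name: route_image(score) for name, score in scores.items()}
# img_1.jpg -> auto approve, img_2.jpg -> auto reject, img_3.jpg -> further review
```

Widening the gap between the two thresholds sends more images to human review; narrowing it automates more decisions at the cost of more model-driven mistakes.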
If you’re interested in learning more or want to get started with our new solution for moderation, we’d love to hear from you! Find more information here.