• Media and entertainment

    Keeping children safe using artificial intelligence and content moderation

    How an international media and entertainment company uses AI to keep children safe on social media.

  • Background

    An interactive media and entertainment company based in Europe approached Clarifai for help with content moderation. The company runs a social media site for pre-teen children that is immensely popular throughout Europe, with as much as 20% market penetration in some countries.

    The company’s users interact with their friends in a variety of ways, including posting updates on their wall, sending messages to friends and groups, and inviting other users into their personally decorated ‘rooms’. The platform is designed to offer a playful, safe environment where children can develop valuable social media competencies that prepare them for interacting on other sites as they get older.

  • Information

    Use Case: Content Moderation

    Industry: Media & Entertainment

    Client: Social Media Company

Details

Challenge

Although most of the children ‘play nice’, some try to push their boundaries by posting inappropriate images. Given the young age of the users, it is crucial that photographs containing adult, harmful, or frightening content, or spam, are kept out of the community.

The company already had a number of successful safety measures in place, enforced by their team of Community Managers. However, policing a platform that hosts such a large volume of user-generated content is no simple feat. The company’s platform is used to send 1 billion messages and post 30 million images per year. With safety as their priority, the team started exploring AI technology to support their visual moderation needs.



Solution

Working with Clarifai, the company deployed deep learning computer vision models to automatically review and tag the images its users upload. The models flag photographs containing adult, harmful, or frightening content, or spam, before they reach the community, so the Community Managers can concentrate their attention on the images that genuinely require human judgment.
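
To make the workflow concrete, here is a minimal sketch of how an image-moderation check against Clarifai's v2 REST predict endpoint might look in Python. The API key, the "moderation-recognition" model ID, the "moderate_image" helper, and the 0.85 flagging threshold are all illustrative assumptions, not details confirmed by this case study:

    import requests

    API_KEY = "YOUR_CLARIFAI_API_KEY"    # placeholder; substitute a real key
    MODEL_ID = "moderation-recognition"  # assumed model ID, for illustration only
    PREDICT_URL = f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs"

    def moderate_image(image_url, threshold=0.85):
        """Score an image with the moderation model and return any
        unsafe concepts whose confidence meets the threshold."""
        response = requests.post(
            PREDICT_URL,
            headers={"Authorization": f"Key {API_KEY}"},
            json={"inputs": [{"data": {"image": {"url": image_url}}}]},
            timeout=10,
        )
        response.raise_for_status()
        concepts = response.json()["outputs"][0]["data"]["concepts"]
        return {c["name"]: c["value"] for c in concepts
                if c["name"] != "safe" and c["value"] >= threshold}

    # An empty result means the image passed the automated check;
    # anything else would be routed to a Community Manager for review.
    flags = moderate_image("https://example.com/uploaded-photo.jpg")
    if flags:
        print("Flagged for human review:", flags)

In a setup like this, raising or lowering the threshold trades automation volume against the number of images escalated to the human moderation queue.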

Results

Clarifai’s deep learning AI platform and its computer vision capabilities helped protect young users from exposure to damaging visual content, and the company’s community management team was freed to focus on higher-value tasks.

$46,000

Savings in human moderation costs

273x

Increase in the volume of images moderated each day

100x

Improvement in productivity using automated tagging

  • “Since diving into the world of AI with Clarifai a year ago, the company has become an even safer social media site. For businesses that need a solution for moderating a large volume of user-generated visual content, I would definitely recommend talking to them because their image moderation technology works so well.”

    Chief Executive Officer
