IMAGE MODERATION CASE STUDY
Users upload 1.8 billion photos to social media sites and communities every day. This massive volume of user-generated content (UGC) poses a formidable challenge to those tasked with protecting their brands’ web and mobile properties from inappropriate material.
Currently, the most common method for analyzing and detecting offensive content is to use human moderators. The problem: this method is inefficient and, worse, has been shown to have an adverse psychological impact on those who do the work.
The Big Question
Can deep learning models create a UGC moderation method that’s superior to human moderation?
Yes, and here’s how we proved it.
First, Sentient Labs worked with leading businesses and content moderation service providers to establish clear benchmarks. Then, based on those requirements, Sentient Labs trained its Deep Learning AI platform to moderate and detect inappropriate content among massive data sets representing hundreds of thousands of images.
The results: a 0.1 percent false-negative rate and a 14 percent false-positive rate, both marks far exceeding the established benchmarks. In addition, Sentient Labs’ AI platform was fast, analyzing roughly 150,000 images per minute while still delivering performance capable of protecting brands—and their employees.
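To make the two error rates concrete, here is a minimal sketch of how they are computed from a confusion matrix. The counts below are hypothetical, chosen only so the arithmetic matches the reported percentages; they are not data from the study.

```python
# Hypothetical confusion-matrix counts for a binary image-moderation
# classifier ("positive" = image flagged as inappropriate).
true_positives = 998     # inappropriate images correctly flagged
false_negatives = 1      # inappropriate images the model missed
false_positives = 140    # benign images wrongly flagged
true_negatives = 860     # benign images correctly passed

# False-negative rate: share of truly inappropriate images the model missed.
fnr = false_negatives / (false_negatives + true_positives)

# False-positive rate: share of benign images the model wrongly flagged.
fpr = false_positives / (false_positives + true_negatives)

print(f"false-negative rate: {fnr:.1%}")  # 0.1%
print(f"false-positive rate: {fpr:.1%}")  # 14.0%
```

In moderation, a low false-negative rate is usually the priority—missed inappropriate content reaches users—while false positives can be routed to a much smaller human review queue.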