IMAGE MODERATION CASE STUDY

The challenge

Users upload 1.8 billion photos to social media sites and communities every day. This massive volume of user-generated content (UGC) poses a formidable challenge to those tasked with protecting their brands’ web and mobile properties from inappropriate material.

Currently, the most common method for analyzing and detecting offensive content is to use human moderators. The problem: this approach does not scale and, worse, has been shown to have an adverse psychological impact on those who do the work.

The big question

Can deep learning models create a UGC moderation method that’s superior to human moderation?

The answer

Yes, and here’s how we proved it.

First, Sentient Labs worked with leading businesses and content moderation service providers to establish clear benchmarks. Then, based on those requirements, Sentient Labs trained its deep learning AI platform to detect and moderate inappropriate content across massive data sets representing hundreds of thousands of images.
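
Sentient Labs has not published the details of its platform, but to make the approach concrete, an image-moderation classifier of this general kind can be sketched in a few lines of PyTorch. Everything below (the ResNet backbone, the 0.5 threshold, the flag_image helper) is a hypothetical illustration under those assumptions, not the company’s actual system:

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Generic pretrained backbone with a single-logit head for the
    # "inappropriate" class. Architecture, threshold, and names are
    # hypothetical; Sentient Labs' actual system is not public.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def flag_image(path, threshold=0.5):
        """Return True if the image should be routed for review."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            prob = torch.sigmoid(model(img)).item()
        return prob >= threshold

In a production pipeline, the threshold would be tuned against the benchmarks described above, trading false negatives against false positives.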

The results: a 0.1 percent false-negative rate and a 14 percent false-positive rate, both marks far better than the established benchmarks. In addition, Sentient Labs’ AI platform was fast, analyzing roughly 150,000 images per minute while still delivering performance capable of protecting both brands and their employees.
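
To make the two headline rates concrete: both fall out of a standard confusion matrix. The sketch below uses invented counts chosen only to reproduce the reported percentages; they are not Sentient Labs’ data:

    # False-negative and false-positive rates from a confusion matrix.
    # The counts below are invented purely to illustrate the arithmetic.
    def error_rates(tp, fp, tn, fn):
        fnr = fn / (fn + tp)  # inappropriate images that slipped through
        fpr = fp / (fp + tn)  # benign images wrongly flagged
        return fnr, fpr

    # Example: 100,000 images, 10,000 of them actually inappropriate.
    fnr, fpr = error_rates(tp=9_990, fp=12_600, tn=77_400, fn=10)
    print(f"false-negative rate: {fnr:.1%}")  # 0.1%
    print(f"false-positive rate: {fpr:.1%}")  # 14.0%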

For more information, read the white paper.

“It’s always exciting to introduce social features, but you have to be prepared to launch them. A few bad apples will ruin a full bushel.”

FORMER GENERAL MANAGER, MICROSOFT XBOX
