Facebook is ramping up the artificial intelligence (AI) tools it uses to detect hate speech, saying technology now plays an increasingly important role in content moderation.
The latest data from the social media company shows AI helped it detect 95% of the hate speech it removed between April and June.
It says improvements to its technology enabled it to take action on more content in some areas and to increase its proactive detection rate in others.
“Our proactive detection rate for hate speech on Facebook increased six points from 89% to 95%. In turn, the amount of content we took action on increased from 9.6 million in Q1 to 22.5 million in Q2,” says Guy Rosen, Facebook VP of integrity.
Writing on the Facebook blog, Jeff King, director of product management and integrity, and Kate Gotimer, director of global operations, say the biggest change at Facebook has been the role of technology in content moderation.
“As our Community Standards Enforcement report shows, our technology to detect violating content is improving and playing a larger role in content review.”
They say AI has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, “often with greater accuracy than reports from users”.
This helps Facebook detect harmful content and prevent it from being seen by hundreds or thousands of people.
Additionally, King and Gotimer say AI has helped scale the work of content reviewers.
“Our AI systems automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy, so that our reviewers can focus on decisions where more expertise is needed to understand the context and nuances of a particular situation,” reads the blog.
“Automation also makes it easier to take action on identical reports, so our teams don’t have to spend time reviewing the same things multiple times. These systems have become even more important during the COVID-19 pandemic with a largely remote content review workforce.”
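The blog does not spell out how these automated decisions are implemented. A minimal sketch of the general pattern it describes, auto-actioning only near-certain violations, routing the rest to humans, and collapsing identical reports, might look like the following in Python. The `Report` class, the `triage` function and the 0.97 threshold are illustrative assumptions, not Facebook's actual systems or values:

```python
from dataclasses import dataclass

# Assumed cut-off for "highly likely to be violating"; the real value is not public.
AUTO_ACTION_THRESHOLD = 0.97

@dataclass
class Report:
    content_id: str         # identifier of the reported post (hypothetical)
    violation_score: float  # classifier's probability that the content violates policy

def triage(reports: list[Report]) -> tuple[set[str], set[str]]:
    """Split reports into automatic removals and items queued for human review."""
    auto_removed: set[str] = set()
    needs_review: set[str] = set()
    seen: set[str] = set()
    for r in reports:
        if r.content_id in seen:
            continue            # identical report: already handled, skip re-review
        seen.add(r.content_id)
        if r.violation_score >= AUTO_ACTION_THRESHOLD:
            auto_removed.add(r.content_id)   # near-certain violation: act automatically
        else:
            needs_review.add(r.content_id)   # uncertain: route to a human reviewer
    return auto_removed, needs_review
```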
Furthermore, the pair say that, instead of simply reviewing reported content in chronological order, Facebook’s AI prioritises the most critical content for review, whether it was reported by users or detected by proactive systems.
“This ranking system prioritises the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation. In an instance where our systems are near-certain that content is breaking our rules, it may remove it. Where there is less certainty, it will prioritise the content for teams to review,” the blog reads.
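The blog names the factors but not the formula. A minimal sketch of what such a ranking could look like, assuming a simple multiplicative score over the three factors Rosen’s team cites (virality, severity of harm and likelihood of violation), is shown below; the `QueuedItem` class, the 0-to-1 field scales and the scoring rule are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QueuedItem:
    content_id: str
    virality: float        # e.g. normalised share/view velocity, assumed 0..1
    severity: float        # assumed policy-defined harm weight, 0..1
    violation_prob: float  # classifier's probability of a violation, 0..1

def priority(item: QueuedItem) -> float:
    # Assumed scoring rule: potential harm scales with how fast content spreads,
    # how severe the harm is, and how likely it is to be violating.
    return item.virality * item.severity * item.violation_prob

def review_order(queue: list[QueuedItem]) -> list[QueuedItem]:
    """Return the review queue sorted most-critical first, not chronologically."""
    return sorted(queue, key=priority, reverse=True)
```

A multiplicative score means content that rates low on any single factor drops down the queue; a production system would more plausibly use learned weights than a fixed formula.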
Facebook says it will continue working to make its platform as safe as possible by combining the strengths of people and technology to find and remove violating content faster.
“Moving forward, we’re going to use our automated systems first to review more content across all types of violations. This means our systems will proactively detect and remove more content when there’s an extremely high likelihood of violation and we’ll be able to better prioritise the most impactful work for our review teams.”
The social media company adds that terrorism content is another area where it saw improvements due to its technology.
“On Facebook, the amount of content we took action on increased from 6.3 million in Q1 to 8.7 million in Q2. And thanks to both improvements in our technology and the return of some content reviewers, we saw increases in the amount of content we took action on connected to organised hate on Instagram and bullying and harassment on both Facebook and Instagram,” says Rosen.
On Instagram, Facebook says, the proactive detection rate for hate speech increased 39 points from 45% to 84% and “the amount of content we took action on increased from 808 900 in Q1 2020 to 3.3 million in Q2”.