Shocking details as Facebook releases transparency report

By Samuel Mungadze, Africa editor
Johannesburg, 14 Nov 2019

Social media titan Facebook’s transparency report for the first half of 2019 discloses that it shut down 5.4 billion fake accounts on its main platform.

The company also removed close to 835 000 Instagram posts containing child nudity and sexual exploitation of children.

And for the first time, Facebook has given the world a glimpse of data on suicide and self-injury material that it removed from its platforms.

The US-based social media company says it took action on about two million pieces of such content in the second quarter of 2019, of which 96.1% was detected proactively. It saw further progress in the third quarter, removing 2.5 million pieces of content, of which 97.3% was detected proactively.
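
Those percentages describe what Facebook calls its proactive rate: the share of actioned content that its systems flagged before any user reported it. A minimal sketch of the arithmetic follows; the function name and figures are illustrative assumptions, not Facebook’s internal tooling.

    # Illustrative sketch of a proactive rate: the share of actioned content
    # flagged by automated systems before any user report. Names are assumed.
    def proactive_rate(flagged_proactively, total_actioned):
        return 100.0 * flagged_proactively / total_actioned

    # With the report's Q3 2019 suicide/self-injury figures (2.5 million pieces
    # actioned, 97.3% proactive), roughly 2.43 million were flagged before any
    # user report:
    print(round(proactive_rate(2_432_500, 2_500_000), 1))  # -> 97.3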

The company has published a transparency report every six months since 2013.

The latest report also states that government requests for user data increased by 16%, from 110 634 to 128 617.

Of the total volume of requests for user data, the US (50 741) continues to submit the largest number, followed by India (4 144), the UK (2 337), Germany (2 068) and France (1 598). South Africa made 14 requests.

Guy Rosen, VP for integrity at Facebook, says: “In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence, or the frequency at which people may see this content on our services.

“For the policy areas addressing the most severe safety concerns – child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda – the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it.”
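
Prevalence, as Rosen defines it, is measured against views rather than against the volume of removed content. A minimal sketch of that idea, assuming a simple random sample of views; the sampling scheme and names here are illustrative, as Facebook does not publish its methodology at this level of detail.

    # Illustrative: estimate prevalence as the fraction of sampled content
    # views that show violating material. Sampling scheme is an assumption.
    import random

    def estimate_prevalence(view_log, is_violating, sample_size=10_000):
        sample = random.sample(view_log, min(sample_size, len(view_log)))
        hits = sum(1 for view in sample if is_violating(view))
        return hits / len(sample)  # e.g. 0.0004 -> about 4 in 10 000 views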

Fakebook?

CEO Mark Zuckerberg says the high number of fake Facebook accounts should be seen in the context of the good work the company is doing to flag the accounts.

During a call with global media yesterday, Zuckerberg said: “Because our numbers are high doesn’t mean there’s that much more harmful content. It just means we’re working harder to identify this content and that’s why it’s higher.”

Facebook says it strives to be open and proactive in the way it safeguards users’ privacy, security and access to information online.

Rosen comments: “For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods – specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda.

“For child nudity and sexual exploitation of children, we made improvements to our processes for adding violations to our internal database in order to detect and remove additional instances of the same content shared on both Facebook and Instagram, enabling us to identify and remove more violating content.”

On Facebook, the company removed about 11.6 million pieces of such content in the third quarter, up from about 5.8 million in the first quarter.

“Over the last four quarters, we proactively detected over 99% of the content we remove for violating this policy. While we are including data for Instagram for the first time, we have made progress increasing content actioned and the proactive rate in this area within the last two quarters,” Rosen says.

Cutting the hate

The social media company says that over the last two years, it has invested in proactive detection of hate speech so that harmful content is removed before people report it.

Rosen explains: “Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.

“Initially, we’ve used these systems to proactively detect potential hate speech violations and send them to our content review teams since people can better assess context where AI cannot.

“Starting in second quarter of 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy.”

Facebook says it does this in select instances, and it has only been possible because its automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks.
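
A minimal sketch of the kind of match-then-route logic described above, where only exact or near-identical matches to previously removed content are removed automatically and everything else is queued for human review. The normalisation, hashing and similarity threshold below are assumptions for illustration, not Facebook’s production systems.

    # Illustrative routing: auto-remove only exact or near-identical matches to
    # previously removed hate speech; send everything else to human reviewers.
    import hashlib
    from difflib import SequenceMatcher

    def normalise(text):
        return " ".join(text.lower().split())

    def route(post, removed_corpus):
        digest = hashlib.sha256(normalise(post).encode()).hexdigest()
        known = {hashlib.sha256(normalise(t).encode()).hexdigest()
                 for t in removed_corpus}
        if digest in known:
            return "auto-remove"       # identical to content already removed
        for removed in removed_corpus:
            if SequenceMatcher(None, normalise(post),
                               normalise(removed)).ratio() > 0.97:
                return "auto-remove"   # near-identical variant
        return "human-review"          # reviewers make the final determination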

“In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy,” Rosen says.
