Facebook estimates hate speech is seen once in every 1,000 views on the platform

On Thursday, Facebook for the first time released figures on the prevalence of hate speech on its platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech, or roughly one in every 1,000 views.

The world’s largest social media company, under scrutiny over its policing of abuses, particularly before and after the November U.S. presidential election, released the figures in its quarterly content moderation report.

Guy Rosen, who oversees safety and integrity efforts at Facebook, told reporters on a call that the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States between March 1 and November 3 for violating its voter interference policies.

Facebook also said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified before users reported it, compared with 22.5 million in the previous quarter.

The company defines “taking action” as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.

On Instagram, Facebook’s photo-sharing app, the company took action on 6.5 million pieces of hate speech content, up from 3.2 million in the second quarter. About 95% of this was proactively identified, up about 10 percentage points from the previous quarter.

Civil rights groups organized a widespread boycott of Facebook advertising this summer to pressure social media companies to act against hate speech.

In October, Facebook announced that it would update its hate speech policy to ban content that denies or distorts the Holocaust, a shift from earlier public comments by Facebook Chief Executive Mark Zuckerberg on what the platform should allow.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second quarter. On Instagram, it took action on 4.1 million pieces of violent and graphic content, up from 3.1 million in the second quarter.

According to Rosen, the company plans to have its content enforcement numbers independently audited “during 2021.”

Earlier this week, Zuckerberg and Twitter CEO Jack Dorsey were grilled by Congress on their companies’ content moderation practices, from Republican claims of political bias to their handling of violent speech.

Reuters reported last week that Zuckerberg told employees that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify a suspension after Bannon urged violence against two U.S. officials.

The company has also been criticized in recent months for allowing fast-growing Facebook Groups that share false election claims and violent rhetoric to gain momentum.

Facebook said its rate of finding rule-violating content before users reported it increased in most areas, crediting improvements in its artificial intelligence tools and the expansion of its detection technology to more languages.

Facebook said in a blog post that the COVID-19 pandemic continues to disrupt its content review workforce, but that some enforcement metrics have returned to pre-pandemic levels.

An open letter released Wednesday by more than 200 Facebook content moderators accused the company of forcing these workers back into the office and “unnecessarily endangering” their lives during the pandemic.

“The facility meets or exceeds the guidance on safe workspaces,” Rosen said on the call Thursday.
