
A report says hate speech is declining on Facebook. The problem? The report is from Facebook. And activists say it’s missing valuable context, data, and transparency.

"This report fails to answer simple questions we have been asking for years: How much hate speech is there on Facebook? How many users are exposed to it?" Imran Ahmed, the CEO of the Center for Countering Digital Hate, told Mashable. "When we tested their moderation systems in our study Failure to Protect, we found Facebook failed to act on 89 [percent] of anti-Semitic content reported to them and is still hosting groups with tens of thousands of members that are dedicated to anti-Jewish hatred."

Facebook's Community Standards Enforcement Report claims that hate speech currently represents just five posts in 10,000, down from between five and six per 10,000 in the first three months of this year. Facebook also says it removed 90 percent of hate speech before a user had the chance to report it.

Guy Rosen, vice president of integrity at Facebook, told the press on Wednesday that the company has increased the pace of hate speech removal "very significantly" thanks to artificial intelligence.




"Today, the vast majority of what we remove is detected by our systems before people have even reported them."

While the data might look positive, it’s hard for experts to tell just what they're looking at. As Dave Sifry, the ADL’s vice president of the Center on Technology and Society, told Mashable over email, "we have no idea where this data came from, what it actually measures, or what the margin of error is because the numbers have not been subject to any sort of external review by experts who have access to the data and the methodologies used to calculate these numbers."

"Facebook has shown again and again that it will put out numbers and 'data' that are unaudited, unverified, and unaccountable."

"Facebook has shown again and again that it will put out numbers and 'data' that are unaudited, unverified, and unaccountable. Even worse than that, Facebook actively works against independent researchers who are investigating its platforms," Sifry said in reference to Facebook blocking a team of NYU researchers from studying political ads and COVID-19 misinformationjust weeks ago. "Until Facebook commits to transparency, any numbers it releases are at best circumspect, and at worst dangerously underestimate the problem."

The platform defines hate speech as "violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics or slurs." While the report shows that this kind of hate speech is decreasing, it doesn’t delve into what kinds of hate speech are still thriving.


"[This data] tells us nothing about the types of hateful content circulating on Facebook and Instagram, how much hate speech is sent to users directly or how many hateful posts were promoted by each platform’s algorithm — something proven to happen as demonstrated by our Malgorithm study of Instagram recommendations," Ahmed said.

Sifry echoed Ahmed’s statement, adding that it would be helpful to know how often moderation combined algorithmic and human review, how often decisions were reversed after human review, how many ads appeared next to hate speech, and more.

Activist groups say this report is frustrating, at best.

"Facebook’s attempt to pass off its Community Standards Enforcement report as a form of transparency serves as further evidence of the company’s inability to adequately self-regulate or apply effective oversight of content moderation," Jade Magnus Ogunnaike, senior director of media, culture, and economic justice at Color Of Change, told Mashable over email.

Magnus Ogunnaike noted that hate speech doesn't just affect the platform — it can lead to hate in real life. She pointed to Facebook ignoring requests to remove the hate group that organized a violent response in Kenosha, Wisconsin, which resulted in an armed white supremacist killing two protesters.

"Facebook’s decision to limit researchers’ and civil rights groups’ real-time access to data further underscores the company’s desire to evade a thorough assessment of the real-world harm their content moderation decisions make on Black and brown communities," Magnus Ogunnaike said.

This comes just weeks after a new report from the Center for Countering Digital Hate found that five major social media companies, including Facebook, took no action to remove 84 percent of anti-Semitic posts.

"This report shows how social media companies fail to act on anti-Jewish hate on their platforms," the report read. "As a result of their failure to enforce their own rules, social media platforms like Facebook have become safe places to spread racism and propaganda against Jews."

In October, Facebook announced it would "prohibit any content that denies or distorts the Holocaust," a shift in its hate speech policy on Holocaust denial.

"We know that hateful and violent movements that organize on Facebook have had a real impact in the real world, from racist attacks around the world to the promotion of violent misogyny," Ahmed said. "Independent evidence shows that Facebook is doing too little to tackle hate. The fact that its latest report fails to answer the simplest questions about hateful content shows it is not serious about addressing this problem."
