Facebook says it’s stopping hate and violence against Black Americans. Its own research shows otherwise.

In June 2020, a company researcher began looking into the rise of hate and violent speech on Facebook after George Floyd died on the pavement in South Minneapolis under the knee of a white police officer.

As protests spread, so did reports from users flagging dangerous and offensive content on Facebook. But it was only when then-President Donald Trump warned on Facebook “When the looting starts, the shooting starts” that the floodgates opened, the researcher found.

Facebook saw a “drastic” surge in user reports: fivefold for violence and threefold for hate speech. By June 2, the entire country “was basically ‘on fire,’” the report found.

The pattern crisscrossing the country didn’t necessarily mean that social media posts were causing violent outbreaks, but researchers questioned whether Facebook was doing enough to limit the risk of harm.

Trump's post echoed a phrase from a Miami police chief in the 1960s about cracking down on Black neighborhoods during periods of civil unrest. It was viewed “orders of magnitude times more” than the total number of views of hate speech Facebook prevents in a single day, an employee said in an exit memo.

That person accused the company of “propping up actors who are fanning the flames of the very fire we are trying to put out.”

Even as civil rights leaders and the Black community registered complaint after complaint about Facebook, internal documents reviewed by USA TODAY show that the company continued to combat a relentless wave of racially motivated hate speech with automated moderation tools that are not sophisticated enough to catch most harmful content and are prone to making mistakes.

One Facebook employee estimated that 1 out of every 1,000 pieces of content on the platform is hate speech. With all of the company’s enforcement efforts combined, less than 5% of all the hate speech posted to Facebook is deleted, the person said.

The internal documents are among hundreds disclosed to the Securities and Exchange Commission and provided to Congress in redacted form by attorneys for Frances Haugen, a former Facebook product manager turned whistleblower. The redacted versions of the Facebook Papers were obtained by a consortium of 17 news organizations, including USA TODAY, following a series of extensive reports in The Wall Street Journal.

'They have been deceiving the entire world'

Civil rights leaders say the Facebook documents confirm their worst suspicions: For years, Facebook executives have made repeated promises but little progress in protecting the Black community and other often-targeted groups from hate speech and threats that can lead to violence.

Their grievances only intensified with the flood of hateful content on Facebook’s platforms after Floyd’s death. Last summer, civil rights groups joined with major advertisers to lead a boycott of the company.

“They deny responsibility. They deflect responsibility. They delay taking action,” Imran Ahmed, CEO of the Center for Countering Digital Hate, told USA TODAY. “Now it’s clear they have been deceiving the entire world.”

Benjamin Jackson III, 10, walks past a mural depicting George Floyd in the Watts neighborhood of Los Angeles in 2020.

Research dating back two years shows the pitfalls with Facebook’s strategy to contain hate speech.

Facebook relies on a set of rules called "Community Standards” to guide decisions about what constitutes hate speech. These standards are enforced by human moderators, but mostly Facebook depends on automated tools to screen posts.

Facebook uses artificial intelligence to scan for content that could violate its rules, then either removes the posts or decreases their visibility on the platform and forwards them to human moderators for review.
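
As a rough illustration of that flow (a hypothetical sketch, not Facebook’s actual system), the pipeline amounts to a classifier score driving one of three outcomes: automatic removal, demotion plus a human-review queue, or no action. The classifier, thresholds and field names below are invented for illustration.

```python
# Hypothetical sketch of the automated moderation flow described above.
# A model scores each post; the score decides whether the post is removed,
# demoted and forwarded to human reviewers, or left alone.
# The classifier, thresholds and field names are invented, not Facebook's.

REMOVE_THRESHOLD = 0.95   # near-certain violations are deleted automatically
REVIEW_THRESHOLD = 0.60   # likely violations are demoted and queued for humans

def score_post(text: str) -> float:
    """Toy stand-in for a trained hate-speech classifier returning P(violation)."""
    flagged_terms = {"example_slur"}              # placeholder vocabulary
    return 0.99 if flagged_terms & set(text.lower().split()) else 0.05

def moderate(post: dict) -> str:
    p = score_post(post["text"])
    if p >= REMOVE_THRESHOLD:
        return "removed"
    if p >= REVIEW_THRESHOLD:
        post["feed_rank_multiplier"] = 0.1        # reduce visibility in feeds
        post["needs_human_review"] = True         # forward to moderators
        return "demoted_and_queued"
    return "no_action"

print(moderate({"text": "a harmless status update"}))   # -> no_action
```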

In 2019, a Facebook researcher estimated that automated moderation tools removed posts that generated just 2% of the views of hate speech that violated Facebook’s rules.

The reasons were many. Facebook’s standards for what qualifies as hate speech were complicated and difficult to apply consistently. The policies also differed from country to country. And the company’s automated tools often missed hate speech or returned false positives.

“Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term,” the researcher wrote.

A more recent Facebook tally in March estimated that automated tools removed posts that generated 3% to 5% of views of hate speech and 0.6% of content that violated Facebook’s rules against violence and incitement, according to another research report.

Facebook says users are seeing less hate speech

Facebook says the percentages cited in internal documents refer to hate speech removed using automated tools and do not include other ways the company limits how much hate speech users see, including pushing harmful content lower in news feeds.

“When combating hate speech on Facebook, our goal is to reduce its prevalence, which is the amount of it that people actually see,” Facebook spokesman Andy Stone told USA TODAY.

That volume has shrunk in the past three quarters, from 10 out of every 10,000 views to about five, he said. Facebook is working with an independent auditor to validate those figures.

Facebook says nearly all of the hate speech it takes down is discovered by its automated moderation tools before it is reported by users. This figure, which it calls its proactive detection rate, is almost 98%, the company says.
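
For readers unfamiliar with the two metrics Facebook cites, here is a minimal sketch of the arithmetic, using made-up inputs chosen only to match the figures quoted above; neither the numbers nor the variable names come from Facebook.

```python
# Illustrative arithmetic for the two metrics cited above, with made-up inputs.
# Prevalence: views of hate speech per 10,000 content views.
# Proactive detection rate: share of removed hate speech that automated systems
# found before any user reported it.

total_views = 1_000_000
hate_speech_views = 500
prevalence_per_10k = hate_speech_views / total_views * 10_000
print(prevalence_per_10k)                  # 5.0 -> "about five out of every 10,000 views"

removed_total = 20_000
removed_before_any_user_report = 19_600
proactive_rate = removed_before_any_user_report / removed_total
print(f"{proactive_rate:.0%}")             # 98%
```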

These systems also help Facebook reduce the number of people who see content that likely violates its policies, Stone said.

“This is the most comprehensive effort to remove hate speech of any major consumer technology company and, while we have more work to do, we remain committed to getting this right,” he said.

CEO Mark Zuckerberg defended Facebook's track record on using artificial intelligence to detect hate speech Monday during a quarterly earnings call with analysts.

"This is an important area and there should be scrutiny on it," he said. "But I also think that any honest account of what’s actually going on here should take into account that a huge amount of progress has been made and will continue to be made by a lot of talented people who are working on it."

But a Facebook employee who worked on efforts to reduce violence and incitement during the 2020 presidential election characterized the company’s progress on rooting out hate speech as incremental and “simply dwarfed by the sheer volume of violating content that there is on Facebook.”

“An incremental increase on a very small number is still a very small number,” this person wrote in an exit memo. “The truth is that the problem of inferring the semantic meaning of speech with high precision is not remotely close to solved – just ask Siri. In terms of identifying hate speech, we might just be the very best in the world at it, but the best in the world isn't good enough to find a fraction of it.”

Facebook whistleblower Frances Haugen’s internal documents show that Facebook removes just a fraction of hate speech and threats against Black users.

Even with some of the world’s smartest people working on the problem, the former employee said, it's doubtful Facebook can succeed with its current strategy.

“I'm highly confident that our current approach of grabbing a hundred thousand pieces of content, paying people to label them as Hate or Not Hate, training a classifier, and using it to automatically delete content at 95% precision is just never going to make much of a dent,” the person said.
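
The approach that memo describes, labeling a sample of posts, training a classifier, and auto-deleting only above a high-precision threshold, can be sketched roughly as follows. This is a generic illustration using scikit-learn with placeholder data, not Facebook’s implementation.

```python
# Minimal sketch of the approach the memo describes: collect posts labeled
# "Hate" / "Not Hate", train a text classifier, then pick a score threshold
# that reaches ~95% precision and auto-delete only above it.
# The data and names are placeholders; this is not Facebook's system.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.pipeline import make_pipeline

texts = ["example hateful post"] * 50 + ["example benign post"] * 50   # placeholder corpus
labels = np.array([1] * 50 + [0] * 50)                                 # 1 = Hate, 0 = Not Hate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Choose the lowest threshold whose precision reaches 95% (on held-out data in
# practice; the training set is reused here only to keep the sketch short).
scores = model.predict_proba(texts)[:, 1]
precision, recall, thresholds = precision_recall_curve(labels, scores)
usable = precision[:-1] >= 0.95
threshold = thresholds[usable][0] if usable.any() else 1.0

def should_auto_delete(text: str) -> bool:
    return model.predict_proba([text])[0, 1] >= threshold
```

The memo’s point is that even a classifier tuned this way deletes only the slice of violating posts it can score with near certainty, which is why, by the internal estimates above, most hate speech stays up.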

Facebook is 'making hate worse,' Haugen says

Part of the problem is Facebook's algorithm, which amplifies content based on how much other Facebook users interact with it.

On Monday, Haugen urged British policymakers to rein in Facebook’s use of “engagement-based rankings.” The company’s own research shows the ranking system prioritizes divisive and extremist posts.

“Anger and hate is the easiest way to grow on Facebook,” she said in her testimony before Parliament.

“We didn’t invent hate, we didn’t invent ethnic violence. But that’s not the question. What is Facebook doing to amplify or expand hate? What is it doing to amplify or expand ethnic violence,” Haugen said. “Unquestionably it is making hate worse.”

Some employees blamed the company’s own products for the unchecked spread of hate.

Facebook has “compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform,” one research report said. “The net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral.”
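
A bare-bones sketch of what engagement-based ranking means in practice follows; the interaction weights and field names are invented, not drawn from Facebook’s systems.

```python
# Hypothetical sketch of engagement-based ranking: each post is scored by a
# weighted sum of the interactions it attracts, and the feed is sorted by that
# score, so whatever draws the most reactions, comments and reshares rises to
# the top. The weights and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int
    comments: int
    reshares: int

WEIGHTS = {"reactions": 1.0, "comments": 5.0, "reshares": 15.0}   # hypothetical

def engagement_score(post: Post) -> float:
    return (WEIGHTS["reactions"] * post.reactions
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["reshares"] * post.reshares)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)
```

Because divisive and extreme posts tend to draw outsized comments and reshares, a ranker like this amplifies them without anyone intending it to, which is the dynamic Haugen and the internal research describe.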

Firestorm over Trump's looting and shooting post

Employees also objected to the XCheck program that protects high-profile users like celebrities and politicians even if their posts contain incitements to violence.

An automated system trained to detect whether a Facebook post violates the company’s rules scored Trump’s post 90 out of 100, indicating it was highly likely it did. But XCheck shielded Trump’s account from enforcement, internal Facebook documents show.
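
To make concrete how a cross-check exemption can override a high violation score (a hypothetical sketch of the kind of gate the documents describe, not Facebook’s actual XCheck code):

```python
# Hypothetical sketch of how an XCheck-style exemption can override automated
# enforcement: even when a post's violation score clears the normal threshold,
# accounts on the shielded list are routed to a separate manual process instead
# of being actioned. The threshold and account names are invented.

ENFORCEMENT_THRESHOLD = 80                    # scores on a 0-100 scale, as in the document
SHIELDED_ACCOUNTS = {"high_profile_account"}  # placeholder cross-check allowlist

def enforce(account: str, violation_score: int) -> str:
    if violation_score < ENFORCEMENT_THRESHOLD:
        return "no_action"
    if account in SHIELDED_ACCOUNTS:
        return "escalated_for_manual_review"  # enforcement deferred; post stays up
    return "removed"

print(enforce("high_profile_account", 90))    # -> escalated_for_manual_review
print(enforce("ordinary_account", 90))        # -> removed
```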

Rashad Robinson, president of online racial justice group Color of Change

Heavily reshared on Facebook, Trump's post attracted comments such as “Start shooting these thugs” and calls to use “real bullets.”

After reviewing the post amid strife inside and outside Facebook, Zuckerberg personally made the call to leave it up.

"Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric," he wrote at the time. "But I'm responsible for reacting not just in my personal capacity but as the leader of an institution committed to free expression."

Rashad Robinson, president of Color of Change, an online racial justice group, says he called Zuckerberg to challenge that decision. Zuckerberg dismissed concerns that Trump’s post would whip up vigilantism against the Black community and insisted the post was staying up to warn the public of the threat of military force, Robinson says.

Facebook Chairman and CEO Mark Zuckerberg arrives to testify before Congress in 2019

Zuckerberg’s response “solidified our deep understanding that the people at Facebook are perfectly fine about this happening and they could care less about people being harmed,” Robinson told USA TODAY. “They are willing to put us in harm's way as long as they make more money.”

On May 5, Facebook’s Oversight Board upheld Trump’s suspension. Facebook had banned Trump indefinitely the day after the Jan. 6 attack on the U.S. Capitol. “We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote at the time.

In its ruling, the tribunal of outside experts set up and funded by Facebook with limited oversight powers said Trump’s comments on the day of the Capitol siege “created an environment where a serious risk of violence was possible.”

It also criticized the company’s enforcement practices and recommended that Facebook examine its rules for users who are part of the XCheck program and develop penalties for violators. Facebook declined to adopt that recommendation.

Last week, the Oversight Board rebuked Facebook for not mentioning XCheck when the company briefed the board on its enforcement policies for politicians during the review of Trump’s suspension. The board has launched a review of the program.

Facebook is failing Black community, civil rights leaders say

Encountering hate speech and violent threats on Facebook is a daily occurrence for Tanya Faison, founder of the Black Lives Matter chapter in Sacramento, California.

In August, the chapter’s Facebook page posted an article about a Proud Boys leader being sentenced to five months for burning a Black Lives Matter banner taken from a Black church in Washington during a pro-Trump demonstration.

“Why do I have the feeling if I spit on a MAGA hat I’d get five years?” one person commented on the post.

Someone replied: “Try it. You might get a bullet instead.”

Faison reported the threat to Facebook but says she received an automated reply that the comment did not violate Facebook’s rules.

“Someone literally told another person they would get a bullet in their head. How is this getting deemed as abiding by standards?” she wrote in an email to Facebook reviewed by USA TODAY. The company later removed the comment.

Tanya Faison, in white shirt, confronts police officers during a Black Lives Matter protest. Faison is founder of the Black Lives Matter Sacramento chapter in California.

Faison says Facebook’s hate speech policies and content moderation systems fail the people the company claims it's trying to protect.

“Blatant racism and extreme violence continue to go unchecked while left and right folks are being banned on a regular basis for using the term ‘white people’ or expressing frustration with racist experiences,” she told USA TODAY.

Internal Facebook documents show that the company knows African Americans are among the most active communities on the platform. They "over index on all of our core engagement metrics," the research found. African Americans produce and re-share more content, engage more with Facebook stories and “have more meaningful engagement,” one internal document said.

Yet Black voices speaking out against racism are routinely stifled, and Facebook rarely takes action on racial slurs, violent threats and harassment campaigns targeting Black users, Faison says.

“In 2016, Black Lives Matter Sacramento and the BLM Network went to meet with Facebook executives to discuss the issues that Black folks were experiencing on the Facebook platform, with the belief that we were working together to resolve this huge racist problem,” she said. “Throughout the entire meeting we were only given excuses, and no solutions. We were told it would be something they would work on.”

“Here it is five years later and the racism and hate have only gotten worse,” Faison said. “People are constantly being put in ‘Facebook jail’ for literally defending themselves against the very thing that Facebook should be banning folks for: hate.”

Hate speech is a costly problem for Facebook

Facebook has been struggling with hate speech for years, and having human moderators screen for violating content was a particular challenge in the United States, internal documents show. Those hate reviews accounted for 37.69% of the world total.

Hate was also Facebook’s most expensive and labor-intensive moderation problem, costing the company $2 million a week, according to an internal planning report for the first half of 2019.

Looking to rein in costs, Facebook cut the number of hours that moderators spent reviewing hate speech, the internal documents show.

Stone, the Facebook spokesman, said that the money was shifted to personnel who trained Facebook’s algorithms to identify hate speech and that the overall budget did not change.

At the same time, Facebook began using an algorithm that ignored user reports about content it deemed unlikely to violate its policies.

The process for Facebook users to report hate speech also became more cumbersome, which reduced the number of complaints made, according to the documents.

“We may have moved the needle too far on adding friction,” a researcher in one document said.

Stone said that research helped the company realize it had made reporting hate speech too difficult for Facebook users, so the company reduced the number of steps it requires.

Hate speech is making users miserable, research shows

For users, the flood of hate speech and other objectionable or harmful content is ruining the Facebook experience, according to research the company conducted.

A Facebook research report from 2019 found a major gap between what Facebook says violates its rules and what users find harmful, objectionable or problematic. That gap, and Facebook’s inability or unwillingness to address such content, is leading to more negative experiences for its users, which the company worries will drive them away, the research found.

The study examined 800 pieces of content that 60 participants submitted over two weeks because they believed the material was harmful, problematic or objectionable.

It found “divisive and depressing content is pervasive” and affects users’ view of the world and Facebook over time.

Three-quarters of the users said they were not happy with the content they see on Facebook. More than one-quarter of participants reported seeing hate speech multiple times a day, and Facebook found it was “more frequent in users’ feeds.”

“Understanding this sort of content requires context that ML (machine learning) cannot provide, and puts us at a disadvantage and encourages users to get more creative with their hate,” the company found.

The pattern of comments turning hateful “suggests we should investigate how our downstream comment models might be encouraging hate,” the report said.

Contributing: Rachel Axon

This article originally appeared on USA TODAY: Facebook Papers: Company losing war against racist hate and violence