

BBC News

FACEBOOK TO PAY $52M TO CONTENT MODERATORS OVER PTSD

Published
13 May 2020

Image caption: Facebook moderators working at its offices in Austin, Texas (Getty Images)

Facebook has agreed to pay $52m (£42m) to content moderators as compensation for
mental health issues developed on the job.

The agreement settles a class-action lawsuit brought by the moderators, as first
reported by The Verge.

Facebook said it is using both humans and artificial intelligence (AI) to detect
posts that violate policies.

The social media giant has increased its use of AI to remove harmful content
during the coronavirus lockdown.

In 2018, a group of US moderators hired by third-party companies to review
content sued Facebook for failing to create a safe work environment.

The moderators alleged that reviewing violent and graphic images - sometimes of
rape and suicide - for the social network had led to them developing
post-traumatic stress disorder (PTSD).



The agreement, filed in court in California on Friday, settles that lawsuit. A
judge is expected to sign off on the deal later this year.

 * Facebook and YouTube moderators sign PTSD disclosure
 * Coronavirus: Far-right spreads coronavirus 'infodemic' on Facebook
 * Facebook and Google extend working from home to end of year

The agreement covers moderators who worked in California, Arizona, Texas and
Florida from 2015 until now. Each moderator, both former and current, will
receive a minimum of $1,000, as well as additional funds if they are diagnosed
with PTSD or related conditions. Around 11,250 moderators are eligible for
compensation.

Facebook also agreed to roll out new tools designed to reduce the impact of
viewing the harmful content.

A spokesperson for Facebook said the company was "committed to providing
[moderators] additional support through this settlement and in the future".


MODERATING THE LOCKDOWN

In January, Accenture, a third-party contractor that hires moderators for social
media platforms including Facebook and YouTube, began asking workers to sign a
form acknowledging they understood the job could lead to PTSD.

The agreement comes as Facebook looks for ways to bring more of its human
reviewers back online after the coronavirus lockdown ends.


Image caption: Facebook has increased its use of AI to detect misleading information about the coronavirus outbreak (NurPhoto)

The company said many human reviewers were working from home, but some types of
content could not be safely reviewed in that setting. Moderators who have not
been able to review content from home have been paid, but are not working.

To offset the loss of human reviewers, Facebook boosted its use of AI to
moderate the content instead.

In its fifth Community Standards Enforcement Report released on Tuesday, the
social media giant said AI helped to proactively detect 90% of hate speech
content.

AI has also been crucial in detecting harmful posts about the coronavirus.
Facebook said in April that it was able to put warning labels on around 50
million posts that contained misleading information on the pandemic.

However, the technology still struggles at times to recognise harmful content in images and video. Human moderators can often better detect nuance or wordplay in memes or video clips, allowing them to spot harmful content more easily.

Facebook says it is now developing a neural network called SimSearchNet that can
detect nearly identical copies of images that contain false or misleading
information.



According to the social media giant's chief technology officer Mike Schroepfer, this will allow human reviewers to focus on "new instances of misinformation", rather than looking at "near-identical variations" of images they have already reviewed.
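The general idea behind near-duplicate image detection can be illustrated in spirit (SimSearchNet itself is a proprietary neural network, so this is only an analogue) with a classic "average hash": reduce an image to a tiny grayscale grid, fingerprint it as bits above or below the mean brightness, and treat images whose fingerprints differ in only a few bits as near-identical. The 8x8 toy "image" below is invented for illustration.

```python
# Illustrative sketch only: SimSearchNet is a proprietary neural network.
# This shows the general principle of near-duplicate detection with a
# simple "average hash" over an 8x8 grayscale grid (values 0-255).

def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A toy 8x8 "image" and a near-identical variant (one pixel brightened),
# standing in for a re-compressed or lightly edited copy.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
variant = [row[:] for row in original]
variant[0][0] += 3  # tiny edit, like a re-crop or compression artifact

h1, h2 = average_hash(original), average_hash(variant)
# Near-identical images land within a small Hamming distance of each
# other, so one human decision can be propagated to all close matches.
print(hamming(h1, h2) <= 5)  # → True
```

A system built this way lets a single moderator ruling ("this image is misinformation") be applied automatically to every close variant, which is the workload reduction Schroepfer describes.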








MORE ON THIS STORY

 * Facebook's 'supreme court' members announced (6 May 2020)
 * Social media moderators asked to sign PTSD forms (25 January 2020)

















© 2022 BBC. The BBC is not responsible for the content of external sites. Read
about our approach to external linking.