Facebook created a rule book for moderators to use when censoring the posts of its nearly 2bn users, responding to global criticism that it has failed to prevent the circulation of violent and sexual imagery, hate speech and other controversial material, The Guardian reported.
Facebook relies on thousands of human moderators to review potentially offensive posts, including videos of death, violence, sexual material, abuse and threatening speech.
The Guardian said it obtained copies of thousands of slides and pictures that Facebook shared with moderators last year as guidelines. Many moderators reportedly feel overwhelmed by the volume of posts they must review and confused by apparent contradictions in Facebook’s policies.
The moderators have about 10 seconds to decide whether to remove material from the site, according to The Guardian.
According to The Guardian, Facebook’s policies include:
- Videos of violent death may be allowed if they raise awareness of issues such as mental health;
- Images of child abuse are removed if they are shared with “sadism and celebration”; otherwise they can remain on the site, marked as “disturbing”;
- Imagery of animal abuse is allowed, but may need to be classified as “disturbing”;
- Violent threats against political figures such as US President Donald Trump, or against members of religious groups, are removed, but less specific language, such as “let’s beat up fat kids” or “kick a person with red hair”, can remain on Facebook because it’s not considered credible.
Facebook told The Guardian that it’s difficult to reach consensus on what to allow for a service with nearly 2bn users, since people have different views on what’s appropriate to share.
Earlier this month, Facebook said it was hiring an additional 3 000 people to monitor images on the site. That came after the company faced criticism when a murder and suicide were broadcast on the social network. — (c) 2017 Bloomberg LP