More than 50 million pieces of content were given warning labels on Facebook for being misleading in relation to coronavirus, the social network has revealed.
Publishing its latest Community Standards Enforcement Report, Facebook said that since 1 March, it has removed more than 2.5 million pieces of content linked to the sale of medical items such as masks and Covid-19 test kits.
The social network also revealed its Covid-19 Information Centre, which shows health and virus information from official sources, had now directed more than two billion people to resources from health authorities.
Social media platforms, including Facebook, have been repeatedly criticised over the volume of disinformation and harmful content about the Covid-19 outbreak spreading online.
A number of charities and online safety organisations have warned that with people – particularly children – spending more time online during lockdown, more stringent monitoring of online platforms is needed.
In its Enforcement Report, Facebook said its detection technology was now finding around 90% of the content the platform removes before it is reported to the site.
The millions of pieces of content given warning labels were based on around 7,500 articles which had been analysed by fact-checking groups working with the site.
Doubled
The social network said it had doubled the amount of drug-related content it had removed compared to its previous report, taking down more than 8.8 million pieces of violating content.
On Instagram, which is also owned by Facebook, the amount of suicide and self-injury content it took action against increased by 40%, the company said.
As part of the announcements, Instagram confirmed several new features designed to combat bullying and unwanted contact on the platform.
Users will be able to choose who can tag and mention them in posts, delete negative comments in bulk, and block or restrict multiple accounts that post such comments at once.
A feature will also enable users to pin a select number of positive comments to the top of the comments section of a post.
Alongside its report, Facebook said it was improving some of the technology it uses to detect violating content, including how it identifies images which contain Covid-19 disinformation and are near-exact duplicates of previously identified images.
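Facebook has not published the details of that system, but near-duplicate image detection is commonly built on perceptual hashing, where visually similar images produce similar fingerprints. The following is a minimal illustrative sketch only, assuming Python with the Pillow library; the function names, file names and matching threshold are hypothetical choices for demonstration, not Facebook's actual method.

```python
# Illustrative sketch of near-duplicate image detection via an
# average hash ("aHash"). Not Facebook's system; hash_image,
# hamming_distance, the file names and the threshold are all
# hypothetical.
from PIL import Image

def hash_image(path, size=8):
    """Downscale to a tiny greyscale grid and threshold each pixel
    against the mean, producing a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests the images
    are near-exact duplicates of one another."""
    return bin(a ^ b).count("1")

# Usage: compare a new upload against a previously labelled image.
# A low threshold (e.g. at most 5 of 64 bits) flags likely duplicates.
known = hash_image("labelled_misinfo.jpg")
candidate = hash_image("new_upload.jpg")
if hamming_distance(known, candidate) <= 5:
    print("Near-duplicate of previously labelled image")
```

Because small edits such as cropping, recompression or colour shifts barely change the fingerprint, an approach along these lines lets previously identified images be matched again without storing or comparing the full pictures.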
The company said it had also developed systems to stop the sale of products linked to the virus.
Writing in a blog post, Facebook chief technology officer Mike Schroepfer said the firm’s reliance on technology to review content had increased during the coronavirus pandemic.
“We’ve spent the last few years building tools, teams and technologies to help protect elections from interference, prevent misinformation from spreading on our apps and keep people safe from harmful content,” he said.
“So, when the Covid-19 crisis emerged we had the tools and processes in place to move quickly and we were able to continue finding and removing content that violates our policies.
“When we temporarily sent our content reviewers home due to the Covid-19 pandemic, we increased our reliance on these automated systems and prioritised high-severity content for our teams to review in order to continue to keep our apps safe during this time.”