Twitter has announced it will warn users when a tweet contains disputed or misleading information about the coronavirus.
The new rule is the latest in a wave of stricter policies technology companies are rolling out to confront an outbreak of virus-related misinformation on their sites.
Twitter will take a case-by-case approach to deciding which tweets are labelled and will remove only those posts that are harmful, company leaders said. Some tweets will run with a label underneath that directs users to a link with additional information about Covid-19.
Others might be covered entirely by a warning label alerting users that “some or all of the content shared in this tweet conflict with guidance from public health experts regarding Covid-19”. The labels will be available in roughly 40 languages, and the warning could apply to past tweets.
Twitter will not directly fact-check or call tweets false on the site, said Nick Pickles, the company’s global senior strategist for public policy. The warning labels might direct users to curated tweets, public health websites or news articles.
“People don’t want us to play the role of deciding for them what’s true and what’s not true, but they do want people to play a much stronger role providing context,” Pickles said.
The fine line is similar to the one walked by Facebook, which has said it does not want to be an “arbiter of the truth” but has arranged for third-party fact checkers to review falsehoods on its site. One example of a disputed tweet that might be labelled is a claim about the origin of Covid-19, which remains unknown. Conspiracy theories about how the virus started, and whether it is man-made, have swirled around social media for months.
Twitter will continue to take down Covid-19 tweets that pose a threat to the safety of a person or group, along with attempts to incite mass violence or widespread civil unrest. The company has already been removing bogus coronavirus cures and claims that social distancing or face masks do not curb the virus’s spread.