COVID-19 shows tech companies can curb toxic information. They should do more.

CPJ
Apr 23, 2020

By Robert Mahoney, Deputy Executive Director, Committee to Protect Journalists

Big tech helped create the fever swamp of lies and rumors we have to wade through every day to get our news. The coronavirus pandemic is a chance for them to start draining it.

Facebook, Google, and Twitter, among others, grasped early on that amid what the World Health Organization has dubbed the ‘infodemic’ of misinformation accompanying COVID-19, they needed to be seen actively promoting accurate science and public health information about the disease.

Their stock with the public and lawmakers in the United States and Europe was already low after a series of scandals including Cambridge Analytica and reported Russian interference in elections.

They acted quickly, creating coronavirus information pages and taking down false information about the pandemic, such as conspiracy theories, quack remedies, and other content that could endanger people’s health. They refused ads from sellers profiteering from shortages.

In a rare move, Facebook, Twitter, Google/YouTube, Microsoft/LinkedIn and Reddit issued a joint statement saying: “We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world.”

Whether out of enlightened self-interest or civic duty, the companies stepped up ‘content moderation’ — the enforcement of rules, or what Facebook calls ‘community standards,’ about who can post what information. But their actions undercut two of the arguments that social media companies have long used to justify their failure to take full responsibility for regulating content on their platforms — that it was technically difficult and that it was incompatible with their commitments to free speech.

Twitter made clear that when the free speech of a sitting president collides with the threat of misinformation, the company is quite capable of taking appropriate action. On March 29 it deleted two tweets by Brazilian President Jair Bolsonaro. The tweets linked to a video that broke the company’s rules on publishing content that “goes directly against guidance from authoritative sources of global and local public health information.” Twitter had earlier taken down a tweet by Venezuelan leader Nicolas Maduro touting a bogus coronavirus ‘antidote.’

Facebook too pulled the video of Bolsonaro, which endorsed the antimalarial drug hydroxychloroquine as a coronavirus treatment.

“We remove misinformation that could lead to imminent physical harm regardless of who posts it,” a Facebook spokesperson told me.

So, I asked all the platforms whether in theory they would remove a post by President Donald Trump that touted, for example, an unproven remedy for the virus that could lead to harm. I received no answer.

Why do public figures matter? They may not be responsible for most of the pollution of our information environment. But research by the Reuters Institute for the Study of Journalism at Oxford University shows that people pay attention to posts by politicians and celebrities.

The institute analyzed 225 pieces of misinformation about COVID-19 published in English between January and March and concluded: “…misinformation from politicians, celebrities, and other prominent public figures made up just 20% of the claims in our sample but accounted for 69% of total social media engagement.”

This influence is the key to addressing the technical difficulty of monitoring content.

It’s true that human content moderation at scale, and in time to mitigate harm, is impossible. Billions of pieces of content are uploaded globally each day. The companies already use tens of thousands of contracted moderators to take down illegal or infringing posts such as terrorist propaganda, child pornography, and copyright violations. And still they can’t keep up: each mass shooting or live-streamed terrorist attack brings calls from governments for even more and faster content takedowns and account blocking. The investigative site Bellingcat has shown how would-be poisoners of our information well can skirt company moderation and bans.

Journalists are left trying to counter the lies, half-truths, and smears that the platforms enable, holding the most egregious examples up to the light. Initiatives like the International Fact-Checking Network have sprung up. News organizations have partnered with platforms to check posts and verify sources.

But if the tech companies focused on the accuracy of the most influential accounts on their platforms, policing the rest of the noise would be a far smaller challenge.

The question is one of motivation. Engagement is core to the social media business model that grabs our personal data and attention to sell advertising.

Facebook chief Mark Zuckerberg has said he will not ‘police’ political speech, although he does allow paid political advertising. Twitter too has a special policy for world leaders, but does not run political ads.

Big tech has given politicians a huge megaphone. Ripping it from their hands is not the solution. Content takedowns and account blocking are blunt instruments. Many reporters and documenters of human rights abuses have seen their work disappear that way.

But companies need to hold elected public officials to a high standard and be transparent about it. The public, from whom these quasi-monopolies make their huge profits, needs to see that platforms are consistently implementing moderation policies grounded in human rights.

Instead, our information and online behavior are fed into constantly evolving algorithms that prioritize what we see and how we see it. And moderation by artificial intelligence is fraught with its own problems, as the companies have acknowledged during this crisis. As human moderators were sent home or fell sick during the pandemic and machines took over their work, companies warned users to be prepared for ‘mistakes.’

All these problems existed before COVID-19. It’s just that the collision of the greatest public health crisis in a century with the most powerful, largely unregulated communication platforms in human history has put those platforms at the center of our lives. Locked down in our homes, watching friends and family fall sick and even die, we are ever more reliant on big tech for communication, information and entertainment.

Algorithms are a black box. If tech platforms say they are adjusting them to promote accurate information and clean up the pollution they’ve caused, we need to see it. We can’t take it on trust. Our health depends on it.


CPJ

The Committee to Protect Journalists is an independent, nonprofit organization that promotes press freedom worldwide. Contact: info@cpj.org