Social media platforms are moderating Covid-19 content and should learn the lessons for elections

Social media platforms have acted quickly to limit the spread of false information about Covid-19. And they have gone further, using their algorithms to prioritise content from expert sources.

So if they can do it for Covid-19, why can’t they do it in other areas such as elections? Why do platforms such as Facebook still insist on ‘letting users judge truth for themselves’?

When it comes to elections and political issues, Facebook hands control to third-party fact-checking organisations, which have limited powers and even more limited budgets. They cannot, for instance, get involved in statements made by elected leaders or candidates. So if President Trump were to say something false on the platform, it would have to stand, with the public left to decide for themselves whether it is true. Facebook claims that political statements often cannot be judged 100% true or 100% false, and so it would be wrong of the company to try to adjudicate.

And yet when it comes to the virus, there are many supposed facts fighting for attention. Key questions include whether you can catch the disease more than once, how much good face masks do, and whether children can catch or spread the disease. There have been scientists on either side of each of these debates, with opinion shifting over time. Yet the platforms are promoting one side of the argument over the other in each case, based on what bodies such as the WHO and the CDC are saying. I cannot be the only one who sees a double standard in the way the virus is treated compared with, say, climate change.

Robyn Caplan at the Brookings Institution makes the point that there is a difference between moderation and mediation. She suggests that it is not just a matter of taking down fake news, but of trying to establish the truth in a complicated scientific world.

Most of us would accept that, when it comes to Covid-19, there is a difference in the knowledge held by Chris Whitty on the one hand and Joe Bloggs down the street on the other, and so it is right that the platforms use their algorithms to make sure we are more likely to see the former than the latter. But with election-related material they do not. Indeed, the algorithms go out of their way to reinforce prejudice, promoting content from people like us and people we agree with, and reducing the likelihood that we see a fact check or an opposing point of view. There is little in the way of debate in your Facebook feed.

Just as with electoral manifestos, there are areas of genuine debate about the virus that social media platforms do not, and should not, get involved with. How lockdown should be implemented and then lifted is a political decision, and we can debate it online as much as we do in real life. And even where the science is more settled, it is not agreed by 100% of scientists: you only have to compare the outputs of the government’s SAGE group with those of its unofficial alternative. Yet the platforms are prepared to wade into this debate because they recognise it is their public responsibility to do so.

The UK government has delayed its Online Harms Bill yet again, and today the minister refused to deny that it could be 2023 before it is enacted. With that much time available, surely it is right that the government at least examines how it might require the platforms to take the lessons they have learned from Covid-19 and apply them to areas such as elections too.