Twitter fact-checks Trump: labels postal voting claims as false

Twitter has taken both a huge and a tiny step in deciding to tag President Trump’s tweets about postal voting in California with a link to a fact-checking page. It is huge because this is the first time that any social media platform has even come close to censoring the President when he makes false statements and because it appears to go against Twitter’s own ‘free speech for politicians’ policy.


But it is also tiny because it is merely a link to another page, a tag applied many hours after the original tweets. And as the Guardian and others have shown, the link doesn’t appear in some cases if you reproduce the tweet elsewhere.

That Twitter should choose to make this decision for posts about elections is not that surprising. The company has singled out attempts at “manipulating or interfering in elections or other civic processes” for special attention. That said, the platform failed to act when President Trump made false statements claiming that Michigan would be sending a ballot to every voter by mail (the state is merely sending a postal vote application – something done by many Republican states). It might be cynical to see significance in the company acting only once the tweets concerned its home state of California, but there you are.

The tweets in question are a repetition of the sort of thing the President said in the Michigan case – that the state would be sending ballot papers to anyone living in the state, even if they are illegal immigrants (that bit is implied), and that state officials would then tell people how to vote. Each aspect is clearly false. The linked fact-checking page is pretty good – it aggregates a range of journalists and others explaining why the President’s statements are not correct. How many of the President’s followers will actually read it, however, remains to be seen.

Predictably, the President is claiming that this action has infringed his right to free speech, and that he ‘will not allow it to happen’, despite platforms having the right under federal law to decide how to moderate what appears. His campaign manager Brad Parscale claimed that this justified the campaign’s decision to end Trump’s advertising on Twitter – despite the fact that it was the platform itself which ended all political advertising in 2019, a decision the Trump campaign complained at the time was biased.

My own view is that Twitter would not have taken this decision as a one-off. The company will generate a huge backlash which will only be justified if it really intends to push on and apply a similar form of fact-checking to future statements by Trump and other candidates. Whether it will limit its actions to tweets about elections or spread the net further will be closely watched. In the meantime, it is also a shot across the bows for Facebook, which has refused to allow its third-party fact-checkers to critique the posts of politicians and other world leaders.

Social media platforms are moderating Covid-19 content and should learn lessons for elections

Social media platforms have been very quick to act to limit the spread of false information about Covid-19. And they have gone further by prioritising content provided by expert sources within their algorithms.

So if they can do it for Covid-19, why can’t they do it in other areas such as elections? Why do platforms such as Facebook still insist on ‘letting users judge truth for themselves’?

When it comes to elections and political issues, Facebook hands over control to third-party fact-checking organisations. These have limited powers and even smaller budgets. They can’t, for instance, get involved in the statements made by elected leaders or candidates. So if President Trump were to say something false on the platform, it would have to stand, with the public left to decide for themselves whether it is true or not. Facebook claims that political statements often cannot be judged to be 100% true or 100% false and so it would be wrong of the company to try to adjudicate.

And yet when it comes to the virus, there are many supposed facts fighting for attention. Issues such as whether you can catch the disease more than once, how much good facemasks do and whether children can catch or spread the disease are key. And there have been scientists on either side of the debate on each of these, with opinion shifting over time. Yet the platforms are promoting one side of the argument over the other in each of these cases based on what officials such as the WHO and CDC are saying. It cannot only be me who sees a double standard in the way the virus is treated compared with climate change.

Robyn Caplan of the Brookings Institution makes the point that there is a difference between moderation and mediation. She suggests that it is not just a matter of taking down the fake news, but of trying to understand the truth in a complicated scientific world.

Most of us would accept that, when it comes to Covid-19, there is a difference in the knowledge held by Chris Whitty on the one hand and Joe Bloggs down the street on the other, and so it is right that the platforms use their algorithms to make sure we are more likely to see the former than the latter. But with election-related material they do not. Indeed, the algorithms go out of their way to reinforce prejudice by promoting content from people like us and people we agree with, and reducing the likelihood of our seeing any fact check or an opposing point of view. There is little in the way of debate in your Facebook feed.

Just as with electoral manifestos, there are areas of genuine debate about the virus that social media platforms do not, and should not, get involved with. How lockdown should be implemented and then lifted is a political decision. We can debate it online as much as we do in real life. And even where the science is more settled, it is not agreed by 100% of scientists – you only have to look at the differing outcomes from the government’s SAGE group and its unofficial alternative. Yet the platforms are prepared to wade into this debate because they know it is their public responsibility to do so.

The UK government has delayed its Online Harms Bill again and today the minister refused to deny that it could be 2023 before it is enacted. With that much time available, surely it is right that the government here at least looks at the opportunities to require the platforms to take the lessons they have learned from Covid-19 and apply them to areas such as elections too.

Facebook names its first board members – they have a lot to do

Facebook has picked the first members of its new oversight board which will guide company policy on issues to do with free speech. The line-up so far is impressive (but you would expect that from Facebook). The question is whether this group will be able to wield enough power to change company policy.

Among the four co-chairs of the group is Helle Thorning-Schmidt, the former prime minister of Denmark. She is joined by two US law professors, Jamal Greene and Michael McConnell, and Catalina Botero Marino, a former special rapporteur for freedom of expression at the Organization of American States. Nobel peace laureate Tawakkol Karman and Alan Rusbridger, the former Guardian editor-in-chief, are among the 16 ordinary board members so far selected and the total board will expand to 40 names over time.

“Our roster includes three former judges, six former or current journalists, and other leaders with backgrounds from civil society, academia and public service,” said Thomas Hughes, the director of the oversight board. “They represent a diverse collection of backgrounds and beliefs, but all have a deep commitment to advancing human rights and freedom of expression.”

So what is the board? Well, its main aim is to set Facebook policy and act as the final arbiter in disputes about what should and should not be allowed on the platform. This is an area where clear policy has been decidedly lacking until now, but one which has become more and more important.

Take, for example, Facebook’s policy on political speech and adverts. I’ve written a lot about this in the past. I have criticised the company for failing to have a vision as to how it believes politicians (and issue campaigners) should be able to act, a rationale for why they should be treated differently from others and a robust fact-checking system which can guide users to understand why what they are being told might not be true. As a result, Facebook has become out of line with other platforms and often appears to be making up policy on the hoof.

Facebook currently operates a single world-wide policy on political speech. They allow politicians to say what they want. And they allow political adverts to pretty much do the same. In contrast, ordinary advertisers cannot say things that are untrue and even organic posts can be subject to fact-checking. Given the predominance that the platform holds in the marketplace in many countries, this can allow politicians free rein to lie to the electorate with little chance that alternative points of view – or the truth – will get an airing.

Facebook has also failed to take account of the different election laws that apply in countries around the world. Many of these are out-dated, but the platform hasn’t really grasped the chance to work with legislatures to update laws and make sure that Facebook policies in the territory are in compliance.

So will the new board deal with these issues? We will have to wait and see.

Facebook suggests that Iran targeted Scottish Independence referendum with fake online accounts

Facebook has suggested that Iran was engaged in attempted online election and political manipulation as far back as 2011 and tried to influence the result of the 2014 Scottish independence referendum.

A report by Graphika – which has been allowed access to Facebook’s data – says that there were thousands of accounts and that these promoted Ron Paul’s presidential bid and the Occupy movement as well as a Yes vote in the Scottish referendum. Graphika suggests that such efforts may have been designed more to test the water than as significant operations, but that fake accounts linked to Iran’s state broadcaster were promoting messages favourable to that state. Arabic-language efforts aimed at Iran’s neighbours were much bigger operations.

Graphika makes clear that many of the posts were entreaties to follow Islamic teachings and amplify state messaging, as well as attempts at audience building. It also suggests that for a period some of the fake accounts promoted the Arabic-language version of the Russian broadcaster Sputnik. But there were activities related to elections:

“This activity focused briefly on three main topics: the Republican primaries of early 2012, the Occupy movement of the same period, and the Scottish independence referendum of 2014. In each case, the network used a combination of fake accounts, pages, and groups to push its messaging, with the fake accounts sharing and promoting the pages and groups. Rather than the website-heavy content of later efforts, this was much more based on visuals, particularly cartoons. None of these posts yielded major viral impact, measured in likes or shares, and some of the pages were abandoned after only a few days… Nevertheless, Facebook’s revelation is of historical interest: it provides a confirmed data point on attempted foreign interference in Western democratic exercises as far back as 2012, a full electoral cycle before the Russian interference of 2016.”

Specifically talking about the Scottish Independence referendum, Graphika says:

“None of these posts achieved viral impact, measured in the number of likes, shares, or comments. Typical posts scored a few dozen reactions, sometimes a little over 100. This is not negligible, but it is a long way away from being an effort on the sort of scale that might have had an impact on the referendum. In March 2014, six months before the Scottish referendum, the cartoons page stopped posting, for unknown reasons.”

This report is interesting for two main reasons. First in that it shows that Iran was apparently active in this field before Russia’s Internet Research Agency started. Second, because it confirms the sorts of operations that could be undertaken, although it appears that Iran decided that attempts to use cartoons to influence western elections were not likely to be successful.

WhatsApp introduces virtual social distancing in attempt to limit fake Covid-19 news

The social media platform WhatsApp has taken steps to try to limit the spread of disinformation about the coronavirus Covid-19. The move will also impact other forms of ‘viral’ messages, including those connected to elections.

The change means that a user can only send a ‘frequently forwarded message’ to one of their contact groups or conversations at a time. The previous limit differed depending on the country in which you were based, with a ceiling of five in India but a much higher one in most other territories. In this context, a frequently forwarded message is one that has been shared more than five times already.
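The rule described above can be sketched as a simple check. This is purely an illustration of the logic as reported, not WhatsApp’s actual code: the function name, field names and the default ceiling are assumptions for the sake of the example.

```python
# Hypothetical sketch of the 'frequently forwarded' forwarding limit.
# A message forwarded more than five times is treated as frequently
# forwarded and may only be sent on to one chat at a time; other
# messages keep the pre-existing per-country ceiling (five in India,
# higher elsewhere).

FREQUENTLY_FORWARDED_THRESHOLD = 5  # forwards before the label applies

def max_forward_targets(forward_count: int, country_ceiling: int = 5) -> int:
    """Return how many chats a message may be forwarded to at once."""
    if forward_count > FREQUENTLY_FORWARDED_THRESHOLD:
        return 1  # frequently forwarded: one chat at a time
    return country_ceiling

# A message already forwarded seven times can go to only one chat;
# a message forwarded three times keeps the ordinary ceiling.
```

The design point is that the limit attaches to the message’s forwarding history rather than to the user, which is what makes it a brake on virality specifically.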

There have already been suggestions that this policy should be taken up by other platforms including Facebook – which could limit the use of the Share button.

This change, if not revoked once the Covid-19 crisis is over, will also have a massive impact on elections. In countries where WhatsApp is one of the main social media platforms – such as Brazil or India – the spread of fake political news has been particularly difficult to contain, as WhatsApp groups are closed, there is no register of popular posts, and there is no opportunity for fact-checking. Limiting sharing via the platform will significantly diminish the ability to spread both truthful and false information.

However, the move has been criticised by mainstream media outlets. Jamie Angus, Director of the BBC World Service group, tweeted that the idea was counterproductive and called for trusted news sources to be allowed direct access to promote content about the virus directly onto the platform.


And whilst media like the BBC do, of course, have their own platforms, reaching other audiences via social media is seen by many as significant during a time of national lockdown.

Angus’ comments, however, reflect a UK situation where broadcasters like the BBC are widely trusted by the public. The platforms, operating on a global basis, might be wary of handing over any sort of editorial control to state-run media in authoritarian countries, or of deciding what constitutes a trusted media source in deeply polarised markets such as the USA or Ukraine.

Reading List – 15th March 2020

If you have never heard of the Open Skies Treaty (or fully understood what it means), the possibility that the USA might withdraw is a good excuse to read this short article which explains the treaty and sets out why it would be a mistake for President Trump to undermine it.


Elections will (coronavirus permitting) shortly take place in North Macedonia and Serbia and are also scheduled for Montenegro in the autumn. Just a month before the first of these, Facebook has extended its political adverts policy to the region.


Rather than indicating a definite course of action, amendments to the proposed new Russian Constitution suggest that President Putin is keeping his options open – and keeping oligarchs and the siloviki on their toes.


Abysmally low turnout, a six-month counting process, rival candidates refusing to accept the result and each declaring themselves the winner. This is the reality of the Afghan presidential election, where the US has intervened in each previous contest to declare a winner.


Reading List – 2nd March 2020

The Guardian reports on developments in the East African country where power has been dominated by the clan system and where minorities and women have been excluded.


The possible impact of the coronavirus on the US election has been raised in a number of quarters. In an op-ed on Wired, Jon Stone suggests that the option of an all-mail ballot in November is not that easy to achieve as US elections are managed by states and counties rather than federally.

However, the very fact that people are thinking about the possible impact and how it can be mitigated this far out from the November polls is encouraging.


A court in the USA has ruled that privately owned social media companies such as Facebook and Twitter are not bound by the First Amendment right to freedom of speech. In a case brought by conservative groups, the court said that the companies have the right to censor material they do not like. I would guess that this one will go to the Supreme Court.


Most Americans don’t have confidence in the ability of tech platforms such as Facebook and Twitter to defeat attempts to interfere in elections, according to Pew Research. But the vast majority also think that it is the duty of these companies to do so.


This is fairly techy in the detail, but this article exposes the security flaws in the sorts of electronic voting machines which are common in the US. There are also a couple of videos where experts explain how they might go about hacking individual machines or the election server.