Reading List – 10th September 2020

After a few weeks off, here is some catch-up reading on key issues:

What Russia Really Has in Mind for Belarus – And why Western leaders must act

A look at one possible plan that President Putin has for his troubled neighbour and how it might unfold. The authors argue that rather than an army of Little Green Men, there are a load of Little Grey Men inside Belarus gradually moving it closer and closer to Russia.

Democracy After Coronavirus: Five challenges for the 2020s

This is a long read – a great study of the difference the pandemic has made to elections and to democracy more widely. The paper argues that democracies face five main challenges after coronavirus: protecting the safety and integrity of elections, finding the right place for expertise, coping with resurgent populism and nationalism, countering homegrown and foreign disinformation, and defending the democratic model.

Lessons learned with social media monitoring

Another long read – this time a look at how different domestic election observers have tried to tackle the task of monitoring what is being said and by whom on social media. Regular readers will know this is a keen area of interest of mine and this paper sums up the great work done by the various NGOs as well as the frustrations they face.

Facebook announces Voting Information Center in effort to register 4 million new voters

Facebook have launched a big push to register more people for this autumn’s US elections. Among the tools they have created is a ‘Voting Information Center’. From this summer, anyone logging into Facebook, Messenger or Instagram will see a banner advertising the function. Facebook claim they helped 2 million people to register in 2016 and in 2018, and they want to double that number this time.

The Information Center will have information about registering to vote as well as absentee or postal votes, depending on the particular rules of the state the user lives in.

In addition, Facebook have finalised their opt-out system for political adverts. Users will be able to toggle a switch to block all political and issue-based adverts – anything that has a ‘paid for by…’ label. That’s fine, but it is a blunt instrument: there is no ability to block only certain adverts. And it will be interesting to find out (if they will tell us) how many users take up this feature. The good news is that this feature will slowly roll out across other countries that have an advert register.

There are also a couple of small tweaks. The ‘paid for’ disclaimer that indicated a post was an advert used to disappear when an ad was shared; now that label will stay on the post. Finally, the platform is tracking the amount spent by political contest so that users can better identify what money is being spent where, not just by whom. Hopefully that feature will roll across to countries where campaign finance is more tightly regulated as soon as possible.

So, as you might expect, I have a number of concerns about this scheme, even if the overall proposal is very welcome:

  • First, however big and bold they are making it seem, this is still not the grand vision that Facebook has been lacking for so long when it comes to political posts, adverts and electoral interference. Until we know what their long-term game plan is, they will continue to fiddle around the edges.
  • Second, once again we are looking at a big initiative rolled out for a US election. There is absolutely nothing to indicate when such provisions might be made available in the 150+ other countries in which Facebook has a major influence on voters. Yes – the US election is the biggest single contest this year and Facebook is based there. But having a completely America-centric view on things is deeply damaging to the platform’s reputation in many other countries.
  • Third is what is not being said. Facebook is claiming: “By getting clear, accurate and authoritative information to people, we reduce the effectiveness of malicious networks that might try to take advantage of uncertainty and interfere with the election.” My fear is that they will use the existence of the Information Center as an excuse for not acting as they should when leading figures break the platform rules. A month ago President Trump had a post flagged on Twitter because it was deemed that he was aiming to spread mistrust in the election system. This was about the only area in which most platforms are prepared to act (although Twitter also censored a post which it claimed was glorifying violence). This week he has again claimed (without justification) that ‘Democrats will stuff ballot boxes with thousands of fake votes’. That, again, is a post aiming to spread mistrust in the election and should have been blocked. But it hasn’t been. If Facebook starts pointing to the Information Center as the reason they aren’t taking down such posts when they appear on their platform, then they will have failed voters rather than served them.

Who Targets Me launch ten rules to guide online political adverts – and they are good!

Who Targets Me is a campaign group that aims to lift the veil from online political advertising. They have developed a plug-in that volunteers can install, which lets the group see who is receiving targeted political adverts on social media. Because of the targeting, it is often very difficult for anyone who has not been sent an advert directly to see it.

The group is also campaigning to institute better rules to govern the conduct of political advertising. You may well have seen my posts about the need for better rules, and I like what WTM have done here.

I would encourage you to read the full post here, but I’m going to take the liberty of posting the rules they advocate and a bit of their thinking below.

In essence, the group believes that it would be wrong to have some form of officially appointed regulator of online adverts. Such a body would be expensive, slow and only able to handle a tiny percentage of the adverts published each year. In addition, its decisions would become politically contentious.

Instead, they are proposing rules which would reduce the scope for advertising to be abused, preserve freedom of expression and targeting, and maintain public confidence. Their ten ideas are:

  • Collaborate to define what is ‘political’.
  • Require maximum transparency for political advertising.
  • Force strong verification.
  • Make advertisers earn the ‘right’ to advertise.
  • Allow fewer ads.
  • Make ‘ads’ ads again.
  • Introduce a blackout period for political advertising.
  • Ensure these measures are ‘always on’.
  • Enforce the rules and increase the penalties for breaking them.
  • Update the rules regularly, transparently and accountably.

In their article they list the reasoning for each of these proposals and again I would encourage you to read the whole thing.

WhatsApp introduces virtual social distancing in attempt to limit fake Covid-19 news

The social media platform WhatsApp has taken steps to try to limit the spread of disinformation about the coronavirus Covid-19. The move will also impact other forms of ‘viral’ messages, including those connected to elections.

The change means that a user can only send a ‘frequently forwarded message’ to one of their contact groups or conversations at a time. The previous limit differed depending on the country in which you were based, with a ceiling of five in India but much higher in most other territories. In this context, a frequently forwarded message is one that has already been shared more than five times.
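
For readers who like to see the mechanics, the rule can be written out in a few lines of Python. This is a minimal sketch based only on the figures above – the constants and function are invented for illustration, not WhatsApp’s actual code, and the ordinary limit of five is an assumption (the article notes it was country-dependent).

    # Sketch of the forwarding rule described above (illustrative only).
    FREQUENTLY_FORWARDED_THRESHOLD = 5  # "shared more than five times"
    FREQUENT_FORWARD_LIMIT = 1          # one chat or group at a time
    ORDINARY_FORWARD_LIMIT = 5          # assumed; was country-dependent

    def max_forward_targets(times_forwarded: int) -> int:
        """How many chats a message may be forwarded to in one action."""
        if times_forwarded > FREQUENTLY_FORWARDED_THRESHOLD:
            return FREQUENT_FORWARD_LIMIT
        return ORDINARY_FORWARD_LIMIT

    print(max_forward_targets(2))   # 5 - an ordinary forwarded message
    print(max_forward_targets(12))  # 1 - a frequently forwarded message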

There have already been suggestions that this policy should be taken up by other platforms including Facebook – which could limit the use of the Share button.

This change, if not revoked once the Covid-19 crisis is over, will also have a massive impact on elections. In countries where WhatsApp is one of the main social media platforms – such as Brazil or India – the spread of fake political news has been particularly difficult to contain, as WhatsApp groups are closed, there is no form of register of popular posts, and there is no opportunity for fact checking. Limiting the sharing of news via the platform will significantly diminish the ability to share both truthful and false information.

However, the move has been criticised by mainstream media outlets. Jamie Angus, Director of the BBC World Service group, tweeted that the idea was counterproductive and called for trusted news sources to be allowed direct access to promote content about the virus directly onto the platform.

And whilst media outlets like the BBC do, of course, have their own platforms, reaching other audiences via social media is seen by many as significant during a time of national lockdown.

Angus’ comments, however, reflect a UK situation where broadcasters like the BBC are widely trusted by the public. The platforms, operating on a global basis, might be wary about handing over any sort of editorial control to state-run media in authoritarian countries, or about deciding what constitutes a trusted media source in deeply polarised markets such as the USA or Ukraine.

Social media platforms must do more to prevent election attacks

It’s pretty clear that social media attacks have a real potential to affect not just elections, but political life in general. That’s why Facebook’s ‘two steps forward, one step back’ strategy is so disappointing. They – with their subsidiary WhatsApp – are the biggest players in the social media market and they have a responsibility to act. Only when platforms are completely transparent will election authorities be able to act, and only then will we, the voters, have confidence that our elections are not being distorted.

How election regulations work

Different countries have different laws regarding elections and this applies to online campaigning and social media too. In most countries, the principal means of regulating election campaigns is via spending limits – although there may be a range of other controls. Parties and candidates are required to submit a spending return after the election (and sometimes interim returns mid-campaign). They may have to open a dedicated bank account and there may be limits as to who can contribute and how much.

Many countries view day to day non-commercial uses of social media as being essentially free and so they do not fall under the scope of election expenses. Even websites are often viewed as being low cost and are an under-regulated form of influencing votes.

Such ‘free’ uses include:

  • Setting up a Facebook page to promote a candidate or party and gather ‘likes’ for them. People who have ‘liked’ the candidate can then be sent messages and other information. Likers and other users can view live streams of campaign events.
  • A Twitter account to promote the candidate, to encourage retweets and to retweet others (endorsements, party leaders etc.).
  • A WhatsApp account to create groups and to share information among those groups and encourage other group members to forward the information to others.
  • An Instagram account to share images and engage in conversation with followers and others.

There are, of course, many other social media platforms, but they broadly fall into one of these basic use profiles.

Increasingly, social media is also being used to host paid-for advertisements of a political or campaign nature during elections. These may come from parties or candidates themselves and can be positive or negative in nature. Or they can come from third-party actors within the country or from outside. Different rules apply in each country, with some permitting third-party groups to spend money campaigning during an election either for or against a candidate or on the basis of issues. And whilst some countries permit funding by citizens living overseas, broadly speaking no country permits out-of-country election spending by non-citizens.

Why parties use social media

The advantage of social media advertising is that it allows an advert to be targeted at a specific audience. Take Facebook: the company knows enough about its users to sell advertising that reaches a very specific group. It is easy to target, for example, women aged 24-35 in a particular city. And the company knows much more than simple demographics: it also knows about an individual’s likes and dislikes (quite literally, because of the ‘like’ buttons clicked). So Facebook can sell advertising enabling very precise targeting. And because the user data is not shared with the advertiser – they only receive personal information if the recipient of the advert chooses to share it with them – this practice is seen as compliant with data laws around the world.
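
To make the targeting idea concrete, here is a toy sketch in Python. The user records, field names and matching function are all invented for illustration – they bear no relation to Facebook’s actual systems or advertising tools.

    # A purely illustrative demographic filter over a toy user database.
    from dataclasses import dataclass

    @dataclass
    class User:
        age: int
        gender: str
        city: str

    def match_audience(users, gender, min_age, max_age, city):
        """Select users matching a simple demographic specification."""
        return [u for u in users
                if u.gender == gender
                and min_age <= u.age <= max_age
                and u.city == city]

    users = [User(28, "female", "Bristol"),
             User(40, "male", "Leeds"),
             User(33, "female", "Bristol")]

    # The example from the text: women aged 24-35 in a particular city
    print(len(match_audience(users, "female", 24, 35, "Bristol")))  # 2

In reality the platform layers far richer signals on top of demographics – likes, behaviour and inferred interests – but the principle of filtering down to a narrow audience is the same.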

The attractiveness of social media to parties, candidates and other political campaigners is obvious and not a bad thing. Lots of voters complain they don’t know enough about what politicians or parties stand for, so this means of communication should help. But a platform that allows genuine communication is also open to fake news and outside interference.

A disclaimer here: as a campaign manager in the 2016 EU referendum, I commissioned and paid for Facebook adverts on a number of occasions. I was able to define the audience I wanted to see them and I thought they were good value for money. I didn’t, of course, have access to the private data that the platform used to target that audience. Our advert spending was properly declared to the Electoral Commission.

The Cambridge Analytica/AIQ case is something different. There, data was harvested for one purpose and then given or sold to political advertisers for completely different purposes. Facebook has been shown to have known about this illegal transfer, to the extent that they have been fined the maximum amount permitted in the UK. But even if the company acted illegally in that case, this does not currently inhibit the legal act of selling advertising by Facebook and other social media companies.

Recent problems

There have been a number of scandals to hit election related social media in recent years:

  • During the 2016 UK referendum on membership of the EU the Electoral Commission found that the Vote Leave campaign illegally co-ordinated their campaigning with BeLeave by passing on funding which was spent on social media advertising;
  • During the 2016 US Presidential Election, it is alleged that Russia (and possibly China) sought to interfere with the contest through the promotion of fake news and the use of ‘bots’ to spread false information. (Other claims about Russian interference have been made, but they don’t come under the heading of social media);
  • During the 2018 Brazilian Presidential Election, it is claimed that fake news aimed at both candidates was spread via WhatsApp groups;
  • During the 2018 Macedonian name referendum, it is alleged that many hundreds of websites, Facebook groups and other means were created from outside the country to promote a boycott and therefore to lessen the credibility of the outcome which was expected to be a Yes vote;
  • Allegations of foreign interference have also been made about French, German and other elections in Europe and elsewhere.

In addition to social media, voters may see election-related content on news sites, gossip sites, blogs and so on. Frequently, these sites encourage interaction via comments and these are often unmoderated. Whilst parties and campaigns will endeavour to push messages out via these sites – as they do through mainstream media – the comments sections are often the territory where activists and others will seek to promote points of view and stories which are less factually robust.

So what action have Facebook taken?

They have made two significant changes which are broadly positive. First, they have required that every political advert carries a form of identification so the viewer can see who produced it. However, this ‘imprint’ is often not as clear as one might like, providing little real clue as to who is behind it. A recent example is the set of adverts urging constituents to contact their MP and ‘stand up for Brexit’. A number of groups have produced these, and some are clear whilst others are far from it.

Second, Facebook will periodically release the details of who has spent what on political advertising. That’s great, but it won’t be linked to specific content.

They have also announced a ‘war room’ to tackle fake news during the EU elections.

On the downside, Facebook appears to have restricted the ability of plug-ins to monitor advertising content. This has hit the Who Targets Me platform even though the use of plug-ins in that case is entirely consensual. So one of the prime investigators of shady political advertising is no longer able to undertake its investigations.

And, as I’ve previously written, WhatsApp in India has restricted the ability for users to forward messages. However, this makes the spreading of fake news slightly harder rather than eliminating the possibility entirely.

Has fake news swung elections?

It’s impossible to tell. Governments do not like to admit that they might have come to power, or that their course of action might have been set, via a referendum that was fundamentally flawed. And courts and election commissions have been very reluctant to declare a ballot void. That is not to say it has never happened, but these remedies do not appear to be the most reliable.

Whilst in the past a second country (or people based in a second country) might have sought to influence the conduct of an election by means of radio broadcasts and the like, the advent of the internet, and particularly of social media, has made it much easier to seek to influence an election in another country whether through ‘fake news’ or truthful campaigning.

There is also a question as to how much a vote is actually changed by a piece of fake news. In most cases it appears that a voter is likely to cast their ballot in a certain way and the information they choose to listen to or accept (whether fake or otherwise) simply confirms their choice.

And what constitutes ‘fake’? An outright lie or doctored photo, such as the one claiming that the former Brazilian President Dilma was a protégée of Fidel Castro, is simple to categorise. But the ‘£350m for the NHS’ slogan on the side of the Vote Leave bus during the UK’s referendum is not so obviously fake. Had politicians decided to do so, they could have made this come true, regardless of the impact of Brexit on public finances. It is fair to point out that the premise of the claim – that Brexit would make the UK better off – is probably not the case, but we are then into a political debate – something that should not be policed in a heavy-handed fashion, if at all.

However, it does seem probable that there have been significant numbers of votes affected by fake news or international campaigning in various elections and that this is something that should be taken seriously. Respected NGOs in various countries have raised concerns about this issue.

Next steps

Governments across the world have been reluctant even to address this issue. But some have, and they have chosen different approaches. In the UK, ministers have said that recognising and ignoring fake news is the responsibility of the individual; they don’t propose to take any action to stamp it out. France, however, has indicated that it might try to set up an official body to make rulings. The difficulty here is that such rulings are likely to come after the horse has well and truly bolted.

What seems logical as a first step is for platforms such as Facebook to be much more open about who is funding political advertising and what it says to whom. It is not necessarily for social media executives to do the work of electoral commissions, but they need to enable the official regulators to do their jobs properly. If an individual, organisation or even foreign country is trying to influence elections then this should be clear and, if it is against the law, then action should be taken. But until the social media platforms come clean, this can’t happen.

Rise of WhatsApp fuels concerns about Indian elections

The rise in the use of different types of social media in elections has proved both advantageous for parties and worrying for those concerned about the integrity of elections.

Different platforms are to the fore in different countries, with Facebook the most common app in much of the world. However, in India and elsewhere it is WhatsApp that is in the lead. And despite new curbs on the forwarding of messages, its use is deeply concerning to those worried about the spread of fake news.

The advantage of all social media is that they can be used to disseminate information to voters. Parties use them to spread information about their policies and candidates. But they can also be used to spread false information and the end-to-end encryption of WhatsApp means it is almost impossible to know what individual users are seeing.

In India the information being spread is often fake and designed to inflame religious or caste conflict. Groups can contain up to 256 members and Time is reporting that parties are using volunteers to forward messages from group to group. In the past each message could be forwarded to 20 individuals or groups; new rules restrict this to just five, but this appears to have done little to curb the spread of fake news.
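
To see why even the lower limit leaves enormous reach, here is a rough upper-bound calculation in Python, assuming (purely for illustration) that every forward goes to a full 256-member group and every recipient forwards onward at the limit.

    # Back-of-the-envelope fan-out using the figures above: 256 members
    # per group, a forward limit of 20 (old) versus 5 (new). This is an
    # upper bound; real groups are smaller and most users never forward.
    GROUP_SIZE = 256

    def max_reach(forward_limit: int, hops: int) -> int:
        """Upper bound on recipients after a number of forwarding hops."""
        reach, senders = 0, 1
        for _ in range(hops):
            senders *= forward_limit * GROUP_SIZE
            reach += senders
        return reach

    print(max_reach(20, 2))  # old limit: 26,219,520 potential recipients
    print(max_reach(5, 2))   # new limit: 1,639,680

Even at the new limit, two hops of diligent forwarding can notionally reach well over a million people, which helps explain why the restriction alone has done little to curb the spread.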

The ownership of smartphones has almost doubled in the five years since the last election, and more than four out of five have WhatsApp installed.

Time reports that political messaging is tailored to religion and caste – often easy to do simply by name – and that lax data laws mean that list brokers can offer information such as electricity bills to parties. Higher bills are likely to indicate middle-class households with air conditioning.

While parties themselves are unlikely to put their name to inflammatory material, this doesn’t stop baseless claims being spread by supporters and influencers via political groups. The governing Hindu nationalist BJP is said to be in the lead in such tactics, but other parties including the opposition Congress are also using the platforms.