Reading List – 27th November 2019

John Lough analyses the state of play in Belarus and argues that the West should not write the state off as a Russian backwater, but should take steps to engage more.

 

Tatiana Stanovaya of Carnegie Moscow Center suggests that senior figures in the Russian regime are looking to make themselves indispensable as 2024 approaches and Vladimir Putin’s mandate comes to an end.

 

 

The odd-couple marriage between technocrat reformers and pro-Russian Socialists in Moldova has fallen apart. The question now is whether this heralds a return to power of key players mired in corruption or whether a new reformist ministry can take charge.

 

Are Facebook finally making a move to limit micro-targeting?

The Wall Street Journal says that Facebook is considering taking steps to limit micro-targeting – the practice of allowing advertisers to send individual ads to just a hundred or so users.

If such a move happens then it will be a response both to the pressure the platform is feeling from politicians and activists across the world and to the moves made by rivals Twitter and Google in recent weeks.

The proposal, according to the WSJ, would be to raise the minimum number of targeted users to a few thousand. That would still allow a high degree of granularity – the ability to target ad recipients based on refined characteristics such as personal likes or geographic location – and is a different approach from Google’s, which limits targeting criteria to age, gender and postcode.

If it happens, this is once again a small responsive step from the biggest platform. What we are still missing is the big picture – where do they see their advertising policy in five years’ time, and how will they respond to the calls from around the world to make political adverts more transparent and, well, truthful? It would be great if Facebook set out this vision for us rather than offering scattershot, incremental steps.

Incidentally, this video of comedian Sacha Baron Cohen ripping into the big platforms whilst receiving an award from the Anti-Defamation League in the US is well worth watching in full.

Google cuts back on political adverts

Google will no longer allow adverts featuring deep fakes, micro-targeting or the targeting of voters based on registered political affiliation, according to a letter sent by the company in the US. It will also expand its library of political ads. The decision is being seen as putting further pressure on Facebook.

The Google announcement, in common with those of other platforms, frames its terms and restrictions around the current US market and political system. Adverts based on voter affiliation are already banned in the UK, and questioning of the census – another restriction – is a US phenomenon. However, the measures will apply worldwide and will be put in place before the UK votes on 12th December.

At present, both Labour and the Conservatives are investing heavily in Google Ads which appear when users search for the names of other parties. The new policies would not appear to limit such adverts.

“Whether you’re running for office or selling office furniture, we apply the same ads policies to everyone; there are no carve-outs,” said Google Ads executive Scott Spencer in a blogpost. This is widely interpreted as a dig at Facebook which has exempted political adverts from fact-checking.

However, the company has admitted that its resources to check adverts are limited and the number that may be banned as a result of this policy will be small.

Micro-targeting is Facebook’s key advantage and only exists to a very limited extent on Google. Facebook has a vast database of knowledge on each of its users and sells this knowledge to advertisers, including political advertisers. Google will allow adverts based on gender, age and locations as small as individual postcodes – but will not permit other data to be used.

Twitter’s new policy has now been set out in more detail. It will ban all adverts from politicians and parties, and those based on specific political issues or candidates, even if they come from pressure groups or individuals. The company also aims to ban advertisements aimed at influencing legislation, but not those which refer to generic issues. Campaign groups have claimed that this means that polluting fossil fuel companies can still run ads to promote their products but that campaigners aiming to stop them will be banned.

Google, like Twitter, does not rely heavily on political adverts for revenue, generating around $128m from the US market since June 2018. But the Guardian has revealed that the company has been under-reporting political spending in the UK by a large factor. The newspaper claims that Labour was reported as having spent £50 in the week beginning 27th October, but that the actual figure was around £63,000. A smaller discrepancy also existed for the Conservatives. Whilst there is no indication that Labour or the Conservatives intended to mislead regulators, accurate reporting by platforms is vital to enable the Electoral Commission and the public to check that parties are not over-spending on the election.

Reading List – 18th November 2019

In an opinion piece in the New York Times, Daniel Kreiss and Matt Perault (the latter of whom is a former public policy director at Facebook) offer options for reforming the elections landscape of social media.

 

For anyone who wants to know more about Iran and how the regime there has changed its global outlook and ambitions, this is a good read.

 

Dr Georges Fahmi of Chatham House examines how protesters across the region have adapted their tactics after the experiences of the Arab Spring. He sets out five lessons for those wanting to overthrow the system in their country, notably that it is not all about a rush to replace unpopular leaders through fresh elections – changing the rules and socio-economic structure of society is vital too.

 

This last recommendation is a listen rather than a read. Brookings President John Allen on why autocrats are rising and what to do about it. Defenders of an international liberal rules-based order need to take action to preserve their vision.

 

EU highlights social media manipulation in Sri Lankan election campaign

The EU election observation mission has issued its preliminary statement on Saturday’s Presidential election in Sri Lanka. Overall the mission found the election to be well run, but says that there was an uneven playing field for the candidates resulting from the absence of a finance law and the bias of both state and privately owned media. But there are highly damaging findings on the use of online and social media. I’ve reproduced the relevant section and footnotes below:

“A coordinated distortion of the information environment online undermined [the ability of] voters to form opinions free from manipulative interference.

Thirty-four per cent of Sri Lankans have access to the internet and use smartphones to send and receive information. The digital literacy rate is low, leaving the online discourse prone to manipulation. (65) Constitutionally guaranteed freedom of expression is not explicitly extended to online content (66), and in the absence of all-encompassing privacy and data protection legislation, parties do not declare their use of voters’ personal data, which is collected by mobile applications or campaign staff. (67) Such practice is at odds with international standards. (68)

Among the social media platforms, Facebook is the prime contributor to the crafting of political narratives in the public space and to setting the electoral agenda. (69) The EC had only an informal understanding with Facebook on the removal of hate speech and disinformation. (70) Citizen observers also reported harmful content online to the EC and Facebook. (71) However, Facebook’s reluctance to take action, coupled to high levels of anonymous, sponsored content, enabled a mushrooming of hateful commentary and trumped-up stories that capitalised on long-standing ethnic, religious and sectarian tensions. (72) It continued also during the campaign silence, when Facebook removed only a small proportion of such paid content. (73) This was detrimental to the election and at odds with international standards. (74)

Coordinated dissemination of outright false and/or demeaning information presented in various formats and across digital platforms dwarfed credible news threads. Suppression of credible news entailed the use of sponsored content on Facebook and coordinated sharing of political memes that sow discord and political gossip, both of which served as a source for multiple posts on political support group pages. In the majority of cases, the SLPP campaign benefitted from this. One such campaign undermined the integrity of postal voting; (75) four narratives capitalised on underlying fears and/or recycled previously debunked information. (76)

The use of algorithms and human curation to mislead the debate on Twitter was observed. (77) It included a high number of recently registered accounts amplifying certain political messages that also appeared on political support group pages. Ten days before the election, the SLPP further skewed the online discourse by announcing a “sharing” contest for 50,000 subscribers of the SLPP app VCAN. (78) A few professional fact-checking organisations exposed deceptive and false stories, but their staff levels and reach are far smaller than those of political actors, and they were not supported by broadcast or print outlets. (79) Overall, a damaging online environment distorted public debate and curbed voters’ access to factual information on political choices, an important element for making a fully informed choice.”

Footnotes:

65 – Computer and digital literacy are 27.5 and 40.3 per cent respectively (2018). Department of Census and Statistics Sri Lanka; joint declaration on freedom of expression and “fake news”, disinformation and propaganda by recognised international bodies, section 3, says: “States should take measures to promote media and digital literacy”.

66 – UNHRC, Promotion, Protection and Enjoyment of Human Rights on the Internet, 4 July 2018:“[…] the same rights that people have offline must also be protected online, in particular freedom of expression…”.

67 – The VCAN app promoted by the SLPP (Google Play) demands registration with a National Identity Card (NIC); if permitted, it has access to a phone’s geolocation, can read the content of USB storage and view network connections. The privacy and developers’ pages are empty. The use of the VCAN app and enclosed private information by campaigners was confirmed to EU observers in Anuradhapura, Batticaloa, Colombo, Gampaha, Kandy, Kalutara, Kurunegala and Trincomalee.

68 – ICCPR art. 17, “No one shall be subjected to arbitrary […] interference with his privacy”. ICCPR, HRC GC 16 par. 10: “The gathering and holding of personal information […] on computers, data banks and other devices, whether by public authorities or private individuals or bodies, must be regulated by law.”

69 – As of October 2019, there are 6.5 million Facebook users, 1.1 million Instagram, 261,700 YouTube, and 182,500 Twitter users in Sri Lanka. Several million people use WhatsApp. Eight million communicate on Viber.

70 – Facebook has not published data about requests by, or its response to, government authorities in 2019. It notes that without a written agreement with the EC, the timeline for decisions remains at its discretion. The number of removed posts is not public. FB did not consider the EC’s media guidelines applicable to it.

71 – By 6 November the EC complaints centre has registered 102 complaints about campaign violations and hate speech online. 

72 – The EU EOM analysed 340 Facebook pages supporting one of the two leading candidates, including 167 prominent in one district. Twenty-six per cent of national-level and 10 per cent of local-level pages featured sharply negative content. On 25 of the most popular meme pages, the EU EOM identified 47 political memes with a menace, including sectarian undertones. The EU EOM assessed public videos by ten of the most subscribed to YouTube influencers (more than 20,000 followers each) and identified at least 10 cases of sharply divisive rhetoric, including two with a racist message.

73 – On 14 and 15 November the EU EOM identified at least 300 sponsored posts/adverts with campaign content. CSOs identified 700 such sponsored posts, FB removed less than half of it.

74 – ICCPR, HRC GC 25, par. 19: “Voters should be able to form opinions independently, free of violence or threat of violence, compulsion, inducement or manipulative interference of any kind.” See also the joint declaration on freedom of expression and “fake news”, disinformation and propaganda, sec. 4 ‘Intermediaries’.

75 – On 31 October an anonymous fan page, Iraj Production, posted “postal voting” results featuring the SLPP’s victory with 95 per cent of votes cast. The post cited the EC. It was shared and re-shared 30,000 times. The post was also shared by a FB page promoting the SLPP and serving as “a mother page” for sharply negative and manipulative content against the UNP. FB closed the fan page on 6 November. The “landslide victory of the SLPP in postal voting” was repeated in rallies.

76 – These include the UNP candidate pictured with a Muslim doctor falsely accused of sterilising 4,000 Buddhist women (debunked in July 2019), a claim that the SLPP candidate is supported by a Muslim politician, who, in turn, is falsely associated with “masterminds of the Easter bombings” (debunked in July 2019). The posts and sponsored content also capitalised on anti-foreign sentiments by falsely stating that the government was giving away 18 per cent of Sri Lanka’s land by signing an agreement with US.

77 – From 4 October to 4 November the EU EOM downloaded all tweets trending the neutral electoral hashtags #PresPollSL and #prespolls2019. Out of 2,000 accounts 500 were randomly selected for assessment. Nine per cent were established less than four months before the election, 4.62 per cent were deleted by Twitter, 28 per cent shared only negative content, and 35 per cent were only re-tweeted.

78 – By following the application, the EU EOM observed automated and concerted activities, including multiple sharing of the same post at odd hours (3:30am, 4:22am) but from different profiles which could indicate the activity of bogus accounts. 

79 – Three fact-checking projects employ a robust news/photo verification methodology. They have debunked 74 false stories/statements/images by and about political figures. These included a claim of a politically motivated fight that followed a Premadasa rally, the arson of houses belonging to Rajapaksa supporters, and a letter from a cardinal opposing a big government agreement with US (the Millennium Challenge Corporation, MCC). The debunking of false postal voting results was shared just 17 times as opposed to the 30,000 shares of the false news post itself. The EU EOM also identified an organisation that was impersonating a fact-checking organisation. It was featured in the mainstream media as a guardian of fairness in online campaigning, while in practice it was striving to discredit non-partisan fact-checkers.

Georgian Parliament fails to pass promised electoral reform as almost half of government MPs fail to back their own bill

A measure to implement a promised change to Georgia’s electoral system has failed to get the necessary 75% majority in Parliament as a significant number of governing party Georgian Dream MPs failed to back the move. Opposition and civil society activists are up in arms and more protests may be on the way.

It’s rare that people take to the streets to protest in favour of electoral reform and rarer still that a government actually feels threatened by such protests. But that’s what happened in Georgia over the summer and it resulted in a promise by Georgian Dream to change to a fully proportional voting system from the current mixed system.

The proposal was to do away with the majoritarian (First-Past-the-Post) element and have all 150 MPs elected from a party list system with coalitions banned and no threshold. To pass, the measure needed the support of at least 113 of the 150 MPs.

When it came to the vote, just 57 of the 106 Georgian Dream MPs backed the move, with many of the 93 who had proposed it withholding their support. The opposition has suggested this was a stitch-up. In total, there were 101 votes in favour and 3 opposed – all Georgian Dream MPs. All opposition and independent MPs backed the change.

Minutes after the vote, seven Georgian Dream MPs who voted for the bill announced they had left the parliamentary majority in protest, including Vice-Speaker and member of Georgian Dream’s political council Tamar Chugoshvili.

As soon as the decision was announced, protesters began gathering outside parliament, with a larger demonstration planned for the evening. There was a minor confrontation with police after protesters attempted to block the road outside parliament as MPs were departing.

According to OC Media:

“Transitioning to a fully-proportional electoral system was a key concession made by the government in the wake of mass protests in the capital in June.

Thousands took to the streets on 20 June, after Russian MP Sergei Gavrilov was invited to address parliament from the speaker’s tribune. A violent dispersal of a demonstration outside parliament the following day led to further protests.

Thousands continued to protest outside parliament for weeks after, forcing the resignation of Parliamentary Speaker Irakli Kobakhidze.

Reacting to the vote, Shota Dighmelashvili, one of the leaders of the youth-led For Freedom group, which spearheaded protests following 20 June, wrote on Facebook urging people to again take to the streets.”

The next Parliamentary elections in Georgia are due to take place in 2020. As things stand, it appears the system used will not be changing. At the last elections in 2016, Georgian Dream won 71 of the 73 constituency seats and 44 from the list element on 48% of the overall vote. Two opposition parties secured the remainder of the list seats whilst one minor party and one independent candidate won the remaining constituencies.
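To illustrate why the current mixed system is so contested, here is a quick back-of-the-envelope calculation – a minimal sketch in Python using only the 2016 figures quoted above, with the fully proportional figure being a simple rounding for illustration rather than an exact seat-allocation formula:

```python
# Disproportionality of Georgia's mixed system, using the 2016 results
# quoted above: 48% of the vote, 71 of the 73 constituency (majoritarian)
# seats and 44 list seats, out of 150 seats in total.
constituency_seats = 71
list_seats = 44
total_seats = 150
vote_share = 0.48

seat_share = (constituency_seats + list_seats) / total_seats
print(f"Mixed system: {seat_share:.1%} of seats on {vote_share:.0%} of the vote")
# -> Mixed system: 76.7% of seats on 48% of the vote

# Under a fully proportional allocation with no threshold, the same vote
# share would translate into roughly 0.48 * 150 = 72 seats.
print(f"Fully proportional estimate: {round(vote_share * total_seats)} of {total_seats} seats")
```

In other words, 48% of the vote delivered more than three-quarters of the seats under the mixed rules, but would have produced only around 72 of the 150 seats – short of a majority – under the fully proportional system the protesters were demanding.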

Facebook, free-speech and political advertising

I previously wrote a long-read about Facebook and the issues they face around elections, political advertising and fact-checking. Since that time, there have been a number of developments (and there will undoubtedly be more) and I have received a large number of helpful comments from colleagues around the world. I have therefore chosen to update this piece and republish it here. Many thanks to all those who have assisted me and I welcome further comments. Except where attributed, responsibility for what is written is mine.

 

Facebook’s election-related policies continue to make the news. The issue was highlighted by their decision not to seek to police the truth or otherwise of a Donald Trump advert claiming that potential 2020 opponent Joe Biden used leverage over $1bn of foreign aid to Ukraine to persuade the country to push out the official running an inquiry into his son. 

CNN took the decision not to air the advert, which it says has been comprehensively disproved by journalists and organisations such as factcheck.org. However, Facebook, alongside YouTube, Twitter and Fox News, allowed it to run, saying they do not want to get involved in issues of free speech and that the advert does not violate company policies.

“Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is arguably the most scrutinized speech there is,” wrote Katie Harbath, Facebook’s head of global elections policy, to the Biden campaign.

 

ELIZABETH WARREN’S FAKE NEWS ADVERT

In response, Elizabeth Warren, another of the leading Democratic candidates, produced her own Facebook advert claiming that Mark Zuckerberg and Facebook are openly backing Donald Trump’s re-election. This is not true, of course, as she acknowledges, but she says the advert goes to show how the platform’s decision could be used in the coming contest.

 

FACEBOOK’S DILEMMA

In truth, Facebook’s decision is broadly in line with the situation in the UK. Political, election and candidate adverts (where they are allowed) are subject to different regulation from adverts for washing powder or supermarkets. Whereas regular adverts can be judged by Ofcom or the Advertising Standards Authority (depending on the medium), political adverts are subject only to the broader oversight of courts concerned with obscenity, incitement and defamation.

It also highlights the almost impossible task facing the company. If they choose to intervene and judge the truthfulness or otherwise of political statements (either directly or via third parties) then they will be accused by Trump and others of interfering in the right to free speech. If they do not intervene then they will be labelled as allowing lies and distortion to affect the election.

As a colleague puts it:

“The UK tradition about political advertising has been a balance between two considerations. One was that normally the truth-value of a political claim was usually harder to judge than with commercial claims; particularly when the political claim relates to things that will happen in the future. Therefore, claims about how much Labour would put up your taxes if they won, a staple of Conservative election campaigning forever, or the abolition of the NHS, were all debatable. Parties rarely resorted to outright, provable lies about observable facts. But also, political ads were excluded from the ban on ‘knocking copy’ – comparative advertising saying your product or shop is best. But there was a presumption that if a party stepped over an ethical (but not regulatory) line, then another party had the right to a response that was stronger than would normally be permitted in advertising.

That sort of worked, although the partisan press subverted it a bit and there wasn’t a level playing field with campaigning resources (the Conservatives got away with some pretty bad scare stuff in 1924 and 1931, while Labour’s dodgy ‘whose finger on the trigger?’ campaign in 1951 met significant blowback, although it still seemed to be effective).

The problem now is that without a unified media the same process where dodgy claim is met with comparative advertising no longer happens, and with what is left of public service media or media of record afraid to adjudicate on truth value it doesn’t get done. And Facebook’s business model probably makes the exposure to comparative, critical takes even less likely.

And alongside the rise of this, there’s the rise in shameless lying – like terrorism, there’s not much a civilised society can do about bad faith actors in positions of political leadership without subverting its own civilised values.

Facebook is different from the Baldwin era press barons who used their power to pursue their own hobby-horses. It’s power without responsibility for rent – pay enough, or have enough power already, and there aren’t really any barriers. In conditions of extreme upper-end wealth and income concentration, it’s a recipe for abuse.”

THE AD LIBRARY

Facebook’s political advert library is a significant step forward as it allows anyone to see any political or issue-based advert that has run on Facebook, and it keeps a record of each for seven years. So even if an advert was targeted at a small section of the population, if it was registered as a political advert then it will be in the ad library. The decision to include issue-based adverts was highlighted by Facebook’s VP of Policy Solutions, Richard Allan, in a column in the Daily Telegraph.

The problem is that it appears so far only to be operational in 35 countries, a small proportion of the number in which the company operates (albeit a majority by population). In addition, it relies on a complaints system to weed out unregistered political adverts, as exemplified by the first example of a political Facebook advert being banned in the UK – or at least the first example that got widespread publicity. This came from an organisation called the ‘Fair Tax campaign’ and contained the claim that Labour’s tax plans would cost everyone an extra £214 per month. It was taken down by Facebook after a number of complaints to the BBC advert watch initiative headed by journalist Rory Cellan-Jones. 

However, the reason for the advert being taken down is that it didn’t comply with Facebook’s registration and ‘imprint’ rules, not for breaching the rule about false claims. So Facebook have stated that if the advertiser registers it can go back up again (at least until a fact checking organisation takes a look at it and thinks differently). That an advert about tax which prominently attacked a major party and was placed during an election period wasn’t picked up by the company’s algorithm as needing registration indicates that there might be others which have slipped through the net.

 

THE HUMAN RIGHTS DIMENSION

In a paper for Chatham House, Kate Jones of Oxford University brings a human rights perspective to the issue of online disinformation and political discourse. She argues that freedom of speech and the freedom to campaign in elections should be restricted only for a limited set of reasons. If speech which is claimed to be untrue is banned in mature democracies, will this not be taken as carte blanche for authoritarian governments to restrict opposition voices they do not like?

However Jones also argues that the algorithms used by social media platforms already compromise free speech by restricting, or at least prioritising, what users get to see. And by amplifying more extreme positions, they increase tension and anger in a way which suits their business model but distorts public discourse. She also points out that the right to privacy may be being eroded as personal likes and interests as divulged to a platform are then used to help candidates, parties and advertisers bombard users with information they may not have consented to.

 

WHAT COULD BE DONE?

Could it be different? Certainly. There are many countries where the law states that political adverts and promotions must be truthful in the same way as commercial adverts are. The most extreme example is Singapore, where false political statements are treated as a matter for the police and courts. And whilst Ms Harbath’s comments were reflective only of the situation in the USA, they raise a whole set of questions as regards countries which are not mature democracies, where many people get their news from social media and where the rules of elections are simply different from those of the USA or UK.

In the past, the platform set different rules for advertisers as opposed to regular posters. They banned adverts containing “deceptive, false or misleading content”, a much stronger restriction than its general rules around non-paid-for posts. However, Facebook has now announced that it will allow political adverts to run regardless of any falsehoods they might contain, with the exception, perhaps, of posts that contain links to previously debunked third-party content.

 

CLEGG SETS OUT FACEBOOK’S VIEW

Nick Clegg, now Facebook’s VP of Global Affairs and Communications, said in a recent speech:

“…we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.”

Clegg goes on to discuss what Facebook refers to as a ‘newsworthiness’ exemption:

“This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.”

I’m afraid that I don’t quite know what this means. How is Facebook making a judgement about the risk of harm? How will the timing of an election, or a country being at war, affect their view? How will the existence of a free press affect the decision, and what exactly constitutes a free press, given that this is often not a binary question? Katie Harbath told me that:

“In terms of what might cause harm we are looking for things that may jeopardize physical safety. We work with trusted partners across the globe to help us identify if content might lead to real world harm. In terms of if a country has freedom of speech or press we use a few sources, but Freedom House’s rankings are a main one.”

Shortly after Clegg made his speech, Facebook founder Mark Zuckerberg outlined what the company is planning to do for the 2020 US Presidential election. His statement came after the platform revealed that it had taken action against a number of accounts originating in Iran and Russia that were seeking to disrupt the election.

Zuckerberg admitted that the company was caught on the back foot in 2016 and needs to do more this time around. He says they will do more to secure the accounts of elected politicians, clearly label posts coming from state media organisations and more clearly label posts deemed false by fact-checkers. They also say they will ban political ads aimed at suppressing turnout. Apparently this will also apply to ads originating from politicians – a provision most likely to hit opposition parties boycotting elections.

The significance of Zuckerberg’s statement is that there appear to be some differences from what Clegg laid out. To what extent will politicians who repeat previously debunked claims have their ads banned or suppressed (or even labelled)?

 

FACT-CHECKERS

Currently, Facebook lists approved third-party fact checkers in 54 countries and one region (the Middle East and North Africa). That is a large proportion of the world but clearly not every country in which the platform operates. The number of organisations ranges from one or two in most nations to six in the USA and eight in India.

Whilst the largest network of fact checkers is the AFP news agency (in 36 nations and one region), in many other cases they are small NGOs. In such cases they have limited time and few resources – the UK fact-checking organisation has ten full-time staff members. Even the biggest organisations aren’t able to deal with every claim made during a big election. They can pick and choose some of the biggest or boldest claims and subject them to scrutiny, but the need to be comprehensive and expert often means that the damage has been done and the conversation has moved on long before a third-party group manages to convince Facebook to remove a false claim. If this sort of delayed action is to have any meaning then there needs to be some sort of consequence for the advertiser who makes false claims – either financial penalties or some form of suspension or ban.

Even if a third party fact checker has deemed a video or statement to be false, Facebook does not remove it entirely. Harbath told me:

“We reduce the reach (demote it) and add a treatment on it so people can see it has been marked fake. If a politician were to share content that already has that treatment then the label would remain.”

The full policy statement is set out here.

Donie O’Sullivan of CNN makes the point that:

“Facebook’s argument might be more convincing in a world without the platform. The company has helped to create and enhance ideological echo chambers. Some Facebook users only follow and engage with content with which they agree… Given how the Facebook News Feed is determined by an algorithm and the highly targeted nature of Facebook ads, it is entirely possible that a Facebook user could see a false ad from a campaign and not encounter a post that challenges or corrects it.”

PAID FOR VS ORGANIC CONTENT

There is also the issue of the split between paid-for advertisements and organic posts by candidates and others. Should there be a difference between how content that Facebook gets paid for is treated and how ‘community posts’ are treated? Harbath told me that content posted by people not directly connected to a candidate or campaign will be subject to fuller moderation than content which comes either from the candidate or from a recognised group associated with them.

 

THE BIGGER ISSUE – WHO PAYS?

There is also another, often bigger issue. Who is actually behind the advertising that appears online? Facebook now requires that all political adverts are labelled as such, but whilst some are pretty obvious – badged with the name of the candidate or party – much comes from otherwise unknown individuals or organisations.

In the UK, as in many other countries, campaigning is allowed by parties and candidates and by third-party groups whose spending limits vary according to whether or not they are registered with the Electoral Commission. But Facebook doesn’t restrict advertising just to permitted participants. It takes the view that responsibility rests with the advertiser to conform with the law. And whilst countries such as India issue formal certificates to candidates, which Facebook allows to be uploaded to the site as a guarantee of authenticity, the platform also allows others who do not hold certificates to pay for advertising.

One of the biggest concerns that election-watchers have is that foreign money is influencing elections. Facebook have pretty much shrugged their shoulders at this problem. It is certainly the case that the ads library makes clear the country from which an advert is run, but advertisers can take steps to mask their location and the ad library has different levels of functionality depending on the country.

Facebook told me that whilst they can check for an official identity document from the country concerned and that the funds are paid in local currency using a local billing address, they have no means of knowing the ultimate origin of the funding. It is also unclear how often such documentation is actually checked. The law in the UK requires that the original source of political funding must be a permissible donor, and this can be investigated by the police and the Electoral Commission in cases of doubt. Other countries have similar rules. But Facebook doesn’t even require political advertisers to make a statement to this effect.

 

ONE SIZE FITS ALL

As I’ve written before, Facebook is not alone in this dilemma. Nor are they doing nothing. But what they have done is pretty much a one-size-fits-all approach, largely based on an American model. They require political advertisers to register and be identified as described above. They also release details of who has paid what amount on a regular basis.

Talking to Katie Harbath, I was told that the company employs 40 teams and 500 full-time staff to cover elections across the world. She told me that for each election they start work about 18 months in advance to try to identify the threats that might be involved and to consider whether to reach out to the country’s election commission to discuss them. What they don’t do is seek to work with individual countries to ensure that Facebook’s rules align with the laws and regulations of each country.

The major problem facing platforms is that election law in virtually every country is not consistent with modern campaigning techniques, especially as regards social media. Richard Allan acknowledged this in his Telegraph article but suggested that Facebook will not be taking the lead in trying to fix it. 

“What constitutes a political ad? Should all online political advertising be recorded in a public archive and should that extend to traditional platforms like billboards, direct mail and newspapers? Should anyone spending over a certain amount on political ads have to declare who their main funders are? Who, if anybody, should decide what politicians can and can’t say in their adverts? These are all questions that can only be properly decided by Parliament and regulators.” 

Facebook could instead be offering to work with parliaments and election commissions on a joint project to help re-shape the law to make it relevant to the current era. Facebook tell me they have signed memorandums of understanding with many electoral commissions in Latin America and in India, which is a significant step in the right direction, but only a small one.

One (hopefully) significant step by Facebook was the hiring of Richard Lappin, formerly the Deputy Head of Elections at ODIHR, the election observation wing of the OSCE. If this signals more of a willingness by the company to engage with election observation missions and with election commissions then this can only be good news.

It’s also worth pointing out that the 18-month lead time is only any use if the election goes ahead as planned. Snap elections cause additional headaches and, whilst I presume Facebook thought about the situation in the UK in advance of our 2019 general election, I don’t know whether this would be the case in, say, Serbia or Malawi.

Certainly the new Facebook policies were not rolled out in time for the 2019 elections in Tunisia and the EU’s election observation mission to that country has made some strong criticisms of the lack of online regulation in that case, noting (apologies for any translation mistakes from the original French):

“Facebook in Tunisia has not developed transparency tools as in other countries, despite calls from civil society. The “advertising library”, an archive of online advertising created by Facebook, usually shows neither the details of spending nor the advertising history. However, these details are often visible for advertisements targeting voters abroad, where self-regulatory measures have sometimes been put in place by Facebook.

The Electoral Law prohibits any financing of campaigns from abroad. As of 13 October, the EOM has observed 87 political advertisements in favor of candidates distributed by Facebook pages managed by administrators whose location is either hidden or located abroad. This lack of transparency undermines verification by the ISIE, civil society or citizens, and deprives them of information on the sources and volume of funding for this online campaign.

The Election Law also provides that candidates for the presidential election must submit to the ISIE a list of all their official accounts on social networks. While the candidate Nabil Karoui provided a list of accounts, candidate Kais Saeed said he is not campaigning online except for a website. For its part, the ISIE did not publish the list of these accounts, and as a result voters were not given the opportunity to identify the pages related to the candidates’ official campaign, nor the identity of their directors.”

 

THE END OF PAID-FOR SOCIAL MEDIA ADVERTISING?

It is worth noting that other social media platforms have taken the decision not to allow political advertising (although they do allow political content). Notably, Twitter has decided to refuse political adverts from 22nd November 2019 and TikTok has said it won’t accept political ads on its platform at all, declaring last week that:

“the nature of paid political ads is not something we believe fits the TikTok platform experience” and that political ads don’t support the platform’s mission “to inspire creativity and build joy.”

Twitter’s decision certainly drew attention as it was made in such a way as to differentiate the company from Facebook. There is an element of virtue signalling, as the platform draws comparatively little revenue from this stream, and the move may deflect attention from the platform’s problems with bots, fakery and abuse.

Twitter CEO Jack Dorsey said:

“This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle.”

In his thread explaining the move, he outlined a series of challenges he says that online platforms face, including:

“machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.”

“It’s not credible for us to say: “We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!”

One of the criticisms of banning political adverts is that it favours incumbents. The suggestion is that new parties and candidates without a well known face or name will find it impossible to break through if they are not allowed to buy advertising. Dorsey addresses this, saying he has “witnessed many social movements reach massive scale without any political advertising. I trust this will only grow.”

But whilst Twitter may diverge from Facebook on accepting political advertising, they have copied them almost word for word on the issue of whether they will censor comments from politicians. They have stated:

“Everything we do starts with an understanding of our purpose and of the service we provide: a place where people can participate in public conversation and get informed about the world around them.

We assess reported Tweets from world leaders against the Twitter Rules, which are designed to ensure people can participate in the public conversation freely and safely.

We focus on the language of reported Tweets and do not attempt to determine all potential interpretations of the content or its intent.

Presently, direct interactions with fellow public figures, comments on political issues of the day, or foreign policy saber-rattling on economic or military issues are generally not in violation of the Twitter Rules.

However, if a Tweet from a world leader does violate the Twitter Rules but there is a clear public interest value to keeping the Tweet on the service, we may place it behind a notice that provides context about the violation and allows people to click through should they wish to see the content.”

One might also contemplate whether fewer political adverts on platforms which have taken some steps in the right direction (eg Facebook) will simply mean a proliferation in less regulated spaces online. It is unlikely that candidates and parties will simply abandon online advertising if the social networks prohibit paid adverts – and of course if one platform bans ads, the incentive to advertise on the others increases. This is a collective action problem that follows from the inherent lack of effective regulation over the online space.

Even if paid-for adverts are no longer allowed, it seems improbable that Facebook or any other social media platform will ban politicians or the sharing of political content. And that means they will continue to play a significant role in elections, especially for those people who get most of their news from social media, who tend to be younger, less wealthy and have less formal education.

 

FACT CHECKING VS PROFIT

Facebook’s $7bn profits are made possible through the use of as much technology – the algorithms – as possible. Introducing more regulation or more human beings to fact-check or adjudicate on the validity of political speech only serves to get in the way and cost more. CNN’s Donie O’Sullivan suggests that Facebook might be happy to accept the occasional letter of complaint from Joe Biden in return for not losing Trump’s $20 million of adverts since May 2018, an amount that can only rise as the 2020 election approaches. Others believe that the 2020 US elections will be the last in which the platform accepts political advertising. Inevitably, increased transparency measures will reduce candidates’ incentives to use adverts, which in turn will hit Facebook’s profits. In Twitter’s case this might have been judged to be a price worth paying given the (relatively) small income involved and the PR coup they gained from their decision. But for Facebook there is a much larger sum at play.

 

PROPOSALS

Technology platforms have a lot on their plate. Deepfakes, disruptive bad actors and co-ordinated inauthentic behaviour were all raised with me as challenges to be faced. But it remains a disappointment that the most high-profile of these companies is still leaving the door so wide open to illegal manipulation and is not doing more to recognise that many countries in the world operate elections on a model (and electoral regulation) far different from the US system.

The Oxford Technology and Elections Commission has reported and made a series of recommendations for actions the UK should be taking to reflect the impact that online campaigning, including social media, has on elections in the UK. These include an industry-wide implementation of an advert library system, improved due diligence and imprints by the parties, and verification of social media accounts by the Electoral Commission. Crucially, they also recommend that existing financial reporting rules be extended to cover all online campaigning. However, they do not explicitly require that those who place online adverts declare the origin of the money they spend, which I think is a significant shortcoming.

Here are a few things I think platforms could do:

  1. Respect the laws of the country Facebook is operating in rather than seeking to impose a single set of (largely Californian) values world-wide. Work with parliaments and election commissions to help to design political advertising rules for the platform that align with the individual laws of that country, and offer to work with these same bodies to modernise election law where it is deficient;
  2. Roll out the advert library and fact-checking system to every country in which the platform operates and ensure that fact-checkers are funded sufficiently to enable them to do a good job;
  3. As part of the registration process, require all political advertisers to state that they are the original source of the money paying for the adverts or that they have raised it from permissible sources. Use the platform’s own technology to investigate whether this is the case as much as possible;
  4. In cases where national or platform rules are broken, pledge to share all information with election regulators and law enforcement bodies in the country concerned to enable investigation and prosecution.

Ultimately, these suggestions may conflict with free speech ideals and some may worry that protesters in authoritarian states will have their details handed over to the authorities. I acknowledge that these are both legitimate concerns. However I would suggest that this only highlights the need to change the laws in those countries to make them into more mature democracies which embrace legitimate protest and free speech. But there cannot be one rule for us and another for ‘them’. If we want Facebook to take action to ensure that the Russians cannot interfere in elections in the US or UK, then we have to accept that the Russian election authorities will want to enforce the laws that exist in that country too. Facebook could consider prominent links to respected election observer reports (such as those by OSCE/ODIHR, the OAS or the EU) which highlight shortcomings in a country’s election structures and media freedom.