UPDATE: This long-read has been updated in the light of new developments and comments from a wide range of colleagues around the world. For the newer version, please see here.
Facebook’s election-related struggles are continuing to make the news. This time it is their decision not to seek to police the truth or otherwise of a Donald Trump advert claiming that potential 2020 opponent Joe Biden used leverage over $1bn of foreign aid to Ukraine to persuade the country to push out the official running an inquiry into his son.
CNN has already taken the decision not to air the advert, which it says has been comprehensively disproved by journalists and organisations such as factcheck.org. However, Facebook, alongside YouTube, Twitter and Fox News, have all run it, saying they do not want to get involved in issues of free speech and that the advert does not violate company policies.
“Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is arguably the most scrutinized speech there is,” wrote Katie Harbath, Facebook’s head of global elections policy to the Biden campaign.
Elizabeth Warren’s fake news advert
In response, Elizabeth Warren, another of the leading Democratic candidates, has produced her own Facebook advert claiming that Mark Zuckerberg and Facebook are openly backing Donald Trump’s re-election. This is not true, of course, as she acknowledges, but she says the advert goes to show how the platform’s decision could be exploited in the coming contest.
In truth, the Facebook decision is broadly in line with the situation in the UK. Political, election and candidate adverts (where they are allowed) are subject to different regulation from washing powder or supermarkets. Whereas regular adverts can be judged by Ofcom or the Advertising Standards Authority (depending on the medium), political adverts are subject only to the broader oversight of courts concerned with obscenity, incitement and defamation.
It also highlights the almost impossible task facing the company. If they choose to intervene and judge the truthfulness or otherwise of political statements (either directly or via third parties) then they will be accused by Trump and others of interfering in the right to free speech. If they do not intervene then they will be labelled as allowing lies and distortion to affect the election.
As a colleague puts it:
“The UK tradition about political advertising has traditionally been a balance between two considerations. One was that normally the truth-value of a political claim was usually harder to judge than with commercial claims; particularly when the political claim relates to things that will happen in the future. Therefore, claims about how much Labour would put up your taxes if they won, a staple of Conservative election campaigning forever, or the abolition of the NHS, were all debatable. Parties rarely resorted to outright, provable lies about observable facts. But also, political ads were excluded from the ban on ‘knocking copy’ – comparative advertising saying your product or shop is best. But there was a presumption that if a party stepped over an ethical (but not regulatory) line, then another party had the right to a response that was stronger than would normally be permitted in advertising.
That sort of worked, although the partisan press subverted it a bit and there wasn’t a level playing field with campaigning resources (the Conservatives got away with some pretty bad scare stuff in 1924 and 1931, while Labour’s dodgy ‘whose finger on the trigger?’ campaign in 1951 met significant blowback, although it still seemed to be effective).
The problem now is that without a unified media the same process where dodgy claim is met with comparative advertising no longer happens, and with what is left of public service media or media of record afraid to adjudicate on truth value it doesn’t get done. And Facebook’s business model probably makes the exposure to comparative, critical takes even less likely.
And alongside the rise of this, there’s the rise in shameless lying – like terrorism, there’s not much a civilised society can do about bad faith actors in positions of political leadership without subverting its own civilised values.
Facebook is different from the Baldwin era press barons who used their power to pursue their own hobby-horses. It’s power without responsibility for rent – pay enough, or have enough power already, and there aren’t really any barriers. In conditions of extreme upper-end wealth and income concentration, it’s a recipe for abuse.”
What could be done?
Could it be different? Certainly. There are many countries where the law states that political adverts and promotions must be truthful in the same way as commercial adverts are. And whilst Ms Harbath’s comments were reflective only of the situation in the USA, they raise a whole set of questions as regards countries which are not mature democracies, where most people get their news from social media and where the rules of elections are simply different from the USA or UK.
In the past, the platform set different rules for advertisers as opposed to regular posters. It banned adverts containing “deceptive, false or misleading content”, a much stronger restriction than its general rules around non-paid-for posts. However, Facebook has now announced that it will allow political adverts to run regardless of any falsehoods they might contain, with the exception, perhaps, of posts that contain links to previously debunked third-party content.
Clegg sets out Facebook’s view
Nick Clegg, now Facebook’s VP of Global Affairs and Communications, said in a recent speech:
“…we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.”
Clegg goes on to discuss what Facebook refers to as a ‘newsworthiness’ exemption:
“This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.
When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.”
I’m afraid that I don’t quite know what this means. How is Facebook making a judgement about the risk of harm? How will the timing of an election, or the country being at war, affect their view? And how will the existence of a free press affect the decision – and what exactly constitutes a free press, given that it is often not a binary issue? Katie Harbath told me that:
“In terms of what might cause harm we are looking for things that may jeopardize physical safety. We work with trusted partners across the globe to help us identify if content might lead to real world harm. In terms of if a country has freedom of speech or press we use a few sources, but Freedom House’s rankings are a main one.”
Even if third-party fact-checkers are used, they are generally small NGOs with limited time and few resources. They certainly aren’t able to deal with every claim made in a political Facebook advert during a big contest such as a general election in the UK. They can pick some of the biggest or boldest claims and subject them to scrutiny, but the need to be comprehensive and expert often means that the damage has been done and the conversation has moved on long before a third-party group manages to convince Facebook to remove a false claim. If this sort of delayed action is to have any meaning then there needs to be some sort of consequence for the advertiser who makes false claims – either financial penalties or some form of suspension or ban.
Even when a third-party fact-checker has deemed a video or statement to be false, Facebook does not remove it entirely. Harbath told me:
“We reduce the reach (demote it) and add a treatment on it so people can see it has been marked fake. If a politician were to share content that already has that treatment then the label would remain.”
Donie O’Sullivan of CNN makes the point that:
“Facebook’s argument might be more convincing in a world without the platform. The company has helped to create and enhance ideological echo chambers. Some Facebook users only follow and engage with content with which they agree… Given how the Facebook News Feed is determined by an algorithm and the highly targeted nature of Facebook ads, it is entirely possible that a Facebook user could see a false ad from a campaign and not encounter a post that challenges or corrects it.”
Paid for vs organic content
And then there is the split between paid-for advertisements and organic posts by candidates and others. Should there be a difference between how content that Facebook gets paid for is treated and content that is simply a ‘community post’? Harbath told me that content posted by people not directly connected to a candidate or campaign will be subject to fuller moderation than that which comes either from the candidate or a recognised group associated with them.
The bigger issue – who pays?
There is also another, often bigger issue. Who is actually behind the advertising that appears online, particularly in countries other than the USA? Facebook now requires that all political adverts are labelled as such, but whilst some are pretty obvious – badged with the name of the candidate or party – much comes from otherwise unknown individuals or organisations.
In the UK, as in many other countries, campaigning is allowed by parties and candidates and by third-party groups whose spending limits vary according to whether or not they are registered with the Electoral Commission. But Facebook doesn’t restrict advertising just to permitted participants. It takes the view that responsibility rests with the advertiser to comply with the law. And whilst countries such as India issue formal certificates to candidates, which Facebook allows to be uploaded to the site as a guarantee of authenticity, the platform also allows other, non-certificate-holders, to pay for advertising.
One of the biggest concerns that election-watchers have is that foreign money is influencing elections. Facebook have pretty much shrugged their shoulders at this problem. They told me that whilst they can check for an official identity document from the country concerned, and that the funds are paid in local currency using a local billing address, they have no means of knowing the ultimate origin of the funding. It is also unclear how often such documentation is actually checked. The law in the UK requires that the original source of political funding must be a permissible donor, and this can be investigated by the police and the Electoral Commission in cases of doubt. Other countries have similar rules. But Facebook doesn’t even make a statement requiring this from political advertisers.
One size fits all
As I’ve written before, Facebook is not alone in this dilemma. Nor are they doing nothing. But what they have done is pretty much a one-size-fits-all approach, largely based on an American model. They require political advertisers to register and be identified as described above. They also release details of who has paid what amount on a regular basis.
Talking to Katie Harbath, I was told that the company employs 40 teams and 500 full-time staff to cover elections across the world. She told me that for each election they start work about 18 months in advance to try to identify the threats that might be involved and consider whether to reach out to the country’s election commission to discuss these threats. What they don’t do is seek to work with individual countries to ensure that Facebook’s rules align with the laws and regulations of each country. In many cases those laws are pretty out of date and were written for a pre-internet age, but they are still the law. Facebook could also be offering to work with parliaments and election commissions on a joint project to help re-shape the law to make it relevant to the current era. Whilst none of this appears to be happening yet, Facebook tell me they have signed memorandums of understanding with many electoral commissions in Latin America and in India, which is a significant step in the right direction.
It’s also worth pointing out that the 18-month lead time is only of use if the election goes ahead as planned. Snap elections cause additional headaches and, whilst I presume Facebook might have thought in advance about the situation in the UK, I don’t know whether this would be the case in, say, Serbia or Malawi.
Fact checking vs profit
Facebook’s $7bn profits are made possible by relying on as much technology – the algorithms – as possible. Introducing more human beings to fact-check or adjudicate on the validity of political speech only serves to get in the way and cost more. CNN’s Donie O’Sullivan suggests that Facebook might be happy to accept the occasional letter of complaint from Joe Biden in return for not losing Trump’s $20 million of adverts since May 2018, an amount that can only rise as the election approaches. Others believe that the 2020 US elections will be the last in which the platform accepts political advertising.
The end of paid-for social media advertising?
It is worth noting that other social media platforms have taken the decision not to allow political advertising (although they do allow political content). Notably, TikTok has said it won’t accept political ads on its platform, declaring last week that “the nature of paid political ads is not something we believe fits the TikTok platform experience” and that political ads don’t support the platform’s mission “to inspire creativity and build joy.”
Even if paid-for adverts are no longer allowed, it seems improbable that Facebook or any other social media platform will ban politicians or the sharing of political content. And that means they will continue to play a significant role in elections, especially for those people who get most of their news from social media.
Technology platforms have a lot on their plate. Deepfakes, disruptive bad actors and co-ordinated inauthentic behaviour were all raised with me as challenges to be faced. But it remains a disappointment that the most high-profile of these companies is still leaving the door so wide open to illegal manipulation, and is not doing more to recognise that many countries in the world operate elections on a model (and under electoral regulation) far different from the US system. Here are a few things I think they could do while still adhering to the general principles of free speech:
- Respect the laws of the country Facebook is operating in rather than seeking to impose a single set of (largely Californian) values world-wide. Work with parliaments and election commissions to help to design political advertising rules for the platform that align with the individual laws of that country, and offer to work with these same bodies to modernise election law where it is deficient;
- Require all political advertisers to state that they are the original source of the money paying for the adverts or that they have raised it from permissible sources. Use the platform’s own technology to investigate whether this is the case as much as possible;
- In cases where national or platform rules are broken, have in place a system for financial penalties and/or platform bans and pledge to share all information with law enforcement bodies in the country concerned to enable investigation and prosecution.
- If reliance is to be placed on fact checking NGOs to counter the most egregious cases, then Facebook should be helping to set up and fund a network of such groups across all the democracies where the platform operates.
Ultimately, these suggestions may conflict with free speech ideals, and some may worry that protesters in authoritarian states will have their details handed over to the authorities. I acknowledge that these are both legitimate concerns. However, I would suggest that this only highlights the need to change the laws in those countries to make them into more mature democracies which embrace legitimate protest and free speech. But there cannot be one rule for us and another for ‘them’. If we want Facebook to take action to ensure that the Russians cannot interfere in elections in the US or UK, then we have to accept that the Russian election authorities will want to enforce the laws that exist in their country too. Facebook could consider prominent links to respected election observer reports (such as those by OSCE/ODIHR, the OAS or the EU) which highlight shortcomings in a country’s election structures and media freedom.