Tackling deepfakes and disinformation in elections

The next General Election will be the Deepfake Election. What should be done to regulate it? Alex Goodman KC and Joseph Harrod set out a way forward.

Last week, Javier Milei was elected president of Argentina after an election in which both candidates were reported to have used artificial intelligence. Milei, the right-wing candidate, is said to have deployed AI-generated images depicting his rival, Mr Massa, as a Chinese Communist leader.

On 9 October 2023, Sir Keir Starmer became the first senior British politician to find himself the subject of a ‘deepfake’ audio clip circulating widely on social media, in which he appeared to berate a junior staffer for losing some equipment. The clip was quickly flagged by journalists, technical experts and even Conservative MPs as a deepfake created using AI-powered voice-mimicry software to overlay Starmer’s voice on a track recorded by the X user El Borto. Ultimately the furore did more to draw attention to the dangers posed by sophisticated deepfake media created using AI than it did to discredit Starmer.

Bad actors leveraging software like Photoshop to mimic or distort reality have played a part in global politics and geopolitics for years. However, recent advances in generative AI allow much more realistic mimicry, as well as rapid, low-cost generation of deepfake images and video. This trend is likely to accelerate from 2023 onwards.

Machine learning models can even be set up to test several versions of a deepfake image or media clip, see which gets the best response, and then disseminate deepfakes and disinformation at volumes well beyond human capacity. With general elections due within twelve months in America, India, the UK, Indonesia, across the EU and elsewhere, there is a pressing need to control deepfakes and disinformation around elections. If this problem is not addressed with urgency, there may be no credibly elected legislatures left.
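For readers unfamiliar with how cheaply this automation scales, here is a minimal sketch of the variant-testing loop just described. It is purely illustrative: generate_variant and measure_engagement are hypothetical stand-ins for a generative model and a platform engagement metric; no real model or platform API is implied.

```python
import random

def generate_variant(seed: int) -> str:
    """Stand-in for a generative model producing one candidate clip or image."""
    return f"variant-{seed}"

def measure_engagement(variant: str) -> float:
    """Stand-in for observed click/share rates on a small test audience."""
    random.seed(variant)  # deterministic per variant, purely for the demo
    return random.random()

# Generate many variants, test each one, and amplify only the winner --
# a loop that runs far faster than any human operation could.
variants = [generate_variant(i) for i in range(100)]
best = max(variants, key=measure_engagement)
print(f"amplify: {best}")
```

Nothing in this loop requires human judgement at any step, which is why the volume of output can so easily outstrip human capacity to rebut it.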

Deepfakes in recent elections

Detailed interrogation of Milei’s use of artificial intelligence has not yet been undertaken, but Argentina’s election is far from the “first” AI Election.

Donald Trump Jr recently shared a deepfake of CNN anchor Anderson Cooper declaring “That was President Donald J. Trump ripping us a new asshole here on CNN’s live presidential townhall,” with the comment “I’m told this is real… it seems real and it’s surprisingly honest and accurate for CNN… but who knows these days.” Meanwhile, Ron DeSantis has posted deepfake videos of Donald Trump embracing Dr Anthony S. Fauci.

In India, the Bharatiya Janata Party caused controversy in the 2020 elections when it used deepfake technology to promote one of its own candidates, creating a fake video of him speaking fluently in a Hindi dialect common among prospective electors. Deepfakes have also been put to hostile purposes since at least 2019: a video that year was designed to make Nancy Pelosi appear slurred and ill, and more recently a deepfake image of an explosion at the Pentagon briefly caused a dip in stock markets. The European Commissioner Thierry Breton’s recent letter to Meta founder Mark Zuckerberg, calling on Facebook and Instagram to better moderate the spread of disinformation surrounding the Israel-Hamas conflict, specifically mentions deepfake content.

Forms of disinformation and deepfakes can be expected to influence the next General Election in the UK. A particular difficulty arises with deepfakes that evade both the regulation of comments about candidates and the regulation of material from political parties, yet are nevertheless capable of influencing popular opinion.

Consider this scenario. An asylum-seeker who has recently arrived by boat has committed a horrendous crime, the appalling details of which are spread across social media. For some days it becomes the central narrative of the election campaign. Then a hotel housing migrants is burnt down, perhaps by a far-right group. The anti-immigrant agenda is the only show in town. But within three days it is clear that the original allegations were largely fake: the perpetrator wasn’t an asylum-seeker, and he didn’t arrive by boat. The narrative is now out of control and truth has no footing. Perhaps, contrary to current polling suggesting a comfortable Labour victory, the Conservatives pull off an improbable hung parliament.

Why do we feel this is not how an election should be run? It offends our sense of the fairness which should underpin an election in a democracy. British legislation has long tried to address this kind of corruption. Section 22 of the Bribery at Elections Act 1842 made “treating” a criminal offence of “corrupt practice”, committed where a person provides “any meat, drink, entertainment… for the purpose of corruptly influencing that person … to vote or refrain from voting”. The current section 114 of the Representation of the People Act 1983 is derived from s.22 of the Bribery at Elections Act 1842 and still prohibits such corrupt practices. Section 114A of the 1983 Act prohibits a person from exerting undue influence – a crime that includes “placing undue spiritual pressure on a person” and “doing any act designed to deceive a person in relation to the administration of an election”.

Our electoral laws therefore already embody the value that an election should be fair and honest. They already prohibit the pressuring of people’s spirits, the deceiving of people’s minds and the corruption of their will. The problem is not that we do not value the integrity of elections, nor is it that parliament does not have the intent to preserve it. Our electoral laws are intended to combat the corruption of fair elections, but they are aimed at mischiefs and technologies of a different kind to the deepfake elections that we are now experiencing. The means of combating corruption have not kept pace with technological innovation.

How do we combat AI-led electoral interference?

Here is a solution.

First, we need new laws, but we should not allow parliament to get caught in the traps of re-arguing principles about the right balances between free speech, regulation of media and fair elections. The balances have been struck and the principles set by existing legislation. Re-opening those issues will inevitably result in years of delay promoted by vested interests: social media platforms have demonstrated themselves to be adept at obstructing and watering down primary legislation aimed at holding them legally accountable.

Criminal law cannot be the primary way forward. The particular problem posed by those who exploit generative AI to create deepfakes is that automated misinformation can be generated at a near-limitless pace and volume. AI is also capable, as the Cambridge Analytica scandal showed, of deploying highly targeted psychological and profiling data, and it can learn the best way to reach its target audience. The prohibition of corrupt practices by criminal offences is too slow, too retrospective and too limited to combat the scale and speed with which AI can overwhelm not just one constituency, but the entire electorate. In the highly unlikely scenario that the mastermind behind a deepfake is ever found, he would be prosecuted many years later – long after the damage is done.

Current laws in the UK offer little by way of powers to forestall or inhibit modern forms of manipulation. The Prime Minister is reported to be considering a light-touch requirement for the labelling of AI-generated material. As yet, however, there is no legislation specifically directed at deepfakes and misinformation in elections.
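To illustrate what a light-touch labelling requirement could amount to in practice, here is a minimal sketch, loosely inspired by content-provenance schemes such as C2PA. All of the field names and the signing arrangement are assumptions for the purpose of illustration, not any real or proposed standard.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"example-signing-key"  # hypothetical key held by a labelling authority

def label_ai_content(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed provenance record declaring the content AI-generated."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check integrity (the hash matches) and authenticity (the signature is valid)."""
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False  # the media was altered after labelling
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

clip = b"...audio bytes..."
label = label_ai_content(clip, generator="voice-mimicry-model")
print(verify_label(clip, label))         # True: label intact
print(verify_label(clip + b"x", label))  # False: content modified after labelling
```

The obvious weakness is that a label proves only what a compliant generator chose to declare; a bad actor simply omits it. Labelling alone cannot therefore carry the regulatory burden.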

Secondly, we suggest an update to the existing rules which govern impartiality in broadcasting. Section 92 of the Representation of the People Act 1983 prohibits attempts to circumvent those rules by broadcasting from abroad. Yet the legislation does not extend its reach to social media platforms: you can sit watching a smart TV flicking between the regulated BBC and unregulated YouTube. This is anomalous and outdated. There is nothing controversial about suggesting that these innovations should be regulated: we already have laws that attempt to regulate the fair conduct of elections; they are merely outdated.

Thirdly, there needs to be a means of enforcement. The old system of retrospective criminal punishments, election petitions and the (incredibly slow) Information Commissioner is not fit to meet the threat to democracy from AI. There will need to be powers to act at great speed to halt the spread of misinformation and deepfakes. There may be complex issues involving the navigation of both commercial confidentiality and – as with Cambridge Analytica – unlawful misuse of personal data. The Electoral Commission, Ofcom or other existing regulators might be given enhanced powers and resources, but it is probably better that a new specialist body with technical understanding and expertise is established. Existing models for such an organisation might be adapted. For example, the Independent Panel on Terrorism in the UK provides a model for judicial oversight of confidential material that might be used for the inspection of matters like AI tools, example outputs and data about the use of these tools.

The likely response of social media platforms

The adoption of a panel model and injunctive powers will be resisted by social media platforms. Nevertheless, they will work with the outputs if they can be persuaded that the intent is impartial and beneficial to users. The revenue generated by people paying to promote deepfakes is negligible, but there is no doubt that outrage and anger keep people on sites for longer than neutral and accurate content does. However, the platforms seem to understand that too much confusion and chaos ultimately reduces user numbers, and the willingness of advertisers to appear on the platform: take the rapid decline of advertising revenue at X within a month of Elon Musk’s takeover. Cooperation with a fast-moving injunctive panel, and the lessons the platforms will derive from cases flagged through that body, should be welcomed, or at least utilised and acted upon, by Meta, YouTube, X, TikTok and others.

Any cooperation from social media platforms with our proposed solution will come from helping them swiftly and accurately to prevent threats to the safety and enjoyment of their communities. Social networks are genuinely afraid of losing users by creating a toxic environment. The recent FIFA Men’s World Cup provides an example of Meta, YouTube, X and TikTok embracing external technical and legal solutions from third parties to help rapidly moderate toxic behaviour.

In the longer run, the harder but equally necessary part of updating the law will be imposing liability on social media platforms that fail to halt the dissemination of misinformation or deepfakes.

Another option might be to expand the remits of the panoply of existing public regulators of equality, finance, telecoms and competition. The Equality and Human Rights Commission could be resourced on an ongoing basis to combat racial and other biases in AI. The current UK system of the Information Commissioner and Information Tribunal needs to be adapted to meet the specific challenges of AI. Police and other agencies will also need to be empowered and resourced to investigate criminality.

But if our democratic processes and our legislatures lose their legitimacy next year because we end up with governments elected on the back of deepfakes, all this will be made much more difficult. The government has shown interest in artificial intelligence, but apparently much less in regulating it. There is a small window of opportunity for the UK to take a global lead in trying to safeguard our democracies.

Alex Goodman KC is Joint Head of Public Law at Landmark Chambers, practising in the law of elections, human rights and public administration, with a particular interest in artificial intelligence.

Joseph Harrod is Chief Operating Officer at Signify, an ethical data science company which he co-founded in 2017 and which uses data science and artificial intelligence to give politicians, brands and media owners a clearer idea of the issues that really matter.

This article first appeared on Landmark Chambers' Public and Administrative Law blog.