AI deepfakes and EU politics: is there a threat to democratic procedures?
February 28, 2024

Author: Ukrainian YEA Nazar Syvak

Technology has developed drastically in the last decade. Where it previously took a whole team of professionals to generate or modify audio-visual material, today almost anyone with internet access can use artificial intelligence (AI) to create a convincing video or photo. People mainly use this technology for entertaining and harmless purposes, like fixing the background of a private photo or editing videos for their social media channels. However, there is growing concern about AI being used to create deceptive material, also known as ‘deepfakes’. Deepfakes are often deployed by internet trolls and criminals, but could they threaten democratic procedures if used by political actors? Many European citizens share this concern: 70% of citizens in the UK and Germany, and 57% in France, say they are worried about the threat that deepfakes and AI pose to elections. So, how have AI deepfakes been exploited already? What threat do they pose to the 2024 election season, and how is the EU addressing the issue?

AI deepfakes and EU politics

The first major study on deepfakes in European policy, commissioned by the European Parliamentary Research Service, was released in July 2021. It acknowledged the potential for misuse of AI deepfake technology and assessed “the technical, societal and regulatory aspects of deepfakes.” The report analysed the legislative framework on AI and provided policy recommendations and options to limit the risks of deepfakes while harnessing their potential.

Since then, there have been numerous major cases of AI-generated deepfakes being used to influence politics within the EU and Europe. The first instances targeted EU foreign policy and public opinion and were employed by pro-russian[1] actors.

For example, in March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy announcing Ukrainian capitulation was posted online. Later, in June 2022, a deepfake of the mayor of Kyiv was used to trick European politicians, including the mayors of Berlin, Madrid, and Vienna, during video conference calls. These operations targeted both Ukrainian and international audiences, aiming to spread uncertainty and confusion about the situation in Ukraine during the initial stage of the russian invasion, and reflected the russian irregular warfare strategy. While the deepfakes did not have a significant impact on EU-Ukraine relations or EU support for Ukraine, it became evident that the technology had become sophisticated enough to serve as a tool for information operations by malicious actors.

However, 2023 brought a new challenge – the use of AI deepfakes to deceive society and influence politics and elections in European countries. The first major case occurred in August 2023, during the Polish national election campaign. The Civic Platform party was caught mixing AI-generated clips of words ‘spoken’ by Prime Minister Mateusz Morawiecki with real footage in its campaign advertisements, without any disclaimer in the ads. After facing public criticism, the party released a statement confirming that the material was artificially generated. Later, in October 2023, the leader of the UK’s opposition Labour Party, Keir Starmer, was targeted in a deepfake campaign: an audio clip of him swearing at staffers went viral and gained millions of hits on social media.

The most prominent scandal involving AI deepfakes occurred in Slovakia in October 2023. Two AI-generated voice recordings of the Progressive Slovakia party leader surfaced on the internet, in which he appeared to discuss a plan to rig the election, partly by buying votes from the Roma minority, as well as a proposal to double the price of beer after winning. Earlier, an opposing party had used an AI audio impersonation of him in a video ad, though that clip carried a declaration towards the end that the voices were fake. Progressive Slovakia eventually lost the elections, and no penalties were imposed on the parties employing AI deepfakes.

AI deepfakes and the 2024 election season

2024 will bring major elections in the EU and worldwide. Nearly 65 major elections are scheduled, including the European Parliament elections, involving more than a quarter of the global population. The ‘Year of Democracy’, as 2024 is now being called, is dangerously susceptible to the use of AI deepfakes for social manipulation aimed at influencing election outcomes.

Many experts suggest that deepfakes could become the ‘fake news’ of 2024. AI has become widely available, and deepfakes are easy, quick, and cheap to create. Their use during elections could overwhelm election infrastructure and journalists’ fact-checking processes: the sheer number of generated fakes could prevent the media and society from effectively distinguishing factual from fabricated information. Moreover, while photos and videos are relatively easy to fact-check, voice recordings take considerable time and expertise to verify. This is especially critical in the final days before an election, as the Slovak case showed. The AI-generated fakes there were posted during the 48-hour moratorium ahead of the polls’ opening, when no political campaigning was allowed, leaving experts too little time to examine the recordings and the targeted politicians unable to effectively debunk the accusations.

Deepfakes could also breed distrust in information and data. Citizens could become confused by the influx of AI-generated material and start questioning legitimate material as well. Malicious politicians could dismiss genuine accusations as AI manipulations, while other candidates could be flooded with deepfakes.

On the other hand, some experts are sceptical of the potential of AI deepfakes to threaten democratic processes. They argue that the real capabilities of AI are the subject of much speculation, noting that the technology was already in play in 2020 yet had no significant effect on previous elections.

How does the EU address the problem of deepfakes?

The EU has released many reports on the employment of AI and its malicious uses, including the generation of deepfakes. The EU has taken a serious stance on developing preemptive measures against AI misuse and, in December 2023, reached agreement on the EU AI Act, the world’s first regulation on artificial intelligence. Article 52(3) is devoted specifically to deepfakes: the act does not outlaw their use, but regulates them through transparency obligations placed on creators.

The EU cybersecurity agency ENISA is also developing a framework for tackling AI-generated manipulation. It considers the development of cybersecure infrastructure and the promotion of the integrity and availability of information to be among the main ways to tackle AI-related risks during elections. Moreover, the EU has started working with external stakeholders on deepfake detection. These include Meta, X and TikTok, which will be required to identify and label AI-generated content in accordance with the Digital Services Act (DSA). Lastly, the EU has begun creating resources to help citizens detect and respond to deepfakes. Citizen awareness is a crucial part of protecting the upcoming European elections from disinformation, and by equipping the electorate, the EU can help voters make a conscious choice, free of manipulation.

AI-generated content is a new phenomenon that has not yet been fully studied. While AI is a beneficial tool that brings society new possibilities, deepfakes pose a serious threat to the democratic process. The EU stands at the forefront of developing comprehensive legislation and cybersecurity measures to address the threat, but 2024 will test their effectiveness and reveal the true potential of AI-generated content to shape election campaigns around the world.


[1] I choose not to capitalise the country ‘russia’ or its adjective ‘russian’ as a way of showing support for Ukraine through written language. The atrocities committed by the russian regime and its supporters call for its non-recognition and isolation from the international community; hence, the symbolic choice to use an uncapitalised ‘r’.
