21.02.2024
AI and the 2024 elections: see and disbelieve
The year 2024 sets a historic record, bringing some 4 billion citizens to the polls to elect their leaders and representatives. Elections for various offices will be organized in more than 70 countries, and presidential elections will be held in 23 of them (including the United States and Russia). Romania is no exception, with all four types of elections - local, parliamentary, European Parliament and presidential - on the list.
Attempts to manipulate the electorate through various means of propaganda are nothing new; they are almost intrinsically part of the rules of the game. What is new, and far more challenging for the 2024 elections, are the tools for large-scale disinformation made possible by artificial intelligence. With the help of these technologies, phenomena such as deep fakes have taken on alarming proportions, with long-term effects that cannot yet be foreseen.
If we were impressed by the Dall-E image generator in 2021 and the ChatGPT text generator in 2022, this month OpenAI (the same company that developed Dall-E and ChatGPT) publicly announced a new project: Sora, a video content generator[1]. Currently available only to a small team of testers, Sora is an AI model that can create realistic or fictitious sequences based on text-to-video modeling. In turn, Google and Meta (Facebook) have expressed interest in developing similar tools.
In this context, concerns about the impact of these new technical possibilities on the conduct of the 2024 elections are justified. Whereas in the past, voter influence techniques required considerable resources and niche specialist knowledge, nowadays all that is needed to create deep fake content is an internet connection and access to an instruction-based content generation tool. Compared with the current and future risks, the Cambridge Analytica scandal may come to look like a mere precursor.
And we are not talking only about hypothetical situations; there are already concrete examples. Last month, in New Hampshire (USA), robocalls imitating the voice of incumbent President Joe Biden were reported, attempting to discourage people from voting in the primary elections[2].
Similarly, just days before the October 2023 parliamentary elections in Slovakia, a recording circulated online of an alleged conversation between a journalist and Michal Šimečka (leader of the pro-European party Progresívne Slovensko and Vice-President of the European Parliament), in which the latter appeared to discuss committing electoral fraud[3]. A few days later his party was defeated, and the recording turned out to be a deep fake. Although no causal link can be established between this event and the election result, the example shows how dangerous such practices are, especially when deployed in the final moments of the electoral contest. With no time left for official verifications and denials, the greatest harm is already done: doubt has been sown.
Finally, another recent deep fake portrays Frederick Christ Trump Sr (who died 25 years ago), father of former President Donald Trump, in a video harshly criticizing his son[4]. Importantly, in this last case the video states at the end that it was generated using artificial intelligence and that The Lincoln Project (a self-described pro-democracy organization) paid for the content and takes responsibility for it.
In the face of these increasingly serious challenges, the major Big Tech companies (including Google, Meta, Microsoft, OpenAI, TikTok and X) signed an agreement a week ago at the Munich Security Conference called "A Tech Accord to Combat Deceptive Use of AI in 2024 Elections". The accord sets out a set of common goals and steps to prevent and minimize the risk of artificial intelligence being used to mislead voters.
Being more of a soft law instrument, a declaration of common intentions with no binding effect on the signatories, this agreement is a good start in the fight against large-scale manipulation of citizens, but it cannot replace a solid legislative framework with clear rules and concrete sanctions. Technology is evolving so fast that legislators struggle to keep up. At the European level, although considerable efforts have been made, the most eagerly awaited regulation in this field (the AI Act) is expected to receive final approval in the European Parliament in April 2024, which would mean it enters into force in 2026 (barring any changes).
At the national level, although debates on banning deep fakes have been going on since last year, a week ago the Chamber of Deputies decided to refer the bill back to the Committee on Culture, Arts and Media and the Committee on Information and Communication Technology. The main criticisms concerned the imprecise terminology used, as well as the proposed system of punishments[5]. More specifically, the debate concerns whether the bill should provide for a prison sentence of six months to two years or retain only pecuniary sanctions. Although a flawed law can often do more harm than no law at all, in the current context the legislator's passivity or slowness in designing a coherent regulatory framework for the use of AI could leave the way open to frauds whose subtlety will in any case make them difficult to prove and quantify.
Thus, for the time being, the only weapon left at our disposal against disinformation techniques is critical thinking: the careful analysis and evaluation of the information that floods every communication channel. For now, details such as imperfect synchronization between a speaker's lip movements and the audio, a lack of voice inflection, or unnatural hand movements are indications that video and/or audio content was generated using AI.
In the future, it is very likely that these imperfections will be corrected and AI-generated content will become increasingly convincing, in which case training the ability to discern fact from fiction will have to be (if it is not already) high on the educational agenda.
An article by Angelica Alecu-Ciocîrlan, Associate (aalecu@stoica-asociatii.ro), STOICA & Associates.
[2] https://news.sky.com/story/fake-ai-generated-joe-biden-robocall-tells-people-in-new-hampshire-not-to-vote-13054446
[3] https://www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections
[4] https://www.euronews.com/next/2024/02/19/youre-a-disgrace-donald-trumps-late-father-resurrected-by-ai-to-blast-him-ahead-of-electio
[5] https://romania.europalibera.org/a/32816725.html