Could deepfakes disrupt the November elections? How can we effectively manage their spread, and what roles should governments, technology companies, and the public play in safeguarding election integrity?
Deepfakes, AI-generated media that can manipulate visuals and audio, pose a growing threat to election integrity. In the 2024 New Hampshire primary, AI-generated robocalls mimicking President Joe Biden’s voice urged Democratic voters to skip the primary, risking skewed results and depressed turnout.
It’s a serious concern highlighted by recent incidents. In Slovakia, just before last September’s parliamentary election, an AI-generated audio recording falsely portrayed a political leader plotting election manipulation and making outlandish promises about beer taxes. This misleading content spread rapidly on social media, underscoring how advanced deepfake technology can create convincing yet fraudulent media.
These are not isolated cases; a growing number of incidents underscore a critical issue: as AI technology advances, the tools for creating and disseminating misleading media become more accessible and harder to detect. This escalation has heightened concerns among lawmakers and the public, particularly with the 2024 US election cycle looming. The misuse of deepfake technology not only challenges the integrity of the media but also threatens the foundations of democratic processes and public trust.
What are Deepfakes?
Deepfakes are a type of synthetic media created using artificial intelligence, specifically deep learning techniques. The term “deepfake” is derived from the combination of “deep learning” and “fake,” indicating the AI-driven nature of this technology. Essentially, deepfakes involve the use of neural networks to manipulate or generate visual and audio content that mimics real-life people in a highly convincing manner. This can include altering a person’s facial expressions, voice, or movements in videos, or even creating entirely fabricated yet realistic-looking individuals.
Deepfake technology operates by training AI models on large datasets of real video, audio, and image content. Once trained, these models can generate new content that blends seamlessly with the original, making it extremely difficult to discern any tampering. The result is a synthetic piece of media that can convincingly portray someone saying or doing something they never actually did.
Evolution and advancement of deepfake tools over time
The development of deepfake technology has been rapid, evolving from rudimentary and easily detectable alterations to highly sophisticated and nearly indistinguishable fabrications. The early versions of deepfake technology were largely experimental, created by tech enthusiasts and researchers as proof-of-concept projects. However, with advancements in machine learning algorithms, especially in generative adversarial networks (GANs), deepfakes have become far more accessible and effective.
Initially, deepfake tools required significant technical expertise and computational power, but today, some user-friendly applications and platforms allow almost anyone to create deepfakes with minimal effort. This democratisation of technology has raised serious concerns about its potential for misuse, as the barriers to creating convincing fake content have been significantly lowered.
Common uses and potential for misuse
While deepfakes can be used for benign purposes, such as in entertainment, satire, or even educational content to resurrect historical figures, their potential for misuse is vast and alarming. In the political arena, deepfakes pose a particularly severe threat. They can be used to spread misinformation, defame political opponents, or incite unrest by creating fabricated videos of candidates making inflammatory statements or engaging in unethical behaviour.
Beyond politics, deepfakes have been used in cybercrime, such as creating fake audio or video to deceive individuals or organisations into transferring funds or sensitive information. They have also been employed in malicious activities like non-consensual pornography, where the faces of individuals are superimposed onto explicit content without their consent.
As deepfake technology continues to advance, its potential for both creative use and harmful exploitation will likely grow, making it a critical issue for governments, businesses, and individuals alike to address.
Case Studies: Deepfakes and past elections
The rise of deepfake technology has had significant implications for electoral processes around the world. As technology has evolved, so too has its potential for misuse in the political arena, where it can be employed to mislead voters, manipulate public opinion, and even alter the outcomes of elections. Here are a few notable examples of deepfake usage in recent global elections:
2019 Indian general election: Misleading political videos
During the 2019 Indian General Election, deepfakes were used to create misleading videos that circulated widely on social media platforms. One particularly infamous case involved a manipulated video of a prominent political figure, where the audio was altered to make it appear as though the individual was speaking in a different language to appeal to various linguistic demographics. This deepfake was designed to resonate with a broader audience and was widely shared, contributing to confusion and division among voters. The incident highlighted the ease with which deepfake technology could be used to exploit regional and linguistic differences, thereby influencing voter behaviour.
2020 US Presidential Election: Concerns about misinformation
The 2020 US Presidential Election saw a heightened awareness and concern about the potential use of deepfakes for misinformation. Although there were no confirmed high-profile deepfake incidents directly impacting the election, the fear of such manipulations led to widespread discussions about the potential threat. The election period was rife with misinformation and disinformation campaigns, some of which utilised AI-driven technologies to create deceptive content. The mere possibility of deepfake use created an environment of distrust, with voters and media outlets increasingly wary of the authenticity of the content they encountered.
Brazilian Presidential Election: Manipulation of candidates’ images and speeches
In Brazil, the manipulation of candidates’ images and speeches through deepfakes has been a growing concern, particularly in the lead-up to presidential elections. During recent elections, there were instances where deepfake videos were created to distort candidates’ positions or portray them in a negative light. For example, videos were circulated that appeared to show candidates making inflammatory or controversial statements, which they had never actually made. These deepfakes were often spread through social media, reaching large audiences and contributing to the polarisation of the electorate.
Mechanisms of spread: How deepfakes could influence voters
Deepfakes have the potential to significantly influence voter behaviour, primarily due to the mechanisms through which they can be spread and the psychological factors that make them particularly persuasive. In the context of the 2024 US general elections, understanding these mechanisms is crucial for recognising the threat they pose and for developing strategies to mitigate their impact.
Social media as a powerful tool for rapid dissemination
Social media platforms such as Facebook, Twitter, Instagram, and TikTok have become central to the dissemination of information, particularly during election cycles. These platforms allow content to be shared instantaneously with millions of users, often without sufficient scrutiny or verification. Deepfakes, with their highly realistic visuals and audio, can easily go viral, reaching a vast audience in a short period.
The algorithms that govern social media platforms are designed to promote engaging content, which often includes sensational or controversial material. This makes deepfakes particularly prone to widespread dissemination, as they are likely to generate high levels of engagement, including shares, comments, and reactions. Once a deepfake is in circulation, it can be difficult to contain, even if it is later debunked. The rapid spread of deepfakes on social media can therefore have an immediate and profound impact on public opinion, influencing voters’ perceptions of candidates and issues before accurate information can be provided.
The role of echo chambers and confirmation bias in spreading deepfakes
Echo chambers and confirmation bias are key psychological factors that contribute to the spread of deepfakes. An echo chamber is an environment where individuals are exposed primarily to information and opinions that reinforce their existing beliefs. On social media, users are often part of like-minded communities where their views are continually validated, creating a fertile ground for the spread of deepfakes that align with these views.
Confirmation bias, the tendency to favour information that confirms one’s preconceptions, plays a significant role in the acceptance of deepfakes. When individuals encounter deepfakes that support their political beliefs, they are more likely to believe and share them, even if the content seems implausible. This bias can lead to the rapid spread of deepfakes within echo chambers, where they are rarely questioned and often amplified.
For example, a deepfake that portrays a political opponent in a negative light may be readily accepted and shared by supporters of the opposing party, further entrenching divisions and misinformation. In this way, deepfakes can perpetuate and exacerbate existing polarisation, making it even more challenging for voters to engage in informed and rational decision-making.
Deepfakes: Legal and ethical challenges in the age of disinformation
The emergence of deepfake technology presents significant legal and ethical challenges, particularly in the context of elections. As deepfakes become more sophisticated and accessible, their potential to disrupt democratic processes raises urgent questions about how to regulate their use and address the ethical responsibilities of various stakeholders.
Current US laws regarding deepfakes and election interference
In the United States, the legal framework surrounding deepfakes is still evolving, particularly when it comes to their use in elections. Currently, there is no comprehensive federal law specifically targeting deepfakes, but several legal mechanisms could be applied to address the issue:
- State-Level Legislation: A few states have begun to pass laws specifically targeting deepfakes. For example, California and Texas have enacted laws that criminalise the creation and distribution of deepfakes intended to harm or deceive voters during an election. California’s law, passed in 2019, prohibits the distribution of deceptive videos or images of political candidates within 60 days of an election, provided that the content is likely to mislead voters. However, these laws are limited in scope and enforcement, and they vary widely between states.
- Federal Election Laws: While there are no specific federal laws addressing deepfakes, existing election laws related to disinformation and election interference could potentially be applied. For instance, the Federal Election Campaign Act prohibits foreign interference in US elections, which could encompass the creation or distribution of deepfakes by foreign entities. Additionally, existing laws against defamation, fraud, and false advertising could be used to prosecute individuals or organisations that create or spread deepfakes with the intent to deceive voters.
- First Amendment Considerations: The regulation of deepfakes in the US must also contend with First Amendment protections, which safeguard freedom of speech. Any laws targeting deepfakes must balance the need to protect voters from deception with the constitutional right to free expression. This creates a complex legal landscape, where determining the legality of deepfakes may depend on the context in which they are used and their potential impact on the election.
Ethical Considerations for Political Campaigns, Media, and Technology Platforms
The ethical implications of deepfakes in elections extend beyond legal concerns, affecting political campaigns, media organisations, and technology platforms.
- Political Campaigns: Ethical political campaigning requires a commitment to truthfulness and transparency. The use of deepfakes by political campaigns, whether to promote a candidate or to discredit an opponent, would represent a profound breach of ethical standards. Campaigns have a responsibility to ensure that their messaging is accurate and not misleading. The use of deepfakes could undermine public trust in the electoral process and erode the integrity of democratic institutions.
- Media Organisations: Media outlets play a crucial role in informing the public and shaping voter perceptions. As such, they have an ethical obligation to rigorously verify the authenticity of the content they publish, especially in the context of elections. The spread of deepfakes through legitimate news channels could have a catastrophic impact on public trust in the media. Journalists and editors must be vigilant in identifying and debunking deepfakes, ensuring that their reporting is based on verified information.
- Technology Platforms: Social media and technology companies are at the forefront of the deepfake challenge, as their platforms are often the primary means through which deepfakes are disseminated. These companies face ethical dilemmas in balancing free speech with the need to prevent the spread of harmful and deceptive content. While some platforms have implemented policies to detect and remove deepfakes, the effectiveness of these measures varies. There is an ongoing debate about the extent of responsibility that technology companies should bear in monitoring and regulating content on their platforms.
The Role of International Law in Combating Deepfakes During Elections
The global nature of the internet means that deepfakes are not confined by national borders, raising important questions about the role of international law in combating their use in elections.
- Cross-Border Misinformation Campaigns: Deepfakes can be created and distributed by actors outside the jurisdiction of the country holding the election. This makes it challenging for national laws to effectively address the threat. International cooperation is essential in developing strategies to combat the use of deepfakes in election interference. This could include information-sharing agreements, joint investigations, and coordinated responses to the dissemination of deepfakes across borders.
- International Legal Frameworks: Currently, there is no international legal framework specifically targeting deepfakes. However, existing treaties and agreements related to cybercrime and election interference could be adapted to address the issue. For example, the Budapest Convention on Cybercrime provides a framework for international cooperation in combating cyber-related crimes, which could include the use of deepfakes to interfere in elections. Additionally, international human rights law, which emphasises the right to free and fair elections, could be invoked to advocate for the regulation of deepfakes.
- Global Ethical Standards: Beyond legal measures, there is a need for global ethical standards to guide the use of deepfake technology. International organisations, such as the United Nations or the International Telecommunication Union, could play a role in developing and promoting these standards. Establishing a global consensus on the ethical use of deepfakes, particularly in the context of elections, could help to mitigate their impact and protect the integrity of democratic processes worldwide.
Defending election integrity: Solutions for deepfakes
As deepfake technology continues to evolve, the threat it poses to the integrity of elections becomes more pronounced. To safeguard democratic processes, a multi-faceted approach is needed, encompassing technological solutions, policy and regulation, and public awareness and education.
Technological Solutions
- AI and Machine Learning tools for detecting deepfakes
Advanced AI and machine learning tools are at the forefront of the fight against deepfakes. These technologies can analyse video and audio content for signs of manipulation, such as inconsistencies in facial movements, unnatural audio, or pixel-level anomalies.
- Deepfake Detection Algorithms: Machine learning models trained on vast datasets of both real and fake content can be used to identify deepfakes with increasing accuracy. These models compare new content against known patterns of manipulation, flagging potential deepfakes for further analysis. Continuous updates to these models are essential to keep pace with the evolving capabilities of deepfake technology.
- Collaborative AI Initiatives: Organisations like DARPA (Defense Advanced Research Projects Agency) have launched initiatives such as the Media Forensics program, which focuses on developing automated tools to detect and verify media authenticity. By fostering collaboration between government agencies, tech companies, and academia, these initiatives aim to create robust defences against deepfakes.
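To make the detection idea concrete, here is a deliberately simplified sketch of one heuristic such tools rely on: flagging frames whose measured facial motion jumps implausibly between consecutive frames, a common artefact of frame-by-frame face swapping. Real detectors are trained neural networks operating on raw pixels and audio; the feature values and threshold below are purely hypothetical.

```python
from statistics import median

# Toy illustration of temporal-consistency checking, one signal real
# deepfake detectors look for. Production systems use trained neural
# networks; this scalar "landmark motion" feature is invented here.

def flag_inconsistent_frames(landmark_motion, threshold=3.0):
    """Return indices of frames whose frame-to-frame motion change is
    far larger than is typical for the clip."""
    deltas = [abs(b - a) for a, b in zip(landmark_motion, landmark_motion[1:])]
    if not deltas:
        return []
    typical = median(deltas)  # robust baseline for "normal" motion
    return [i + 1 for i, d in enumerate(deltas) if d > threshold * typical]

# A smooth sequence with one abrupt jump gets the frames around the
# discontinuity flagged:
motion = [1.0, 1.1, 1.2, 9.0, 1.3, 1.2]
suspicious = flag_inconsistent_frames(motion)  # -> [3, 4]
```

The point of the sketch is only the shape of the pipeline: extract a per-frame feature, establish what "normal" looks like, and flag deviations for human review.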
- Blockchain and other verification methods for authenticating content
Blockchain technology offers a promising solution for content authentication, providing a secure and transparent way to verify the origin and integrity of digital media.
- Blockchain-Based Verification: By recording the creation and distribution of digital content on a blockchain, it is possible to establish an immutable record of its authenticity. This can help verify that a video or audio file has not been tampered with, as any alteration would be immediately apparent in the blockchain record.
- Digital Watermarking: Another approach involves embedding digital watermarks or metadata into content at the time of creation. These watermarks can be verified later to ensure the content has not been altered. Combining blockchain with digital watermarking enhances the ability to track and verify the authenticity of media throughout its lifecycle.
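The two ideas above can be sketched in a few lines. The fragment below chains content hashes so that any later alteration breaks the recorded link, and attaches an HMAC integrity tag at creation time standing in for a digital watermark. This is a hedged illustration of the verification principle, not any production system; the record format and signing key are invented for the example.

```python
import hashlib
import hmac

# Illustrative only: a minimal append-only hash chain for media records,
# plus an HMAC-based integrity tag standing in for a watermark.

def chain_record(prev_hash: str, content: bytes) -> str:
    """Link a piece of content to the previous record's hash."""
    return hashlib.sha256(prev_hash.encode() + content).hexdigest()

def make_tag(key: bytes, content: bytes) -> str:
    """Attach an integrity tag at creation time (the 'watermark')."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_tag(key: bytes, content: bytes, tag: str) -> bool:
    """Any alteration of the content invalidates the tag."""
    return hmac.compare_digest(make_tag(key, content), tag)

# Publish a clip, then check that a tampered copy fails verification.
key = b"creator-signing-key"               # hypothetical creator secret
original = b"frame-data-of-the-real-clip"  # hypothetical media bytes
tag = make_tag(key, original)
record = chain_record("GENESIS", original)

assert verify_tag(key, original, tag)             # authentic copy passes
assert not verify_tag(key, original + b"x", tag)  # edited copy fails
```

In a blockchain setting, each `record` would be written to a shared ledger, so a verifier can recompute the hash of a circulating file and confirm it matches the immutable entry made at publication time.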
Policy and Regulation
- Legislative efforts to curb the use of deepfakes in political campaigns
To address the threat of deepfakes, governments need to implement clear and enforceable regulations.
- Federal and State Legislation: As discussed earlier, some states in the US have already enacted laws targeting the use of deepfakes in elections. Expanding these efforts at the federal level would provide a more uniform approach to combating deepfakes. Legislation should include strict penalties for creating or distributing deepfakes with the intent to deceive voters or interfere in elections.
- International Cooperation: Given the global nature of the internet, international agreements and cooperation are necessary to combat the use of deepfakes in elections. Countries should work together to establish common standards and protocols for detecting and mitigating the spread of deepfakes, particularly when they are used for election interference.
- Responsibility of social media platforms and news outlets in mitigating spread
Social media platforms and news outlets have a crucial role to play in preventing the spread of deepfakes.
- Content Moderation and Filtering: Social media companies must invest in advanced content moderation systems capable of identifying and removing deepfakes before they go viral. This includes using AI tools to scan uploaded content for signs of manipulation and flagging suspicious material for human review.
- Transparency and Labelling: Platforms should implement transparency measures, such as labelling content that has been flagged as a potential deepfake or providing context when content is known to be manipulated. This helps users make informed decisions about the content they encounter.
- Partnerships with Fact-Checkers: Collaborating with independent fact-checking organisations can enhance the ability of platforms to quickly identify and debunk deepfakes. These partnerships can also provide users with reliable information to counteract the effects of disinformation.
Public Awareness and Education
- Promoting media literacy among voters
Educating the public about deepfakes and other forms of digital manipulation is essential to reducing their impact.
- Media Literacy Campaigns: Governments, educational institutions, and non-governmental organisations should launch media literacy campaigns to help voters recognise deepfakes and other deceptive content. These campaigns should focus on teaching critical thinking skills, encouraging scepticism of sensational or out-of-context content, and providing practical tools for verifying the authenticity of media.
- Educational Resources: Developing and distributing educational resources, such as guides, videos, and workshops, can empower individuals to better understand and combat deepfakes. These resources should be accessible to people of all ages and backgrounds.
- Initiatives by governmental and non-governmental organisations to inform the public
Both governmental and non-governmental organisations (NGOs) play a vital role in raising awareness about the dangers of deepfakes.
- Government Initiatives: Governments can take the lead by funding public awareness campaigns, supporting research into deepfake detection, and collaborating with international partners to address the global nature of the threat.
- NGO Efforts: NGOs can complement government efforts by providing independent, non-partisan information to the public. Organisations focused on digital rights, election integrity, and media literacy can work to ensure that voters are informed and prepared to identify and resist deepfakes.
With a driving passion for creating relatable content, Pallavi progressed from freelance writing to full-time professional work. Science, innovation, technology, and economics are a few (but not all) of the fields she is zealous about. Beyond writing content for intelligenthq.com, citiesabc.com, and openbusinesscouncil.org, she loves reading, writing, and teaching.