Deepfakes and Election Security
Democracy, Elections, and Deepfakes: The Conflict Between Nations and Global IT Companies
Summary
As the South Korean parliamentary election in April 2024 and the U.S. presidential election in November 2024 approach, fake news has emerged as one of the most critical issues in election campaigns. The World Economic Forum (WEF), in the 2024 Global Risks Report released for its Davos meeting, identified misinformation driven by AI-generated content such as deepfakes as one of the most severe global risks. The impact of fake news on elections first gained global attention during the 2016 U.S. presidential election. As information technology, and artificial intelligence (AI) in particular, advances and increasingly shapes election processes, election security has become a major political issue.
Election security was traditionally understood as protecting election infrastructure against cyberattacks. New factors that influence voter behavior have since emerged, however, with deepfake videos among the most pressing. The sophistication of deepfake videos amplifies the effectiveness of fake news, undermining voters' ability to make informed and fair decisions.
Despite the severity of this threat, governments worldwide have failed to take proactive measures against deepfake-related election risks, instead shifting responsibility to major global IT companies. These companies, for their part, have responded passively, citing technological limitations and the immense resources required, and have failed to implement effective countermeasures. With the deepfake problem unresolved, voters continue to be misled, and tensions between governments and IT firms deepen. In South Korea, enhancing election security requires regulatory legislation on deepfakes and a stronger role for the National Election Commission.
Introduction
The 2016 U.S. presidential election highlighted the global impact of fake news, which became a key tool in election campaigns through social media. AI-powered technology has since escalated the issue, making election security a critical political concern.
With social media playing an increasingly dominant role in election campaigns, issues related to public opinion manipulation are no longer confined to domestic politics but have become sources of international conflict. A notable example is Russia’s alleged interference in U.S. elections, which primarily occurred online through email hacks, fake news, and deepfake videos. Given that such interventions spread rapidly via social media, their influence is profound.
Election security was initially understood in terms of cyberattacks targeting election infrastructure. As more of the electoral process moves online, the attack surface grows; the U.S., for example, introduced online voter registration in multiple states under the Military and Overseas Voter Empowerment (MOVE) Act of 2009. Cyberattacks have already disrupted elections worldwide, such as the 2011 Distributed Denial-of-Service (DDoS) attack on South Korea's National Election Commission website and the reported 2015 hacking of a Japanese local election database. These cases highlight the growing risks that cyber threats pose to voter registration systems, election websites, and voter databases.
More recently, election security threats have expanded beyond infrastructure attacks to include the manipulation of voter behavior. Fake news and misinformation significantly influenced the 2018 U.S. midterm elections and the 2019 European Parliament elections. With the rise of deepfake technology, fabricated videos have further jeopardized fair voter decision-making.
Therefore, election security must now encompass not only cyberattacks on election systems but also the spread of manipulated information that distorts voter choices. Deepfake videos, in particular, enhance the credibility of fake news, misleading voters and threatening the integrity of democratic elections.
Despite this growing concern, governments have failed to respond effectively, placing the responsibility on global IT corporations such as Google, Facebook, and Microsoft. However, these companies have largely remained passive, citing technological and resource constraints. As a result, voters continue to be misled, while conflicts between governments and tech firms remain unresolved. The increasing threats to democratic elections underscore the urgent need for institutional solutions to counteract deepfake-driven election manipulation.
This article explores the influence of deepfake technology on elections, the emerging phenomenon of digital gerrymandering, and the conflict between governments and IT firms. Finally, it proposes policy recommendations for South Korea.
The Rise of 'Algocracy': Social Media’s Influence on Elections
Social media platforms have become critical tools for influencing voter behavior. In 2010, Facebook conducted an experiment on 61 million users during the U.S. midterm elections to examine the platform’s impact on voter turnout. The study divided users into three groups and displayed different versions of an election reminder message. The results indicated that messages featuring friends who had voted had the greatest impact on voter turnout.
Similar experiments were conducted in Japan’s 2016 House of Councillors election. Facebook introduced a "Vote" button, allowing users to share their voting status, which increased election engagement. These experiments demonstrate how social media can manipulate voter behavior by leveraging algorithmic influence, raising ethical concerns about election fairness.
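To make the mechanics of such experiments concrete, the sketch below shows how users might be deterministically assigned to message conditions and how turnout could be compared across groups. The group names and the hashing scheme are illustrative assumptions, not the design of the original Facebook study.

```python
# Illustrative sketch of randomized assignment for a turnout experiment.
# Group labels and the assignment scheme are assumptions for demonstration only.
import hashlib
from collections import defaultdict

GROUPS = ["social_message", "informational_message", "control"]

def assign_group(user_id: str) -> str:
    # Hash the user id so each user always lands in the same group,
    # with roughly uniform proportions across the three conditions.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return GROUPS[int(digest, 16) % len(GROUPS)]

def turnout_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    # records: (user_id, voted) pairs; returns the turnout rate per condition.
    shown = defaultdict(int)
    voted = defaultdict(int)
    for user_id, did_vote in records:
        group = assign_group(user_id)
        shown[group] += 1
        voted[group] += int(did_vote)
    return {g: voted[g] / shown[g] for g in shown}
```

Comparing turnout rates across conditions is what allowed the researchers to attribute differences in voting behavior to the message a user was shown rather than to who the user was.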
This phenomenon, known as "algocracy," refers to governance by algorithms, where AI-driven content curation impacts public decision-making. The combination of micro-targeting and AI-powered manipulation poses significant risks to democratic elections, as social media platforms shape voter perceptions and decisions in ways that may not always be transparent.
The Emergence of Deepfake Videos and Digital Gerrymandering
As the 2024 U.S. presidential election approaches, deepfake videos have become a major concern. Unlike traditional fake news, deepfakes use AI to create highly realistic but entirely fabricated footage. The term "deepfake" combines "deep learning" and "fake," reflecting its reliance on advanced machine learning to synthesize or alter video and audio.
Deepfake technology first gained attention in 2017 when AI-generated fake pornographic videos surfaced on Reddit. Since then, it has been widely used for face-swapping and creating deceptive political content. The implications of deepfake technology extend beyond elections, posing risks to national security and public trust.
Key factors driving the spread of deepfake content include:
- Post-Truth Politics: In an era where emotions outweigh facts in political discourse, deepfake videos can easily influence public opinion.
- Hyperconnected Society: With billions of social media users worldwide, fake content spreads rapidly, exacerbating its impact.
- Advancements in AI: Deepfake technology leverages Generative Adversarial Networks (GANs), in which a generator learns to produce synthetic footage while a discriminator learns to flag it as fake; as the two networks train against each other, the output becomes increasingly realistic and difficult to detect (a minimal sketch of this training loop follows this list).
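The adversarial loop behind GAN-based synthesis can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the pipeline of any specific deepfake tool; the network sizes, learning rates, and names such as `latent_dim` are assumptions chosen for brevity.

```python
# Minimal GAN training loop: a generator learns to fool a discriminator,
# while the discriminator learns to separate real from generated images.
# All sizes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3  # hypothetical noise and image sizes

# Generator: maps random noise to a (flattened) synthetic image.
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that an image is real.
D = nn.Sequential(
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator on real and freshly generated images.
    fake_images = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Update the generator so its output is scored as "real".
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The escalating competition between the two networks is precisely what makes the resulting fakes hard to spot: any artifact a detector reliably exploits is, in principle, something the generator can learn to remove.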
Notably, in 2018, filmmaker Jordan Peele collaborated with BuzzFeed to produce a deepfake video of former U.S. President Barack Obama appearing to insult Donald Trump. The viral video sparked widespread debate on the dangers of deepfake technology in politics. Since then, deepfake and otherwise manipulated videos of world leaders such as Angela Merkel and Nancy Pelosi have circulated, raising concerns over election integrity.
Unlike traditional misinformation, deepfake videos manipulate both visual and auditory perception, making them more convincing and harder to detect. Given their rapid dissemination on social media, deepfake content presents a critical challenge to democracy and national security.
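For context on what detection involves, the sketch below outlines one common baseline: a frame-level classifier that scores individual video frames as real or fake. The model choice, file paths, and labels are assumptions for illustration; in practice the classifier head must be fine-tuned on labeled real and fake frames before its scores mean anything.

```python
# Hedged sketch of a frame-level real-vs-fake classifier, a common baseline
# for deepfake detection. Backbone choice and labels are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse an ImageNet-pretrained backbone; replace the head with a binary output.
# Note: this new head is untrained and must be fine-tuned on real/fake frames.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frame(path: str) -> float:
    """Return the model's probability that a single video frame is fake."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
    return probs[0, 1].item()
```

Even with such classifiers, detection lags generation: scores must be aggregated over many frames and audio cues, and generators are continually retrained against the very artifacts detectors rely on.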
Deepfakes and Election Security: The Government vs. Big Tech Conflict
1) U.S. Government vs. Global IT Companies
Following the 2016 Russian election interference, the U.S. Congress took up deepfake-specific legislation in 2019: the Deepfake Report Act requires periodic government reports on the risks of deepfake technology, while the proposed Malicious Deep Fake Prohibition Act would impose criminal penalties for creating deceptive deepfake videos intended to influence elections.
Despite these efforts, the U.S. government continues to delegate much of the responsibility to major tech firms. However, companies like Facebook and Google have struggled to implement effective solutions due to technological and financial constraints. Although Facebook removed 3.4 billion fake accounts between 2018 and 2019, detecting and eliminating deepfake content remains a formidable challenge.
2) European Responses and Government-Corporate Conflicts
The European Union has taken a more proactive stance by incorporating election security into its Cybersecurity Act and General Data Protection Regulation (GDPR). However, specific deepfake regulations remain limited. France has introduced laws requiring transparency in online political advertising, while the U.K. has strengthened cybersecurity measures for political parties.
Conclusion: Policy Recommendations for South Korea
South Korea lacks comprehensive election security measures to address deepfake-related threats. While cybersecurity policies exist, they do not specifically target election security. The government must establish legal frameworks to regulate deepfake content, strengthen oversight of online political advertising, and enhance public awareness.
The fight against deepfake election interference requires a collective effort involving government, industry, academia, and civil society. As democratic nations worldwide confront this growing challenge, South Korea must take proactive steps to safeguard the integrity of its electoral system.