AI's threat to democracy and electoral integrity

Published on 26 February 2025 at 21:13

HARVEY MANTTAN - YEAR 12

Democracy is flourishing worldwide: 2024 was the largest election year in history in terms of eligible voters. Yet the security and fairness of elections (in the West especially) are arguably at an unprecedented low. On the verge of an imminent Artificial Intelligence (AI) revolution, we must focus attention on how AI may affect the democratic process, how to mitigate these problems and, perhaps more importantly, how democracy may benefit from new AI technologies. 

 

The world’s democratic institutions and processes are currently being transformed by the global struggle for AI; self-interested corporations are attempting to support, influence, and co-opt political outcomes in the hope of achieving a laissez-faire approach to regulation, most prominently seen in America’s recent election. Elon Musk’s controversial lottery, which rewarded registered voters who signed a pro-gun charter (in order to mobilise Republican voters), is one such example. Attempts such as this undermine the general fabric of safe and secure elections, allowing financial support to be traded for political power (Musk spent $250 million on Trump’s campaign and is now a highly ranked government official), further fuelling a circular increase in non-state agents’ political and economic power. Tech firms thereby influence the very legislation that is meant to control them; the AI revolution is quickly becoming a tool for corporate agents to accumulate wealth and power without accountability, furthering their own interests at the expense of democracy and fairness. 

 

AI poses an immense (and accelerating) risk to how we stage and vote in elections, through the erosion of trust in political institutions, mass voter manipulation, and voter hysteria. Events such as the AI-generated robocalls of Joe Biden urging voters to ‘save’ their votes for November in the New Hampshire primary could have a profound impact on voter turnout (and thus the final result). This, coupled with nearly 60% of internet users remaining unsure whether they can identify deepfakes, means that even a few viral narratives and deepfakes could feed into each other as a ‘constellation of narratives’, leading to large-scale voter hysteria. The DPRK, Russia and Iran have taken advantage of a 3,000% year-on-year increase in deepfake production to stage “mis-, dis- and malinformation campaigns” worldwide, influencing voter behaviour and election results. These campaigns lead to “perception hacking”, where false narratives about voter fraud and voting machines, as well as AI-boosted ransomware attacks, undermine electoral credibility and confidence, opening the door to authoritarianism, populism, and extremism.  

 

 

Additionally, as AI becomes widely employed in the interpretation and analysis of data pools, it becomes easier for bad actors to push an agenda to specific groups, and for social media to function as an echo chamber for pre-existing biases. This poses a significant threat: when voters are engrossed in a one-sided political spectrum, they become increasingly susceptible to fake news, partisan violence and authoritarianism, as seen in the Capitol riots of 2021.  

 

Considering the full extent of these risks, the governance and mitigation of AI in an electoral setting must be prioritised. Introducing cyber and AI expertise into national, regional and local electoral infrastructure will help stem the tide of ransomware attacks and reduce the effects of malicious deepfakes. Election officials must be trained to minimise ‘data exhaust’ from their websites, navigate phishing emails and identify AI-generated content. It is also essential that election officials become sufficiently capable of employing AI to identify anomalous results, thus reducing voter fraud [13]. These steps should strengthen confidence in the electoral process and democracy, and reduce harassment of election officials. 
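The idea of flagging anomalous results can be sketched very simply. The snippet below uses a basic z-score test over hypothetical precinct turnout figures; real systems (such as the agent-based approach in reference 13) use far richer statistical models, so this is only an illustrative sketch.

```python
# Minimal sketch of anomaly flagging for precinct-level turnout figures.
# Hypothetical data and a simple z-score test stand in for the far more
# sophisticated models used in real election-fraud detection research.
from statistics import mean, stdev

def flag_anomalies(turnouts, threshold=2.0):
    """Return indices of precincts whose turnout deviates more than
    `threshold` standard deviations from the mean turnout."""
    mu, sigma = mean(turnouts), stdev(turnouts)
    return [i for i, t in enumerate(turnouts)
            if sigma > 0 and abs(t - mu) / sigma > threshold]

# One precinct reporting 98% turnout amid ~60% norms is flagged
# for human review; it is not by itself proof of fraud.
turnouts = [0.61, 0.58, 0.63, 0.59, 0.60, 0.62, 0.98]
print(flag_anomalies(turnouts))  # [6]
```

As with the signature-matching workflows described later, a flag like this would only ever trigger human scrutiny, never an automatic decision.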

 

Mis-, dis- and malinformation campaigns must equally be tackled; digital signing using Public Key Infrastructure, in conjunction with content provenance techniques, can ‘watermark’ AI-generated content, enabling users to differentiate between human- and AI-generated material. Bias and ethical auditing could help reduce AI-generated misinformation, known as ‘hallucinations’, improving the validity of AI content. There is also a clear need for transparency: repositories of AI-produced deepfakes will help internet users more confidently identify fake news on social media, which provides over 30% of the world with its news. Finally, there is a pressing need to solve the ‘black box’ problem associated with social media targeting and AI algorithms; regular government audits and oversight of tech firms should help address this, in addition to constraining the intensity of micro-targeting campaigns and problematic data collection practices. 
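The sign-and-verify loop behind provenance watermarking can be sketched in a few lines. Real provenance systems use asymmetric PKI signatures issued to publishers; the HMAC below (with a hypothetical shared key) is a simplified stand-in that runs on the standard library alone, but it illustrates the same tamper-evidence idea: any alteration of the content invalidates the tag.

```python
# Simplified sketch of content provenance: a publisher attaches a
# tamper-evident tag to AI-generated media, which consumers verify.
# Real PKI-based systems use asymmetric signatures; an HMAC with a
# hypothetical shared key stands in here for illustration.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-secret-key"

def sign_content(content: bytes) -> str:
    """Return a provenance tag binding the publisher to this content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the content has not been altered since it was signed."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"AI-generated campaign image bytes..."
tag = sign_content(original)
print(verify_content(original, tag))               # True
print(verify_content(original + b"edit", tag))     # False
```

The key design point is that verification fails on any modification, so a deepfake edited after signing can no longer masquerade as the signed original.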

 

However, AI also has the potential to streamline and democratise the voting process. AI can be used to predict voter intentions and opinions through political polling, making parties more responsive to voters’ wants and needs. Political campaigns could thus shift further from vision-oriented to poll-oriented campaigning and policy making, which arguably strengthens democracy but raises concerns about the need for long-term, structured planning. AI will also improve voting accessibility through text-to-speech and language translation services, meaning disabled voters can participate more readily, levelling the playing field. 

 

Organisations such as ERIC (the Electronic Registration Information Center) use AI to reliably differentiate and identify voters in large data sets, a task that has previously generated inconsistent and inaccurate results. AI is similarly used for matching mailed-ballot signatures and answering basic voter questions; any mismatch or hesitancy prompts human intervention to minimise errors and misinformation. Some officials are even planning to use AI in the production of election materials, although strong internal safeguards would be necessary for even remote viability. AI’s use in these labour-intensive tasks could allow sections of the workforce to be redeployed to enhance electoral security; in a way, the opportunities may balance out the risks. 
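The “match or escalate” pattern described above can be sketched as a simple routing rule: the model’s confidence is only ever allowed to accept a high-certainty match, and everything else goes to a person. The function name and threshold below are illustrative assumptions, not any real system’s API.

```python
# Sketch of the human-in-the-loop routing rule for mailed-ballot
# signature checks: an AI match score accepts only with high
# confidence; any mismatch or hesitancy is escalated to a human.
# Names and the 0.95 threshold are hypothetical.
def route_ballot(match_confidence: float, accept_at: float = 0.95) -> str:
    """Accept only high-confidence signature matches; everything
    else (uncertain or mismatched) goes to a human reviewer."""
    return "accept" if match_confidence >= accept_at else "human_review"

print(route_ballot(0.98))  # accept
print(route_ballot(0.50))  # human_review
print(route_ballot(0.05))  # human_review
```

Keeping the rejection path entirely human is what makes this workflow defensible: the AI can speed up the easy cases, but it never disenfranchises a voter on its own.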

 

At this critical juncture of technological change (perhaps as important as, if not more important than, the advent of nuclear technology), the international community must carefully consider how to regulate and control AI technologies, balancing the existential threat they pose to the fairness and security of elections against the cultural renaissance they promise if properly governed. As society begins the transition to a new AI-led world order, it is of dire importance that the spirit of 20th-century multilateralism does not fade; without global cooperation, weaponised and unbridled AI technologies would likely pose an insurmountable threat to the world’s democratic order. 

 

BIBLIOGRAPHY:

 

1 Global elections in 2024 - Statistics & Facts (Statista, 2024) 

https://www.statista.com/topics/12221/global-elections-in-2024/   

 

2 Science, the endless frontier of regulatory capture (ScienceDirect, 2024) 

https://www.sciencedirect.com/science/article/pii/S0016328721001695 

 

3 Fake Biden robocall being investigated in New Hampshire (AP News, 2024) 

https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5 

 

4 2024: the election year of deepfakes, doubts and disinformation? (Onfido, 2024) 

https://onfido.com/blog/deepfakes-and-disinformation/ 

 

5 Shiller, Robert J., “Narrative Economics: How Stories Go Viral and Drive Major Economic Events”, Princeton University Press, 2020 

 

6 2024: the election year of deepfakes, doubts and disinformation? (Onfido, 2024) 

https://onfido.com/blog/deepfakes-and-disinformation/ 

 

7 Homeland Threat Assessment 2024 (US Department of Homeland Security, 2024) 

https://www.dhs.gov/sites/default/files/2023-09/23_0913_ia_23-333-ia_u_homeland-threat-assessment-2024_508C_V6_13Sep23.pdf 

 

8 Perception Hacks and Other Potential Threats to the Election (New York Times, 2020) 

https://www.nytimes.com/2020/10/28/us/politics/2020-election-hacking.html 

 

9 How tech platforms fuel U.S. political polarization and what government can do about it (NYU Stern Center for Business and Human Rights, 2021) 

https://bhr.stern.nyu.edu/wpcontent/uploads/2024/02/NYUCBHRFuelingTheFire_FINALONLINEREVISEDSep7.pdf 

 

10 Gorman, Lindsay, and Levine, David, “Recommendations”, The ASD AI Election Security Handbook, German Marshall Fund of the United States, 2024, pp. 9–16, JSTOR 

 

11 Zuboff, Shoshana, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”, Profile Books, 2019 

 

12 Artificial Intelligence and Election Security (Brennan Center for Justice, 2023) 

https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-and-election-security 

 

13 Yamin, Khurram, Jadali, Nima, and Nazzal, Dima, “Novelty Detection for Election Fraud: A Case Study with Agent-Based Simulation Data” (John Wiley and Sons, 2023) 

https://onlinelibrary.wiley.com/doi/10.1111/exsy.13578 

 

14 AI and Cybersecurity in a Critical Election Year (Entrust, 2024) 

https://www.entrust.com/blog/2024/07/ai-and-cybersecurity-in-a-critical-election-year 

 

15 Crawford, Kate “Atlas of AI”, Yale University Press, 2021 

 

16 What Are AI Hallucinations? (IBM) 

https://www.ibm.com/topics/ai-hallucinations 

 

17 How can we stop AI-enabled threats damaging our democracy? (Turing Institute, 2024) 

https://www.turing.ac.uk/blog/how-can-we-stop-ai-enabled-threats-damaging-our-democracy 

 

18 Social media news worldwide (Statista, 2024) 

https://www.statista.com/topics/9002/social-media-news-consumption-worldwide/#topicOverview 

 

19 Using AI for Political Polling (Ash Center, 2024) 

https://ash.harvard.edu/articles/using-ai-for-political-polling/ 

 

20 Safeguards for Using Artificial Intelligence in Election Administration (Brennan Center for Justice, 2023) 

https://www.brennancenter.org/our-work/research-reports/safeguards-using-artificial-intelligence-election-administration 
