The AI Election Battle: How Artificial Intelligence Could Sway Voters in 2024 and Beyond
December 5, 2023
With major elections taking place across the world in 2024, including the US presidential election and the European Parliament elections, political campaigns will be ramping up their efforts to target and persuade voters. Artificial intelligence and automated technologies are increasingly being used to micro-target messaging and ads, potentially influencing the democratic process. Here we explore some of the ways AI could be deployed to sway voters and what safeguards may be needed.
Political campaigns have long used voter data and profiling to segment the electorate and tailor their messages. However, advances in AI and the vast amounts of personal data now available are allowing for unprecedented levels of microtargeting and personalization at scale. Sophisticated algorithms can analyze thousands of data points about individuals, from demographic information and consumer purchases to online activity and social media profiles, to infer political values, personality traits and hot-button issues.
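As an illustration of the kind of scoring such microtargeting relies on, the toy sketch below ranks issues for a single voter by summing weighted attribute matches. All attribute names, weights, and data are invented for illustration; real campaign models are learned from large datasets, not hand-written rules.

```python
# Illustrative sketch of attribute-based voter segmentation (toy data,
# hypothetical attribute names and weights -- not a real campaign model).

def issue_salience(voter, weights):
    """Score how salient an issue is predicted to be for a voter
    by summing weighted attribute matches."""
    return sum(weights.get(attr, 0.0) * value for attr, value in voter.items())

# Hypothetical data points inferred from demographics and online activity.
voter = {"suburban": 1, "age_over_50": 1, "follows_health_news": 1}

# Assumed per-issue weights such a model might have learned.
healthcare_weights = {"suburban": 0.4, "age_over_50": 0.7, "follows_health_news": 0.9}
security_weights = {"rural": 0.8, "age_over_50": 0.3}

scores = {
    "healthcare": issue_salience(voter, healthcare_weights),
    "national_security": issue_salience(voter, security_weights),
}
best_issue = max(scores, key=scores.get)
print(best_issue)  # the highest-scoring issue drives which ad variant is shown
```

Even this toy version shows why inferred attributes matter: a wrong inference (say, mislabeling a voter as "suburban") silently changes which message they see.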
Campaigns can then automatically generate and disseminate highly customized ads, memes and social media content aimed at persuading specific voter segments. The messaging may emphasize different policy issues or even use different rhetorical styles and visuals depending on the predicted interests and psychological attributes of the targeted voter. For example, one ad testing well with empathetic female voters in suburban districts may focus on healthcare policy while another aimed at conservative rural men emphasizes national security.
Such AI-powered microtargeting risks further segmenting the electorate and spreading misinformation, as campaigns seek to activate their base voters while suppressing turnout among opponents. There are also concerns about the lack of transparency around how voters are profiled and the potential inaccuracy of some of the inferred attributes that ads are targeted on. However, proponents argue that modern digital campaigning allows for more efficient communication of relevant issues to voters.
Deepfakes And Synthetic Media
Another emerging threat is the potential use of deepfakes and other synthetic media to deceive voters. Deepfake technology, which uses AI to generate highly realistic but fake images and videos, is advancing rapidly. While most current deepfakes can be detected as manipulated, that may not remain the case for long.
Campaigns or other actors could deploy deepfakes showing opposing politicians in compromising or hypocritical situations. Even if exposed as fake, such content may still negatively shape public perceptions if it initially goes viral. Synthetic audio and text, artificially generated yet indistinguishable from human-made material, also raise the risk of spreading disinformation at scale.
However, some argue deepfakes also open up possibilities for positive campaigning. For example, synthetic media could be used to create compelling political ads without actual filming, reducing costs and carbon footprints. Overall, as the technology advances, deepfakes and associated risks will likely become an increasingly important issue for election integrity and transparency.
A related emerging threat is the rise of automated propaganda: AI systems that can mass-produce fake or misleading social media posts, comments and interactions at scale without human input. Such "bots" are already used to amplify certain political narratives and drown out opposing views. But AI may soon allow for far more sophisticated automated propaganda networks that can mimic real users, generate synthetic text and images, and even debate with humans online in a human-like manner.
If left unchecked, such systems could potentially be deployed for large-scale, targeted disinformation campaigns aimed at swaying public opinion and voter sentiment before elections. They may also be used to create the false impression of widespread grassroots support for certain policies or candidates. However, developing effective detection techniques remains an ongoing challenge given the rapid advancement of AI generation capabilities. International cooperation will be needed to regulate these technologies and their use to subvert democratic processes.
AI-Powered Campaign Strategies
AI is also being leveraged by campaigns to optimize other aspects of their strategies. Machine learning algorithms can analyze massive troves of voter, issue and polling data to provide recommendations on policy positioning, campaign visits, and get-out-the-vote efforts. For example, AI may advise focusing ground efforts in districts where certain policy issues are most salient based on predictive modeling.
Campaign managers can also A/B test different slogans, images and messages using AI to optimize digital ads and fundraising appeals. Over time, such data-driven campaigning powered by AI could further min-max strategies to activate support bases while persuading undecided voters. However, it also risks “gamifying” democracy as campaigns prioritize metrics over meaningful civic engagement. There are open questions around transparency and oversight of such AI-powered optimization and whether it could undermine thoughtful policy debate.
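The A/B testing described above ultimately comes down to standard statistics. The minimal sketch below compares click-through rates for two ad variants with a two-proportion z-test; the click counts are invented for illustration and real campaign tooling layers much more on top (sequential testing, many variants, automated rollout).

```python
# Minimal sketch of an A/B significance check for two ad variants,
# using a two-proportion z-test (illustrative numbers, not real data).
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def p_value(z):
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: variant A shown 5,000 times with 120 clicks, etc.
z = two_proportion_z(clicks_a=120, n_a=5000, clicks_b=90, n_b=5000)
print(f"z = {z:.2f}, p = {p_value(z):.3f}")
```

A small p-value here would lead the campaign's tooling to shift budget toward the better-performing variant, which is precisely the metric-driven loop the paragraph above flags as a transparency concern.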
Safeguarding Elections With AI
While AI poses new challenges for election integrity, it also offers opportunities to help safeguard the democratic process. AI and machine learning are being applied to detect and limit the spread of disinformation online. For example, social networks are developing techniques to identify inauthentic accounts and networks spreading propaganda at scale. AI can also help fact-check images, videos and text for signs of synthetic manipulation or deception.
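To make the idea of identifying inauthentic accounts concrete, here is a deliberately simple rule-based score over behavioral features. The features and thresholds are invented for illustration; production platform systems use learned models over far richer signals, but the flag-and-review pattern is similar.

```python
# Hedged sketch: a rule-based score for flagging possibly inauthentic
# accounts from simple behavioral features (thresholds are invented
# for illustration; real platform systems use learned models).

def bot_score(account):
    """Return a suspicion score; higher means more bot-like behavior."""
    score = 0
    if account["posts_per_day"] > 100:   # inhuman posting volume
        score += 2
    if account["followers"] < 10 and account["following"] > 1000:
        score += 1                       # lopsided follow graph
    if account["account_age_days"] < 30:  # newly created account
        score += 1
    return score

suspect = {"posts_per_day": 250, "followers": 3,
           "following": 4000, "account_age_days": 7}
print(bot_score(suspect))  # accounts above a threshold get queued for human review
```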
Blockchain technology combined with AI shows promise for more transparent and verifiable online voting. Several jurisdictions are experimenting with “verifiable voting” systems that allow remote electronic voting while preserving ballot secrecy and providing tools for auditing results. AI may help automatically flag potential anomalies or inconsistencies in voter records and turnout patterns for investigation.
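The anomaly-flagging idea can be sketched with a basic statistical outlier test: compare each precinct's turnout rate to the group mean and flag large deviations for human investigation. The turnout figures and the 1.5-sigma threshold below are synthetic and illustrative, not drawn from any real election.

```python
# Illustrative anomaly flag for precinct turnout rates using a simple
# z-score outlier test (synthetic numbers, not real election data).
from statistics import mean, stdev

turnout = {"precinct_1": 0.62, "precinct_2": 0.58, "precinct_3": 0.61,
           "precinct_4": 0.97, "precinct_5": 0.60}  # precinct_4 looks off

mu, sigma = mean(turnout.values()), stdev(turnout.values())
flagged = [p for p, t in turnout.items() if abs(t - mu) / sigma > 1.5]
print(flagged)  # flagged precincts are candidates for a manual audit
```

Flagging is only a starting point: an outlier might reflect fraud, a data-entry error, or a genuinely high-engagement precinct, which is why the result feeds investigation rather than automatic action.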
Looking ahead to 2024 and beyond, election authorities, technology companies, researchers and policymakers will need to work together to develop robust governance, oversight and safeguarding measures keeping pace with the evolving threats and opportunities posed by AI. International cooperation will be vital to regulate new technologies and curb their misuse to undermine open societies and democratic values across borders. With proactive measures and responsible development, AI could help strengthen rather than weaken election integrity in the digital age.