Elections Spotlight: Generative AI and Deep Fakes

Executive Summary

Since it burst onto the scene a short time ago, AI has impacted many facets of our daily lives. In this article, we examine the potential impact of recent advancements in generative AI on upcoming democratic elections. In particular, we look at two primary shifts: AI’s ability to craft persuasive, tailored texts for numerous individually targeted dialogues on a massive scale, and its proficiency in generating credible audio-visual content at low cost. Many observers are concerned that these developments risk detaching public discourse from factual, ideological debate, potentially undermining the very essence of democratic elections.

Highlights:

  • We discuss the potential negative effects of AI-generated propaganda, as well as measures that can be taken to mitigate them. This requires the cooperation of multiple stakeholders, including AI providers, governments, and society at large.
  • We illustrate how overreacting to the threat can make the problem worse, drawing a parallel to past concerns over voting machine vulnerabilities.
  • We discuss how the reaction to AI’s distortion of political dialogue could itself hinder public engagement in political discussion. What is deteriorating is our trust in public discourse, and unbalanced warnings might erode this trust further. This is the real danger: before new technology disengages us from reality, the fears and warnings about it might yield a hazardous outcome, namely a loss of trust in the democratic apparatus.

Figure 1 – AI generated image from a GOP advertisement on YouTube illustrating dystopian conditions following Biden’s re-election.

US 2020 Presidential Elections and Concerns over “Election Hacking”

Democratic elections rely on a few key elements: free public discussion (a marketplace of ideas where conflicting views are confronted and weighed), a fair voting process, and a guaranteed peaceful transition of power in accordance with the ballot results. Each of these elements has faced challenges in recent years.

In the lead-up to the U.S. 2020 elections, there was heightened focus on the reliability and resilience of voting machines. Numerous conferences and public vulnerability tests took center stage. DEF CON’s “Voting Village” initiative provided a platform for hackers to challenge the security of voting machines from across the U.S., with detected vulnerabilities often making headlines and receiving extensive media coverage. In response to rising concerns about potential electoral hacking, countries like the Netherlands, which had long embraced digital voting, reverted to paper-based manual processes. Meanwhile, in the U.S., the federal government allocated budgets in the billions to enhance the infrastructure of state voting machines. Critics, however, contended that such tests are conducted under unrealistic conditions. Notably, as of this writing, there have been no reports anywhere in the world of attempts to hack voting machines with the intention of manipulating election results.

The reason may lie in the complexity involved. Manipulating election results is not a simple undertaking: it requires breaching multiple distributed offline systems and, crucially, doing so without detection. Any raised suspicion could immediately invalidate the count, as paper backups offer a reliable method for manually cross-checking the integrity of the vote. These challenges may explain why such an attack has, to the best of our knowledge, never actually been attempted. But this does not mean that the democratic system was not under attack.

It is now evident that the primary goal of attacks on democratic election systems was not necessarily the promotion of specific candidates, but the undermining of the democratic system itself. Influence campaigns aim to exacerbate divisions within the targeted society, often by supporting extremists on all sides, and to erode trust in the credibility of the voting system. Continuous public discussion of, and concern over, the reliability of voting machines serves exactly this purpose. Weakening the democratic system aids both the external and internal agendas of the attacking non-democratic forces: on the international front, they weaken their adversaries; on the domestic front, they gain arguments for their own campaigns that emphasize the perceived flaws of Western democracy, thereby diminishing the local population’s aspirations for democratization. The underlying message is, “Look at those hypocrites in the U.S. with their flawed democracy – that is not our role model.”

In retrospect, the continuous efforts to uncover vulnerabilities in electronic voting machines, which were mostly meant to bolster their credibility and strengthen the democratic system, may have inadvertently had the opposite effect: they may have eroded public confidence in the trustworthiness of these systems and the legitimacy of election results. While assessing the robustness of voting machines in artificial conditions, such as the unrestricted access provided at events like the “Voting Village”, can be beneficial for conventions, media exposure, and the cybersecurity industry, there are reservations. Many question whether the identified vulnerabilities are truly exploitable, whether they are even targeted by adversaries, and whether the potential risks they pose outweigh the resulting decrease in public trust in election outcomes.

This perspective may be what led CISA (the Cybersecurity and Infrastructure Security Agency) to rethink its communication strategy. Instead of persistently drawing attention to the vulnerabilities and shortcomings of voting machines, CISA chose to focus on positive messages throughout 2020, asserting the US voting system’s security and dependability. This shift in CISA’s stance ultimately led President Trump to dismiss its director, Chris Krebs. While Krebs proclaimed these elections the most secure in American history, President Trump vehemently disputed the results, culminating in the January 6 events at the Capitol and interference with the peaceful transition of power. The process has come full circle, with DEF CON “Voting Village” officials now defining their mission, in light of the “Big Lie” theories, as a fight “against conspiracy theories, misinformation, claims of hacks that didn’t happen, claims of weirdness that didn’t happen”.

FOX Network recently agreed to pay $787 million to settle a defamation lawsuit brought by Dominion, a voting machine manufacturer. FOX was sued for knowingly airing false claims that Dominion machines altered the election results in favor of President Biden.

Artificial Intelligence and Automated Texts

As the next cycle of the US Presidential elections in 2024 approaches, there is mounting apprehension that advancements in AI can be used to introduce novel disruptions to the democratic process. Previous technology was primarily used in influence campaigns to curate and match specific content with targeted audiences. Today, the emerging wave of technology is increasingly adept at autonomously crafting tailored content, elevating concerns about its influence on public discourse.

The Cambridge Analytica scandal that erupted in 2018 centered on the unauthorized use of social media users’ information. This information was used to build voter profiles and deliver content that aligned closely with the targets’ worldviews, making it a more effective tool of persuasion. The bottleneck in these influence operations was the cost of content development, which required human content creators proficient in the targeted country’s language, culture, politics, and psychology. New AI technology bypasses this bottleneck by offering cost-effective personalized content. Since the launch of ChatGPT in late 2022, it has become easy to automatically generate personalized text: you can set specific conversation parameters such as age, gender, and geographic location, along with objectives, which are then fed into the API to produce impressive and convincing personalized text aimed at achieving the predefined objective. The output can be used to generate full conversations between the AI and the targeted individual. Similar chatbots are already in commercial use, for instance as efficient customer service tools.
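
To make the mechanism concrete, the following is a minimal sketch of such parameterized generation, assuming the OpenAI Python SDK (openai>=1.0). The persona fields, prompt wording, model name, and the benign customer-service objective are illustrative assumptions of ours, not a reconstruction of any actual influence tooling.

```python
# Minimal sketch: persona parameters folded into a system prompt.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# environment variable; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_message(age: int, gender: str, location: str, objective: str) -> str:
    """Compose a system prompt from audience parameters and request one reply."""
    system_prompt = (
        f"You are writing to a {age}-year-old {gender} reader in {location}. "
        f"Write a short, friendly message whose goal is: {objective}."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "system", "content": system_prompt}],
    )
    return response.choices[0].message.content

# Example: the same pipeline a commercial customer-service bot might use.
print(generate_message(34, "female", "Austin, TX",
                       "remind the customer to renew their subscription"))
```

The point is not any single call but the loop around it: the same function, driven from a profile database, could emit thousands of individually tailored messages at negligible marginal cost.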

The real breakthrough lies in the ability to produce this personalized content on a massive scale at a low cost. The quality of the text has improved to such an extent that historian Prof. Yuval Noah Harari calls it “hacking humans”, referring to AI’s capacity to anticipate and manipulate our feelings, thoughts, and choices beyond our own self-understanding.

In the context of elections, researchers proposed a thought experiment in which AI systems, which they term “Clogger” and “Dogger”, operate on both sides of the political map to maximize the use of micro-targeting and behavior-manipulation techniques. They warn that, as a result, future political discourse might lose its significance. The discourse between bots and individual voters may no longer focus on relevant political issues at all; instead, it can be used to divert attention away from an opponent’s message or other important topics of discussion. It is also important to note that current AI systems are not necessarily accurate, but will generate content that matches the objectives they were assigned. In such a scenario, the winners of an election would have prevailed not because their political stance or message resonated with voters, but because of their financial ability to leverage this superior system for success.

In this way, without resorting to censorship or force, the primary requisite for democratic elections, a free marketplace of opinions, could be destroyed. As the newly elected officials were not voted in for their ideological platform, it stands to reason that they would not be committed to implementing its policies. They might even continue to employ “Clogger” or “Dogger” to make policy decisions that maximize their chances of re-election.

AI and Fabricated Audio-Visual Artifacts

One of the most significant implications of emerging AI technology is its capacity to fabricate deceptive voice and video recordings. This advancement threatens to blur the line between genuine and falsified accounts of events. With such tools becoming increasingly accessible and affordable, there is growing concern that distinguishing authentic reports from fabricated deep fakes will become nearly impossible, especially in the political arena. While initially used for scams such as impersonating individuals for financial gain, this technology has already been weaponized politically. For example, during Chicago’s recent municipal elections, a doctored audio clip emerged, allegedly of candidate Paul Vallas, in which he appeared to endorse police aggression. Other such fabrications, purporting to show Elizabeth Warren or Ukraine’s President Volodymyr Zelensky, were also found online. An AI-crafted video from the Republican National Committee, depicting fictitious dystopian scenes under Joe Biden’s rule, raises a question regarding its novelty: how is it different from earlier Hollywood fabrications and their use in political campaigns? The real change is the affordability and availability of these tools, which heightens worries about their misuse in politics.

These recent advancements in AI, namely the ability to create personalized texts and engage in micro-targeted dialogues that could render discussions devoid of factual grounding, together with the creation and spread of deep fakes, have far-reaching implications for democratic processes. They negate the first requirement of democratic elections: the existence of a free, meaningful exchange of ideas and opinions. They strike at the very heart of human communication and our ability to have open and meaningful discussions prior to elections.

Figure 4 – AI fabricated images: The pope wearing a puffer coat and former US President Trump resisting arrest. Sources: Forbes, BBC

Tackling the Immediate AI Challenge

It is worth emphasizing that, unlike previous technological challenges, for which the solution was technology-based, these new developments challenge something more fundamental: our relationship to truth itself. We are now grappling with complex issues that are more philosophical and moral than technological: epistemological questions about the nature of knowledge, the reliability of data sources, the reputation of public figures, and the very way we shape our understanding of reality. These challenges are reshaping the landscape of political discourse, calling into question the authenticity of information, and demanding a fresh perspective on the core values of communication and integrity in the democratic process.

To tackle these challenges effectively, we need the cooperation of multiple stakeholders. First, AI providers must take proactive measures to prevent the abuse of their platforms, and regulators need to reassess and update guidelines. As a society, we must engage in deep reflection on the nuances of freedom of expression and the way we conduct ourselves in the digital era.

The primary condition for evaluating information is understanding the context in which it appears: the information source, and the speaker’s identity and motivations. Some proposals therefore emphasize the necessity of revealing these details together with the publication. When it comes to conversations with bots, there is a demand to declare that the speaker is an AI chatbot. There is also a call to clarify the AI’s intentions, possibly by revealing the prompt given to the AI agent. Fung and Lessig, the authors of the “Clogger” thought experiment, propose a disclosure in the style of: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%”. Yuval Harari raises a related question about the approval of AI participation in political discourse: the First Amendment guarantees freedom of expression to humans; is this right also reserved for AI? However, as the integration of AI into our thought processes deepens (we tried using ChatGPT in the creation of this article, with very limited success), it becomes increasingly challenging to distinguish the origins of a text or idea. Enforcement through regulation is critical, as relying on individuals’ capacity to recognize source reliability has been shown to be ineffective.
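
As a rough illustration of what such disclosure could look like in practice, here is a minimal sketch that bundles a generated message with machine-readable origin metadata, in the spirit of the Fung and Lessig proposal. The field names and wrapping scheme are our own assumptions; no standard format is implied.

```python
# Minimal sketch: attach a machine-readable disclosure record to an
# AI-generated message. Field names and structure are our own assumptions.
import json
from datetime import datetime, timezone

def wrap_with_disclosure(message: str, sponsor: str, model: str, prompt: str) -> str:
    """Bundle a generated message with metadata identifying its origin."""
    record = {
        "message": message,
        "disclosure": {
            "generated_by": model,  # declare that the speaker is an AI
            "sponsor": sponsor,     # who is behind the message
            "prompt": prompt,       # reveal the instruction given to the AI
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)

print(wrap_with_disclosure(
    "Sam Jones has a plan for lower energy bills.",
    sponsor="Sam Jones for Congress Committee",
    model="example-llm",
    prompt="Write a short upbeat message about energy policy.",
))
```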

In the absence of a ban on AI participation in public discourse, it is evident that responsibility for AI-produced content falls on its users and developers. The precise boundary of that responsibility will be clarified in the courts, through a dialogue between regulators and those who interpret the regulations.

Another tension exists between the right to freedom of expression and the pursuit of truth. Until now, freedom of expression has been prioritized. From the Ten Commandments to American law, lying is not generally considered a crime. Tom Wheeler, a former FCC chairman, highlighted this sad reality: deception is permissible, especially in politics. One example is a 2012 decision by the U.S. Supreme Court that nullified a law prohibiting the false representation of military honors. A local official in California was caught claiming to have received a medal he had not, but the Supreme Court ruled that the First Amendment takes precedence; in practice, one cannot prohibit lying in public discourse. Given the risk of blurring the boundary between documentation and fabrication, it is worth reconsidering the balance between these two values.

Should someone who produces an audio file, image, or video intended to fabricate a false representation of reality bear criminal responsibility? The importance of trustworthiness in public discourse is likely to become even more central in the future. In a reality where forgeries are increasingly difficult to identify, the source and context become paramount, and the credibility of a person or information source becomes ever more crucial. When the truth is endangered, deterrence is created through decreased tolerance for lies and deceivers. Politicians who once prided themselves on openly lying may soon be viewed differently. Credibility and a trustworthy reputation become valued assets, and it is critical that there be political accountability for lying.

Media and Technological Solutions

In an era when traditional media grapples with the flourishing popularity of social media, there could be a pivot back toward the former. As people seek reliable sources, they may turn to mainstream outlets that are more committed to diligent fact-checking, editorial standards, and accountability. Projects that aim to provide broader context to news items will become increasingly important.

Multiple initiatives focusing on source identity might be the start of a new trend. WorldCoin, introduced by Sam Altman of OpenAI, is a digital identification platform whose goal is to give people a way to verify that they are interacting with real humans. Despite the criticism directed at Twitter, its promotion of the blue checkmark and other verification mechanisms are steps in the same direction. Increasing the cost of signals to make them harder to fake, and thereby more reliable, could be an applicable solution here too: a monthly fee for Twitter constitutes such a cost and could hinder the creation of networks of thousands of bots. A preference for real-life events over virtual ones, as we have already seen in the popularity of large-scale conventions in election campaigns, could be considered another way of increasing signal cost.
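
One generic way to raise the cost of faking a source is a digital signature: the publisher signs content with a long-lived private key, and readers verify it against the publisher’s known public key. The sketch below uses Python’s cryptography package with Ed25519 keys; it is our own generic illustration, not a description of how WorldCoin or Twitter actually work.

```python
# Minimal sketch: source verification via digital signatures, using the
# 'cryptography' package. The publisher/reader workflow is illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: a long-lived key pair whose public half readers already trust.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Candidate X gave a speech in Springfield on Tuesday."
signature = private_key.sign(article)

# Reader side: verification fails for tampered or misattributed content.
try:
    public_key.verify(signature, article)
    print("Signature valid: the content comes from the claimed source.")
except InvalidSignature:
    print("Signature invalid: do not trust the attribution.")
```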

Exaggerated Warnings Could Make Things Worse

So far, we have discussed the potential dangers stemming from AI capabilities and ways to mitigate them. However, there is a possibility that repeated warnings and an emphasis on AI’s dangers might exacerbate the problem. Just as the overstated focus on the vulnerabilities of voting machines may have inadvertently weakened democratic resilience by eroding public trust in voting mechanisms, we could be facing a similar peril with AI.

Conclusion

It’s essential to keep things in perspective and remember that we have always grappled with incomplete information about reality, and that manipulations, rhetorical twists, and outright lies did not originate in the AI era. Edited portraits existed even in Abraham Lincoln’s time, and a 2023 news reader’s perception of current events is quite likely no worse than in earlier times, when “news” was seldom up to date and verifying facts was far more challenging. A review of actual cases of AI misuse suggests that it is still a long way from massively influencing our perception of reality and political discourse.

What is deteriorating is our trust in public discourse, and unbalanced warnings might erode this trust further. In our opinion, this could prove to be the real immediate danger: even before new technology disengages us from reality, the fears and warnings themselves can create a loss of trust in the democratic apparatus. It’s crucial to stress that the field of public discourse, which never guaranteed absolute objectivity and fidelity to the truth, isn’t broken now either. However, eroding trust in electoral mechanisms and their outcomes may increasingly erode the losing side’s willingness to concede defeat, which in turn can endanger the peaceful transition of power.
