AI’s Impact on the 2024 US Elections

Generative AI is expected to play a significant role in the upcoming 2024 US elections, raising concerns about its potential consequences. Whether through chatbots or deepfakes, the technology could create a chaotic and confusing political landscape.

AI Regulation Efforts Hindered by Politics

Nathan Lambert, a machine learning researcher at the Allen Institute for AI and co-host of The Retort AI podcast with researcher Thomas Krendl Gilbert, believes that political factors will slow down AI regulation efforts in the US. He stated, “I don’t expect AI regulation to come in the US [in 2024] given that it’s an election year and it’s a pretty hot topic.” The outcome of the US election will greatly influence the narrative and determine the positions of candidates regarding AI regulation.

Potential Challenges with AI-generated Content

As the use of AI tools like ChatGPT and DALL-E becomes more prevalent in political campaigns, concerns regarding the spread of false and misleading information have been raised. A recent poll conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that 58% of adults believe AI tools will contribute to the dissemination of misinformation during the 2024 elections.

Instances of AI-generated content being used in political campaigns have already been observed. For example, Florida Governor Ron DeSantis's campaign used AI-generated images and audio of Donald Trump, as reported by ABC News. This raises concerns about the potential for such technologies to manipulate public perception.

Responses from Tech Companies

In response to these concerns, some Big Tech companies are taking steps to address the potential misuse of AI tools. Google announced plans to restrict the election-related prompts that its chatbot Bard and its Search Generative Experience will respond to in the run-up to the US presidential election. Meta, which owns Facebook, has stated that political campaigns will be barred from using its new AI advertising products, and that advertisers must disclose the use of AI tools to create or alter election ads on Facebook and Instagram.

OpenAI has implemented changes to improve the detection and elimination of disinformation and offensive content from its AI products, particularly ChatGPT. This comes in light of growing worries about the spread of disinformation in the upcoming elections.

However, challenges remain as Microsoft’s Copilot (originally Bing Chat) has been found to provide conspiracy theories, misinformation, and outdated or incorrect information. A recent report from Wired suggests that these issues with Copilot are systemic.

The Impact on Democracy

Nathan Lambert suggests that keeping generative AI information fully sanitized may be an impossible task in the context of the election narrative. Alicia Solow-Niederman, associate professor of law at George Washington University Law School, warns that the consequences of AI tools, whether through misinformation or overt disinformation campaigns, could have serious implications for democracy. She refers to the concept of “the liar’s dividend,” where the erosion of trust and the inability to distinguish truth from falsehoods undermines the electoral system and the ability to self-govern.
