Voters in New Hampshire were discouraged from participating in the state’s primary election earlier this year by a phone call that sounded like President Joe Biden. But the voice on the other end was not actually Biden’s; it was a robocall designed to impersonate the president and fool listeners.
The development of artificial intelligence (AI) has made it easier than ever to produce videos, photos, and audio recordings altered to look and sound real. As elections draw near, this emerging technology threatens to spread misinformation online, potentially influencing public opinion, trust, and behavior within our democratic system.
According to Mindy Romero, director of the Center for Inclusive Democracy (CID) at the USC Price School of Public Policy, democracies rely on informed citizens and residents who participate as fully as possible and express their opinions and needs through the ballot box. The fear is that a decline in public confidence in democratic institutions could undermine election procedures, deepen polarization and instability, and serve as cover for foreign meddling in politics.
Romero recently hosted a webinar on the topic, “Elections in the Age of AI.” During the session, experts discussed how to spot AI-generated misinformation and how governments can regulate the emerging technology.
The panel featured Mekela Panditharatne, attorney for the Brennan Center’s Elections & Government Program; Jonathan Mehta Stein, executive director of California Common Cause; and David Evan Harris, Chancellor’s Public Scholar at UC Berkeley.
Here are some pointers and suggested policies to counteract misinformation created by AI:
How to spot and ignore misinformation
- Be skeptical. Romero pointed out that being cautious about political news in general is not a bad thing. It should raise red flags if news seems off, is sensationalized, or stirs up strong feelings.
- Check several different sources. Stein advised holding off on sharing if you come across a picture or video that unfairly maligns a politician, supports a conspiracy theory, or seems to confirm an argument a little too perfectly.
- “We live in a time where you have to go double-check information instead of believing it, retweeting it, or sharing it,” he remarked. Look it up on Google, check whether other outlets are covering it, and see whether it has been debunked.
- Use reliable news sources. Consuming information from trustworthy outlets is one way to fight misinformation. Romero also advised readers to determine whether a piece is news or opinion.
It can be challenging for individuals to defend themselves against false information. “It’s a lot of work,” Romero said. That is why much of the conversation on this topic turns to how policymakers and government can support communities through action.
What policymakers can do
Policymakers in the United States should take a cue from Europe as they attempt to combat AI-generated misinformation. According to Harris, the European Union’s Digital Services Act requires technology companies operating large online platforms to assess the potential harm their products may do to society, particularly to democracy and elections.
The companies must then propose risk-mitigation measures and arrange for an independent auditor to review both the risk assessment and the mitigation plans, Harris said. He added that European law also requires platforms to give independent researchers access to their data so they can study how those products affect societal concerns such as democracy and elections.
According to Stein, numerous bills in California seek to regulate AI. One significant proposal would require generative AI companies to embed provenance data in the digital content they produce. Internet users could then determine which photos, videos, and audio were produced using artificial intelligence (AI), as well as when and by whom. The bill would also require social media companies to use that data to identify AI fakes.
According to Stein, when you’re browsing Twitter, Facebook, or Instagram, anything that was AI-generated would have to bear a small label indicating as much.
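The labeling scheme Stein describes can be illustrated with a short sketch: a provenance record attached at generation time, and a platform-side check that reads it back to decide whether an “AI generated” badge is needed. Everything below (the function names, and field names such as "ai_generated") is a hypothetical illustration, not the actual mechanism in the California bill or in any real standard such as C2PA.

```python
import json  # used only to show the record serializes cleanly


def label_ai_content(media: bytes, generator: str, created_at: str) -> dict:
    """Attach an illustrative provenance record to a piece of generated media,
    recording that it is AI-generated, by whom, and when."""
    return {
        "media_hex": media.hex(),
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "created_at": created_at,
        },
    }


def needs_ai_label(record: dict) -> bool:
    """Platform-side check: should this item carry an 'AI generated' badge?"""
    return bool(record.get("provenance", {}).get("ai_generated", False))


record = label_ai_content(b"\x89PNG...", "example-image-model", "2024-05-01T12:00:00Z")
print(json.dumps(record["provenance"]))
print(needs_ai_label(record))  # True
```

In practice, real provenance schemes embed this kind of record inside the media file itself (or sign it cryptographically) so it survives reposting; the sketch only shows the two halves of the idea, writing the label at generation time and reading it at display time.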
Panditharatne said federal legislation being considered in Congress would regulate the use of artificial intelligence (AI) in political advertisements and provide guidance for local election offices on AI’s implications for disinformation, cybersecurity, and election administration. Election officials may also find the federal government’s published guidelines on generative AI risk management helpful.
However, Panditharatne said, no guidance has yet been issued specifically for election administrators’ own use of AI, a gap that needs to be filled.