Musk shared a modified video with Harris’ voice, raising AI in politics concerns


With Election Day just three months away, a manipulated video that purports to show Vice President Kamala Harris saying things she did not say is raising concerns about artificial intelligence's power to mislead.

The video went viral after tech entrepreneur Elon Musk shared it on his platform X on Friday night without noting that it was originally released as parody.

The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released last week to launch her campaign, but it swaps out the voice-over audio for another voice that convincingly impersonates her.

The voice in the video declares, "I, Kamala Harris, am your Democratic candidate for president because Joe Biden finally exposed his senility at the debate." It asserts that Harris is a "diversity hire" because she is a woman and a person of color, and claims she has no experience "running the country." The video retains the "Harris for President" branding and also incorporates some authentic past clips of Harris.

“We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump,” stated Mia Ehrenberg, a spokesperson for the Harris campaign, in an email to The Associated Press.

As the US approaches the presidential election, the widely circulated video illustrates how lifelike AI-generated images, videos, and audio clips have been used both to mock and to mislead about politics. It also shows that, even as high-quality AI tools have become far more accessible, there has been no significant federal action so far to restrict their use, leaving rules on AI in politics largely to states and social media platforms.

The video also raises questions about how best to handle content that blurs the line between legitimate and inappropriate uses of AI, particularly satire.

The YouTuber who originally posted the video, Mr. Reagan, has disclosed on both X and YouTube that the manipulated video is a parody. But Musk's post, which the platform says has been viewed more than 123 million times, carries only the caption "This is amazing" and a laughing emoji.

Platform-savvy X users can find the disclosure by clicking through Musk's post to the original user's post, but nothing in Musk's caption directs them to do so.

As of Sunday afternoon, Musk's post had not received a label, despite some users suggesting one through X's "community note" feature, which adds context to posts. Some online questioned whether the post violates X's policy, which says users "may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm."

The policy makes an exception for memes and satire as long as they do not create “significant confusion about the authenticity of the media.”

Earlier this month, Musk endorsed Republican nominee and former President Donald Trump. Neither Mr. Reagan nor Musk immediately responded to emailed requests for comment on Sunday.

Two experts in AI-generated media reviewed the audio of the fake ad and confirmed that much of it was produced with AI.

One of them, Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video demonstrates the power of generative AI and deepfakes.

"The voice generated by AI is very good," he wrote in an email. "The video is so much more powerful when the words are in Vice President Harris' voice, even though most people won't believe it."

He said generative AI companies that make voice cloning and other AI tools available to the public should do more to ensure their services aren't misused in ways that could harm people or democracy.

Rob Weissman, co-president of Public Citizen, disagreed with Farid, saying he believed many people would be fooled by the video.

"I don't think that's obviously a joke," Weissman said in an interview. "I'm sure most people who see it don't think it's a joke. The quality isn't great, but it's good enough. And the reason most people will take it seriously is that it reinforces themes already in circulation about her."

The video is "the kind of thing that we've been warning about," said Weissman, whose organization has pushed for Congress, federal agencies, and states to regulate generative AI.

Other generative AI deepfakes, both in the US and abroad, have sought to sway voters with humor, misinformation, or both. In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig the election and raise beer prices just days before the vote. In Louisiana in 2022, a political action committee's satirical ad superimposed the face of a mayoral candidate onto an actor playing an underachieving high school student.

Congress has yet to approve laws on AI in politics, and federal agencies have taken only limited steps, leaving the majority of existing U.S. regulation to the states. According to the National Conference of State Legislatures, more than one-third of states have enacted legislation governing the use of artificial intelligence in campaigns and elections.

Beyond X, other social media companies also have policies on synthetic and manipulated media shared on their platforms. YouTube, for example, requires users to disclose whether they used generative AI to create videos or risk suspension.
