Microsoft wants Congress to ban AI-generated deepfake fraud

Microsoft is calling on Congress to regulate the use of AI-generated deepfakes to protect against fraud, manipulation, and abuse. Microsoft Vice Chair and President Brad Smith says lawmakers must act urgently to safeguard elections, protect seniors from fraud, and shield children from exploitation.

In a blog post, Smith writes that while the tech industry and nonprofit groups have recently taken steps to address the problem, it has become clear that laws will also need to evolve to combat deepfake fraud. He argues that one of the most important things the US can do is pass a comprehensive deepfake fraud statute to stop cybercriminals from using this technology to steal from everyday Americans.

Microsoft wants a “deepfake fraud statute” that would give law enforcement a legal framework to prosecute AI-generated scams and fraud. Smith also says Congress should ensure that existing federal and state laws on child sexual exploitation and abuse, and on non-consensual intimate imagery, are updated to cover AI-generated content.

In a recent move against sexually explicit deepfakes, the Senate passed a bill that lets victims of non-consensual sexually explicit AI deepfakes sue their creators for damages. The bill came months after reports that middle and high school students were making explicit images of their female classmates, and after trolls flooded X with sexually explicit AI-generated images of Taylor Swift.

A loophole in Microsoft’s own Designer AI image creator let users generate explicit images of celebrities, including Taylor Swift, prompting the company to add more safety guardrails to its AI products. Smith says the private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI.

Even though the FCC has already banned robocalls that use AI-generated voices, generative AI makes it easy to create fake audio, images, and video, and we’re already seeing it in the run-up to the 2024 presidential election. Just last week, Elon Musk shared a deepfake video parodying Vice President Kamala Harris on X, a post that appeared to violate the platform’s own policies against synthetic and manipulated media.

Microsoft wants posts like Musk’s to be clearly labeled as deepfakes. Smith urges Congress to require providers of AI systems to use state-of-the-art provenance tooling to label synthetic content. He says this is essential for building trust in the information ecosystem, since it helps the public understand whether a piece of content is AI-generated or manipulated.
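To make the idea concrete, here is a minimal, illustrative sketch in Python of what provenance labeling involves: binding an “AI-generated” claim to a file’s exact bytes and signing it so tampering is detectable. This is a toy, not any shipping system; real provenance standards such as C2PA Content Credentials (a coalition Microsoft co-founded) embed certificate-backed, cryptographically signed manifests inside the media file itself, whereas this sketch uses a shared-secret HMAC and a separate JSON manifest.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Toy signing key for illustration only. A real provenance scheme
# (e.g., C2PA) uses X.509 certificates and asymmetric signatures,
# not a shared secret like this.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(media_path: str, generator: str) -> dict:
    """Build a signed manifest declaring that a media file is AI-generated."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    claims = {
        "asset_sha256": digest,   # binds the label to these exact bytes
        "generator": generator,   # e.g., the model or tool that made the asset
        "ai_generated": True,     # the disclosure label lawmakers would require
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_manifest(media_path: str, manifest: dict) -> bool:
    """Return True only if the manifest is untampered and matches the file."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was altered
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return digest == manifest["claims"]["asset_sha256"]  # file re-edited?
```

The useful property is that any edit to the file after labeling breaks the hash check, which is what would let platforms and users flag content as manipulated.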
