Demands to withdraw Sora grow over deepfake dangers

Artificial intelligence video-generation systems like OpenAI’s Sora 2 have the tech industry advancing quickly and disrupting things once more, this time with regard to humanity’s shared reality and people’s ownership of their likeness before and after death.

The typical Sora video, created with OpenAI’s software and shared on Facebook, Instagram, X, and TikTok, is meant to be entertaining enough for you to click and share. Queen Elizabeth II may be rapping, or it could be something more commonplace and plausible. One popular Sora genre is fake doorbell camera footage that shows something a little eerie, such as a boa constrictor on the doorstep or an alligator approaching an unconcerned toddler, and concludes with a mild shock, like a grandmother yelling as she hits the animal with a broom.

However, a growing number of professionals, academics, and advocacy groups are warning about the risks of letting people make AI videos of almost anything they can type into a prompt, which could spread realistic deepfakes and nonconsensual imagery amid a sea of less dangerous “AI slop.” Following a backlash from family estates and an actors’ union, OpenAI has halted AI depictions of prominent figures, including Michael Jackson, Martin Luther King Jr., and Mister Rogers, doing bizarre things.

In a letter to the company and CEO Sam Altman on Tuesday, the nonprofit Public Citizen called on OpenAI to take Sora 2 off the market, claiming that the app’s hurried release to get ahead of rivals demonstrates a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails.” The letter says Sora 2 shows a “reckless disregard” for the stability of democracy, people’s rights to their own likeness, and product safety. The group also sent the letter to the U.S. Congress.

OpenAI did not respond to requests for comment on Tuesday.

“The threat to democracy is our greatest worry,” J.B. Branch, a Public Citizen tech policy advocate, said in an interview. “I believe that people will no longer be able to trust what they see in the future. Additionally, we are beginning to witness political tactics where people recall the first picture or video that is published.”

Branch, the author of Tuesday’s letter, also sees wider privacy problems that disproportionately affect disadvantaged communities online.

OpenAI prohibits nudity, but Branch says that “women are seeing themselves being harassed online” in other ways, such as through fetishized niche material that gets past the app’s limits. The news outlet 404 Media reported on Friday on a wave of Sora-created videos of women being strangled.

More than a month ago, OpenAI unveiled its new Sora app for iPhones. It debuted on Android phones last week in the United States, Canada, and numerous Asian nations, including Japan and South Korea.

Hollywood and other entertainment interests, such as the Japanese manga industry, have voiced the loudest opposition. Just days after the app’s introduction, OpenAI announced its first major revisions, saying that “overmoderation is super frustrating” for users but that it is vital to be cautious “while the world is still adjusting to this new technology.”

Following that, on Oct. 16, the company publicly announced agreements with Martin Luther King Jr.’s family to prevent “disrespectful depictions” of the civil rights leader while it worked on better safeguards, and another on Oct. 20 with “Breaking Bad” actor Bryan Cranston, the SAG-AFTRA union, and talent agencies.

“That’s fine if you’re famous,” Branch said. “It’s merely a tendency for OpenAI to respond to the fury of a small population. They are prepared to reveal something and then apologize. However, many of these difficulties may be addressed by design decisions prior to release.”

Similar criticisms have been leveled at OpenAI’s flagship product, ChatGPT. Seven additional complaints filed in California courts this week allege that the chatbot caused harmful delusions and suicidal thoughts in people who had no prior mental health problems. The complaints, filed on behalf of six adults and one adolescent by the Tech Justice Law Project and Social Media Victims Law Center, allege that OpenAI knowingly released GPT-4o prematurely last year despite internal warnings that it was psychologically manipulative and dangerously sycophantic. Four of the victims died by suicide.

Public Citizen was not involved in the litigation, but Branch sees similarities in Sora’s rapid release.

He said the company is “pushing the pedal to the floor without regard for harm. Much of this seems foreseeable. But they’d rather get a product out there, get people to download it, and get addicted to it than do the proper thing and stress-test these things ahead of time and worry about the fate of everyday consumers.”

Last week, OpenAI responded to objections from a Japanese trade group representing renowned animators such as Hayao Miyazaki’s Studio Ghibli, as well as video game developers such as Bandai Namco and Square Enix. OpenAI said that many anime fans want to engage with their favorite characters, but that the company has also put safeguards in place to prevent well-known characters from being generated without the permission of the copyright holders.

“We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people use Sora 2, including in Japan, where cultural and creative industries are highly valued,” OpenAI stated in response to the trade group’s letter last week.