The developer behind Dean.Bot, a ChatGPT-powered chatbot designed to aid Minnesota Representative Dean Phillips' 2024 presidential campaign, was suspended by OpenAI late on Friday night for breaking the company's new rules prohibiting the use of ChatGPT for political campaigning and lobbying, according to The Washington Post.
Dean.Bot used an artificial intelligence (AI) version of Phillips' voice to hold real-time conversations with constituents, imitating the long-shot Democratic presidential contender as it answered user questions, such as why Democrats shouldn't support incumbent President Joe Biden in the 2024 election.
“While I respect President Biden, the data and conversations with Americans across the country indicate a strong desire for change,” the AI-generated bot told WaPo, replying in a voice reminiscent of Phillips' but with an odd cadence.
Phillips has a business background. He oversaw the family's distillery from 2000 to 2012 before taking the helm of Talenti, the well-known gelato business he had invested in, which he ran until it was sold in 2014. He has represented Minnesota in Congress since 2019 and launched his bid to unseat Biden in October of last year, but his polling numbers have not shown him to be a serious threat to the president.
According to WaPo, the super PAC We Deserve Better funded the development of Dean.Bot and partnered with Delphi, an AI company, to build it. OpenAI suspended Delphi's account late on Friday, just one day after WaPo published an article about the bot, for violating its political rules.
Earlier this month, OpenAI released new guidelines prohibiting developers from building ChatGPT applications for lobbying or political campaigns. The company said it is still studying how effective these tools could be for targeted persuasion.
Even with OpenAI's new guidelines in place, Dean.Bot is unlikely to be the last AI creation of this election season.
The Hill reported that, in response to a surge of AI-generated content, including deepfakes and deceptive audio, Google and Meta have adopted standards requiring politicians and lobbyists to disclose when campaign materials contain content made with generative AI.
According to the outlet, several senators, including Susan Collins and Amy Klobuchar, have also introduced bills to regulate the use of AI in political advertising.
Whether or not those bills become law, viewers will need to keep an eye out for AI-generated content this election season.