“The world is in peril,” wrote the departing leader of Anthropic’s Safeguards Research team on his way out the door. An OpenAI researcher, also leaving, warned that the system has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
They’re part of a wave of artificial intelligence researchers and executives who aren’t just departing their companies; they’re publicly sounding the alarm on their way out, drawing attention to what they perceive as major warning lights.
While Silicon Valley is known for rapid turnover, the latest churn comes as market leaders such as OpenAI and Anthropic race toward IPOs, a step that could accelerate their growth while drawing intense scrutiny of their operations.
In recent days, a number of high-profile AI employees have decided to leave, with some expressly warning that their employers are moving too quickly and downplaying the technology’s flaws.
Zoë Hitzig, an OpenAI researcher for the past two years, announced her departure Wednesday in a New York Times column, citing “deep reservations” about the company’s emerging advertising strategy. Hitzig, who warned of ChatGPT’s potential to manipulate users, said the chatbot’s archive of user data, which includes “medical fears, relationship problems, and beliefs about God and the afterlife,” poses an ethical quandary precisely because people believed they were talking to a program with no ulterior motives.
Hitzig’s criticism coincides with the tech news site Platformer reporting that OpenAI dissolved its “mission alignment” team, which was established in 2024 to further the company’s objective of guaranteeing that the pursuit of “artificial general intelligence”—a hypothetical AI capable of human-level thought—benefits all of humanity.
The leader of Anthropic’s Safeguards Research team, Mrinank Sharma, also announced his desire to leave the company in a mysterious letter posted on Tuesday, saying that “the world is in peril.”
Sharma’s letter included only oblique allusions to Anthropic, the company that created the Claude chatbot. He didn’t disclose why he was departing, but said it was “clear to me that the time to move on has come” and that “throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.”
Anthropic expressed gratitude for Sharma’s efforts to advance AI safety research. The company said he was not its head of safety and was not responsible for broader safety measures across the organization.
Meanwhile, two co-founders of xAI resigned within 24 hours of each other this week, announcing their departures on X. That leaves only half of xAI’s founders at the company, which is merging with Elon Musk’s SpaceX to form the world’s most valuable private company. Over the past week, at least five more xAI employees have announced their exits on social media.
The two co-founders’ reasons for leaving were not immediately clear, and xAI did not respond to a request for comment. In a social media post on Wednesday, Musk said xAI had been “reorganized” to accelerate growth, which “unfortunately required parting ways with some people.”
Although it’s common for top people to switch jobs in a new field like artificial intelligence, the volume of departures at xAI in such a short time frame is noteworthy.
The startup’s Grok chatbot has drawn criticism from around the world after it was allowed to produce nonconsensual sexual images of women and children for weeks before the team stepped in to stop it. Grok also has a history of producing antisemitic remarks in response to user prompts.
Other recent departures highlight the tension between researchers concerned about safety and senior executives eager to make money.
The Wall Street Journal reported on Tuesday that OpenAI had fired a senior safety executive for opposing the introduction of an “adult mode” on ChatGPT that permits pornographic material. OpenAI terminated Ryan Beiermeister, the safety executive, on the grounds that she discriminated against a male employee, a claim Beiermeister told the Journal was “completely untrue.”
Her dismissal had nothing to do with “any issue she raised while working at the company,” OpenAI said.
Since ChatGPT’s launch in late 2022, high-level defections have been a feature of the AI narrative. The “Godfather of AI,” Geoffrey Hinton, quit his position at Google shortly after and began warning publicly about the existential dangers he believes AI poses, such as the potential for severe economic disruption in a future where many people “will not be able to know what is true anymore.”
Doomsday forecasts abound, particularly from AI CEOs with a financial interest in exaggerating the power of their own technologies. One such forecast went viral this week, when HyperWrite CEO Matt Shumer published a nearly 5,000-word essay arguing that the latest AI models are already rendering some IT jobs obsolete.
“We’re telling you what already occurred in our own jobs,” Shumer wrote, “and warning you that you’re next.”