The top executives of OpenAI are at odds over how to address the existential threat that artificial intelligence poses, as Sam Altman’s abrupt ouster makes clear.
Although Altman has spoken about the “critical” need to regulate AI and its “potentially scary” risks, the former CEO of OpenAI has primarily served as a poster child for the industry’s rapid advancement.
Altman was known for positioning the company aggressively, pursuing major funding and rapid development to keep OpenAI ahead in the AI arms race.
In September, for example, Altman reportedly sought $1 billion in funding from SoftBank to run tools like ChatGPT. Masayoshi Son, the Japanese conglomerate’s CEO, believes in the potential of artificial intelligence and has said he uses ChatGPT daily.
Ilya Sutskever, OpenAI’s chief scientist, cofounder, and a board member involved in Altman’s termination, has favored proceeding with greater caution, given the risks AI poses to society.
Sutskever established a “Superalignment” team within the company to ensure that GPT-4, the technology underlying ChatGPT, and its successors would not become hazardous to humans.
Two other members of the OpenAI board, technology entrepreneur Tasha McCauley and Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology, are also connected to the so-called effective altruism movement, which seeks to ensure that advances in artificial intelligence serve human interests.
If their concerns about Altman stemmed from the movement’s principles, it would not be the first time that disagreements over the risks of AI have driven people out of the company.
Dario Amodei and several other OpenAI employees left the company in 2021 to found Anthropic, a rival whose primary goal is to build safer AI.
Elon Musk, who resigned from OpenAI’s board in 2018 over a conflict of interest with Tesla, has also expressed concern that the company does not prioritize safety highly enough.
OpenAI’s board has offered no further explanation for Altman’s termination beyond stating that it had “lost confidence” in him and that he had not been “consistently candid” in his communications.
The abrupt dismissal, however, has triggered a swift backlash. Several executives resigned in protest, and a large number of OpenAI employees are now demanding that Altman return.