Paper Clip Prank Sends a Warning to OpenAI’s Offices

Thousands of paper clips were sent to OpenAI’s offices as part of a sophisticated practical joke by one of its main competitors.

An employee of rival Anthropic sent the clips, shaped like OpenAI’s iconic spiral logo, to the AI startup’s San Francisco offices last year, subtly implying that the company’s approach to AI safety could result in the extinction of humanity.

The prank alluded to the well-known “paper clip maximizer” scenario, a thought experiment by philosopher Nick Bostrom in which an AI tasked with producing as many paper clips as possible might inadvertently exterminate humanity in pursuit of its objective.

Bostrom’s warning is that we should be careful what we wish for from a superintelligence, because we might get exactly that.

Anthropic was founded in 2021 by former OpenAI employees who left the company after disagreements over how to develop AI safely.

Since the record-breaking success of ChatGPT last year and a multibillion-dollar investment deal with Microsoft in January, OpenAI has rapidly expanded its commercial offerings.

In recent weeks, however, concerns about AI safety at the company have resurfaced amid Sam Altman’s chaotic firing and subsequent reinstatement.

According to reports, OpenAI’s non-profit board initially decided to fire Altman over concerns about the pace of the company’s AI development and fears that it could hasten the arrival of superintelligent AI capable of endangering humanity.

Ilya Sutskever, the chief scientist at OpenAI, participated in the board coup against Altman before abruptly joining calls for his reinstatement. Sutskever has been vocal about the existential risks that artificial general intelligence (AGI) could pose to humanity, and he and Altman have reportedly disagreed on the matter.

Sutskever allegedly led OpenAI’s staff in a chant of “feel the AGI” at the company’s holiday party, declaring that their goal was to build an AGI that loves mankind, and he also reportedly commissioned and set fire to a wooden effigy representing “unaligned” AI.