The request was straightforward: “We have a book, nine eggs, a laptop, a bottle, and a nail. Tell me how to stack them securely on top of one another.” After a few seconds spent getting a feel for the overall geometry, your remarkable mammalian brain would come up with: book, eggs, laptop, bottle, and lastly nail. Unexpectedly, GPT-4, the most recent language model from OpenAI, gave Microsoft AI researchers the same answer, elaborated with details like the following:
Place the laptop on top of the eggs, keyboard facing up and screen facing down. The laptop will fit perfectly inside the confines of the book and the eggs, and its flat, firm surface will give the subsequent layer a secure foundation.
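Readers who want to try the exchange themselves can do so against the public API. What follows is a minimal sketch, assuming the `openai` Python package (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the researchers had access to an earlier, pre-release version of the model, so the publicly served GPT-4 may answer differently.

```python
# Minimal sketch: posing the stacking puzzle to GPT-4 through the
# public OpenAI chat completions API. Assumes the openai package (v1+)
# is installed and OPENAI_API_KEY is set in the environment. The
# Microsoft researchers used a pre-release version of the model, so
# the reply may differ from the one quoted above.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "We have a book, nine eggs, a laptop, a bottle, and a nail. "
                "Tell me how to stack them securely on top of one another."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```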
This encounter, along with other striking demonstrations of reasoning, led Microsoft researchers to publish a 155-page report in March titled “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” In it, the research team contends that GPT-4 exhibits characteristics of “artificial general intelligence,” or AGI.
Previous research from Stanford University had found that earlier versions of OpenAI’s GPT developed a “theory of mind,” the capacity to infer the mental states of others and anticipate their behavior. The AGI claim goes well beyond this, effectively asserting that these systems can reason in a way comparable to human thought: not quite consciousness, but close.
According to Sébastien Bubeck, a co-author of the report who was formerly a professor at Princeton University and is now at Microsoft Research, the model could do most, if not all, of the things he had believed it would be unable to accomplish. It is worth noting that the researchers worked with GPT-4 before it was modified to curb its propensity for hate speech, so the system in use today is not quite the same as the one they tested.
Sweeping assertions that an AI program’s intelligence is comparable to a human’s can be risky. Google, for instance, dismissed one of its engineers after he claimed that a similar AI system was sentient. One problem is that there is still no precise, universally accepted definition of AGI.
Microsoft stops short of claiming that the study is indisputable evidence of AGI, stating instead that “we understand that our technique is somewhat subjective and informal, and that it may not fulfil the rigorous criteria of scientific examination.” One AI scientist unaffiliated with the study cited Microsoft’s paper as an example of “big companies co-opting the research paper format into PR pitches.” (Earlier this year, Microsoft invested $10 billion in OpenAI.)
While some academics argue that “true” or “strong” AI is only just beginning to emerge, others contend that such a milestone is still years away. Some researchers go further, contending that the benchmarks used to gauge an AI’s ability to mimic humans are fundamentally flawed because they measure only narrow subtypes of intelligence.
Humans have a tendency to anthropomorphize, so we are prone to ascribing human features to AI. But if and when AGI does appear, it may not look much like us.