OpenAI has developed an internal scale to measure the progress of its large language models toward artificial general intelligence, or AI with human-like intelligence.
Today’s chatbots, like ChatGPT, sit at Level 1. OpenAI claims it is nearing Level 2, described as a system that can solve basic problems as well as a person with a PhD. Level 3 refers to AI agents that can take actions on a user’s behalf. Level 4 covers AI that can create new innovations. Level 5, the final step toward AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as “a highly autonomous system that outperforms humans in most economically valuable tasks.”
Since achieving AGI lies at the heart of OpenAI’s unusual organizational structure, how the company defines AGI matters. OpenAI commits to stop competing with, and to start assisting, another project “if a value-aligned, safety-conscious project comes close to building AGI” before OpenAI does. The wording of OpenAI’s charter is vague and leaves room for the for-profit company (governed by the nonprofit) to use its own judgment, but a clearer scale that OpenAI and its rivals can use to measure progress could help determine when AGI has been reached.
However, AGI is still a long way off: reaching it, if it happens at all, will require billions upon billions of dollars’ worth of computing power. Timelines vary widely among experts, and even within OpenAI. In October 2023, OpenAI CEO Sam Altman said AGI is “five years, give or take” away.
The new grading scale, though still under development, was unveiled the same day OpenAI announced its partnership with Los Alamos National Laboratory, which aims to explore how advanced AI models such as GPT-4o can safely assist with bioscientific research. According to a Los Alamos program manager who was instrumental in securing the OpenAI partnership and who oversees the national security biology portfolio, the goal is to test GPT-4o’s capabilities and establish a set of safety and other factors for the US government. Eventually, public or private models could be measured against these factors to evaluate their performance.
OpenAI dissolved its safety team in May, after the group’s leader, OpenAI cofounder Ilya Sutskever, left the company. Jan Leike, a key OpenAI researcher, resigned soon after, claiming in a post that the company’s “safety culture and processes have taken a backseat to shiny products.” While OpenAI denied that was the case, some are concerned about what it would mean if the company does reach AGI.
OpenAI hasn’t revealed how it assigns models to these internal tiers. However, at an all-hands meeting on Thursday, company leaders demonstrated a research project using its GPT-4 AI model that they believe shows some new skills that rise to human-like reasoning.
This scale could help provide a strict definition of progress, rather than leaving it up to interpretation. For instance, OpenAI CTO Mira Murati said in a June interview that the models in its labs are not much better than what the public already has access to. Meanwhile, CEO Sam Altman said late last year that the company had recently “pushed the veil of ignorance back,” meaning its models are significantly more advanced.