Mark Zuckerberg, CEO of Meta, said on Thursday that his company is developing “general intelligence” for AI assistants and “open sourcing it responsibly.” To that end, Meta is merging its two main AI research groups, FAIR and GenAI.
In an Instagram Reel, Zuckerberg said it has become clearer that building full general intelligence is necessary for the next generation of services, and that Meta should open source the technology and make it as broadly available as it responsibly can so that everyone can benefit. The technology, in his words, is so important and the opportunities it brings are so great.
Notably, Zuckerberg did not use the term “artificial general intelligence” (AGI) in his announcement, but a report from The Verge suggests that is the direction he is heading. AGI is a somewhat nebulous term for technology that, like human intelligence, can accomplish general tasks without specialized training. OpenAI, a Meta rival, has made AGI its official objective, and many people worry that it could endanger humanity or displace humans from intellectually demanding jobs.
In an interview, Zuckerberg offered his own definition of AGI: you can debate whether general intelligence is akin to human intelligence, human intelligence plus, or some far-future superintelligence. What really matters, in his view, is the range of capabilities involved, spanning everything that requires reasoning and intuition. He added that AGI will arrive gradually over time rather than all at once.
Business as usual?
Judging by Zuckerberg’s Instagram announcement, the possible development of truly general AI sounds like a casual business development, nothing to worry too much about. Indeed, it seems so harmless and helpful that Meta may even release it as open source for everyone to use (responsibly, of course).
His remarks fit a current tendency to downplay the threat posed by AGI. Speaking earlier this week at the World Economic Forum in Davos, OpenAI CEO Sam Altman said that artificial intelligence “will change the world much less than we all think, and it will change jobs much less than we all think,” adding that AGI might arrive in the “reasonably close-ish future.”
Perhaps calmer heads will prevail and expectations will be lowered as we realize that large language models, fascinating as they are, may not yet be ready for widescale, trustworthy use in many respects. And as Meta Chief AI Scientist Yann LeCun frequently argues, they may not be the route to AGI at all.
In his announcement, Zuckerberg said that Llama 3, the successor to Llama 2, is currently in training, and that Meta is amassing enormous GPU capacity for training and running AI models: 350,000 Nvidia H100s, or roughly 600,000 H100 equivalents of compute once other GPUs are counted.
Here is a transcript of Zuckerberg’s full statement in his Instagram Reel:
Hey everyone. Today, I’m bringing Meta’s two AI research efforts closer together to support our long term goals of building general intelligence, open sourcing it responsibly, and making it available and useful for everyone in all of our daily lives. It’s become clearer that the next generation of services requires building full general intelligence—building the best AI assistants, AIs for creators, AIs for businesses, and more—that means advances in every area of AI. From reasoning to planning to coding to memory and other cognitive abilities. This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can so that everyone can benefit.
And we’re building an absolutely massive amount of infrastructure to support this. By the end of this year, we’re going to have around 350,000 NVIDIA H100s, or around 600,000 H100 equivalents of compute, if you include other GPUs. We’re currently training Llama 3, and we’ve got an exciting roadmap of future models that we’re going to keep training responsibly and safely too.
People are also going to need new devices for AI, and this brings together AI and Metaverse. Because over time, I think a lot of us are going to talk to AIs frequently throughout the day. And I think a lot of us are going to do that using glasses, because glasses are the ideal form factor for letting an AI see what you see and hear what you hear, so it’s always available to help out. Ray-Ban Meta Glasses with MetaAI are already off to a very strong start, and overall across all this stuff, we are just getting started.