It won’t be inexpensive to win the AI wars.
Speaking to an audience in Vancouver on Monday, Demis Hassabis, who leads Google's renowned DeepMind research division and is perhaps the central figure in Alphabet's AI ambitions, offered a reality check: he estimated that the search giant will spend over $100 billion on AI development. He shared the enormous figure in response to a question about what the competition was up to.
A report published in The Information last month claimed that Microsoft and OpenAI were drawing up plans for "Stargate," a $100 billion supercomputer with "millions of specialized server chips" that would power OpenAI's ChatGPT.
When asked about his rivals' reported supercomputer and its price tag, Hassabis was quick to suggest that Google's spending could exceed it: "We don't discuss our exact figures, but I believe that over time we're investing more."
The generative AI boom has already driven a massive surge in investment: AI firms alone raised about $50 billion last year. Hassabis' remarks, however, suggest that the race to lead the AI business is about to get far more expensive.
That is particularly true for companies like Google, Microsoft, and OpenAI, which are vying to be the first to build artificial general intelligence — AI that can reason and act as capably as a human.
Still, it is striking to consider that a single corporation might spend over $100 billion on a technology that some believe is overhyped.
Chunky chips
It is worth considering where such funds might go. For a start, chips will account for a large share of the development cost; for companies racing to build smarter AI, they are among the most expensive line items. Put simply, more chips mean more computing power, which lets you train AI models on larger amounts of data.
Chips from third parties like Nvidia have been crucial to companies building large language models such as Google's Gemini and OpenAI's GPT-4 Turbo, though those companies are increasingly trying to design chips of their own.
Training the models themselves is also getting more expensive.
According to Stanford University's annual AI Index report, released this week, the cost of training cutting-edge AI models has climbed to unprecedented heights.
OpenAI's GPT-4 reportedly required "an estimated $78 million worth of compute to train," compared with the $4.3 million used to train GPT-3 in 2020. Training Google's Gemini Ultra, meanwhile, cost $191 million. Back in 2017, training the original transformer model, the architecture underpinning today's large language models, cost roughly $900.
According to the report, the cost of training AI models is directly tied to their computational needs; if artificial general intelligence (AGI) is the ultimate objective, those costs can only be expected to escalate.