What’s next for Google’s AI team?

In recent weeks, Google’s engineers have been working on an AI sprint. Now that a major launch has passed, CEO Sundar Pichai believes it’s time to get some rest.

On the “Google AI: Release Notes” podcast, released Wednesday, Pichai said, “I think some folks need some sleep.” He added that he hopes he and his teammates “get a bit of rest.”

Following the November 18 launch of its latest AI model, Gemini 3, Google is approaching a $4 trillion market valuation. Its stock price has risen about 70% this year, including a 12% jump after Gemini 3’s release.

Gemini 3 has drawn positive reviews. In a post on X this week, Salesforce CEO Marc Benioff called it an “insane” leap in reasoning, speed, and multimodal capabilities, saying he is “not going back” to ChatGPT after only “2 hours on Gemini 3.”

After years of trailing ChatGPT maker OpenAI, the announcement reignited discussion about whether Google is emerging as the new leader in the AI race.

According to Pichai, Google has spent years quietly laying the groundwork for a long-term AI strategy.

“In 2016, I wanted the whole company to be AI-first,” added Pichai.

The foundation for the tech giant’s embrace of AI was laid by the creation of Google Brain in 2012, the acquisition of DeepMind in 2014, AlphaGo’s victory at the ancient Chinese board game Go, and the introduction of its first tensor processing unit — the in-house chips it later used to train Gemini.

“When I saw all of that in 2016, it was obvious to me that another platform transition was on the horizon,” the CEO said, describing the decision as a full-stack bet on positioning Google as an AI-first company.

However, Pichai said generative AI’s rapid adoption presented the company with an even greater opportunity, which is when Gemini was launched. He said the company built up its AI infrastructure, merged its DeepMind and Google Brain teams, and began moving even faster.

The core idea, according to Pichai, is a “full-stack” approach to innovation: improving everything from infrastructure to the models’ pre-training, post-training, and test-time compute.

However, Pichai noted that this approach takes time. When Google first tried to meet the generative AI moment, he said, it lacked capacity and had to invest across a number of areas to “get it to the scale.”

“If you looked from the outside, it would appear that we were silent or behind, but we were putting all of the building blocks in place before executing on top of it,” he explained.

The tide has since turned.

“We’re on the other side now,” he said.