The Cause Behind the AI Boom

Artificial intelligence shows up in systems that seem to have little in common: it can serve as a math tutor, a chat partner, an illustrator, or a facial-recognition tool. But in every one of these incarnations, it is a machine that requires almost inconceivable amounts of data and energy to operate.

AI systems such as ChatGPT are housed in buildings full of silicon computer chips. Building the ever-larger models that tech companies such as Microsoft, Google, Meta, and Amazon want requires ever more of those chips, and ever more electricity to run them; there isn't enough of either on the planet.

Over the past 10 years, the computing power required to train the best AI programs has doubled roughly every six months, a pace that may soon become unsustainable. One recent study estimates that, by 2027, AI systems could collectively use almost as much electricity as Sweden. GPT-4, the most powerful model that OpenAI currently offers to customers, is estimated to have required 100 times more computing power to train than GPT-3, which was released only four years earlier. By adding generative AI to its search engine, Google may have roughly doubled the cost of a typical search. Meanwhile, both electricity and the chips that power AI are in limited supply; OpenAI CEO Sam Altman told Congress in May that "we don't have enough" of them. If growth continues at this rate, there may soon not be enough energy in the world to train and run more advanced models without severely straining regional power grids. And even if there were, buying all that electricity would be prohibitively expensive.
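
That pace is worth pausing on. As a rough back-of-the-envelope calculation (ours, not the study's): a doubling every six months means 20 doublings over a decade, and 2^20 ≈ 1,000,000, so training a state-of-the-art model now takes on the order of a million times the computing power it did 10 years ago.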

That is a problem, because American tech corporations worship at the altar of scale: the faith that adding more computing power, electricity, and data to an AI program will inevitably make it better. A new AI race has begun, this one to create the hardware that will let the technology keep growing in power and efficiency. More than any eye-catching application, such as OpenAI's video generator, Sora, this race will determine the future of the technology: which companies dominate, what AI products they can bring to market, and how expensive those products will be. The undisputed winner so far is not a household tech giant but Nvidia, a company that, until about a year ago, was familiar mainly to serious computer gamers. Today it is the third-most-valuable company in the world, well ahead of Google, Amazon, and Meta.

Nvidia's wealth comes from designing the computer chips that are the most important component of AI hardware. The chips that power chatbots, image generators, and other AI tools are wafer-thin rectangles of silicon etched with intricate webs of circuitry. Nvidia's graphics processing units, or GPUs, were originally designed to improve the visual fidelity of video games. But the same capability that lets a PC render more lifelike lighting in games such as Call of Duty can also be used to train advanced AI systems: rendering graphics and training neural networks both boil down to running enormous numbers of simple calculations at once, exactly the kind of parallel arithmetic that GPUs are built for. These GPUs are among the fastest and most reliable chips available, and they enabled the AI revolution that is just getting started.
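
To make that parallel concrete, here is a minimal, illustrative sketch in Python using the JAX library (the example is ours, not the article's; the array shapes and the tanh activation are arbitrary assumptions). It shows that a graphics transform and a neural-network layer are both, at bottom, one large matrix multiplication, which JAX dispatches to a GPU when one is available and to a CPU otherwise.

    # A rough sketch (not from the article): the same GPU primitive, a large
    # matrix multiplication, underlies both 3-D graphics and neural networks.
    import jax
    import jax.numpy as jnp

    # Independent random keys for each array, per JAX convention.
    k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)

    # Graphics-style workload: push a million vertices (in homogeneous
    # coordinates) through one 4x4 transform matrix in a single matmul.
    vertices = jax.random.normal(k1, (1_000_000, 4))
    transform = jnp.eye(4)  # placeholder for a rotation/projection matrix
    moved = vertices @ transform

    # AI-style workload: one dense neural-network layer is the same kind of
    # operation, a batch of inputs multiplied by a matrix of learned weights.
    activations = jax.random.normal(k2, (1024, 4096))
    weights = jax.random.normal(k3, (4096, 4096))
    outputs = jnp.tanh(activations @ weights)

    print(moved.shape, outputs.shape)  # (1000000, 4) (1024, 4096)

Run on a machine with a supported GPU and a GPU build of JAX, both multiplications execute on the accelerator without any code changes; that generality is why chips built for games became the engine of the AI boom.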

To sustain the continued rise of AI, tech corporations have together begun an infrastructure build-out whose cost may soon surpass that of both the interstate highway system and the Apollo missions: tens of billions of dollars of cloud-computing capacity, if not more, purchased every year. Nvidia is the overwhelming beneficiary, holding up to 95 percent of the market for specialized AI chips. Without Nvidia's hardware, it would likely be impossible to develop or run the modern generative-AI software from Microsoft, OpenAI, Meta, Amazon, and others on computers around the world.

When ChatGPT launched, it felt magical, and every other tech company raced to release its own version; that software rivalry was enabled in large part by Nvidia's hardware. The top three language models (OpenAI's GPT-4, Google's Gemini, and the latest iteration of Anthropic's Claude) now perform virtually identically, which makes price at least as significant a factor as capability. And the most expensive aspect of the technology is buying and powering all those AI chips, according to Jai Vipra, an AI-policy researcher and incoming fellow at IT for Change. "Nvidia is the entity that sets the price," Vipra said.

The Big Tech companies do not seem happy about this dependence, and all of them have begun investing heavily in designing their own proprietary chips, which would give them more control over their growing AI businesses in addition to the ability to build larger models. Better computer chips may eventually prove a more significant competitive advantage than better computer code, according to Siddharth Garg, an electrical engineer at NYU who designs machine-learning hardware. Most important, in-house AI chips could be customized to a company's specific AI models, making its products more efficient and allowing them to scale without demanding as much energy.

Tech companies have taken this approach before. Your everyday Google searches, translations, and navigation requests work as well as they do because, in the 2010s, Google built specialized computer chips that let the company handle billions of requests a day with less energy and money. Almost as soon as Apple switched from Intel processors to its own chips in 2020, it was able to make MacBooks that were faster, lighter, and smaller. Likewise, customers might choose Amazon's cloud services over Google's if Amazon's proprietary chips make AI tools run faster. More people might buy an iPhone, a Google Pixel, or a Microsoft Surface tablet if a custom chip lets the device run a more powerful generative-AI model and deliver its results a little quicker. "That's a game changer," Garg said.

Every company aspires to its own independent kingdom, free from the demands of external supply chains and rivals' prices. Whether any of these cloud-computing giants can actually compete with Nvidia remains to be seen, though, and it is highly doubtful that any of them will cut ties with the company; the future will likely involve a mix of Nvidia's designs and custom chips.

Google, for example, has run its flagship Gemini models with less energy and money by using bespoke processors instead of Nvidia's, according to Myron Xie of SemiAnalysis, a semiconductor-research firm. Nonetheless, Nvidia chips power a large number of Google's cloud servers, and Google designed Gemma, its most recent language model, to run on Nvidia GPUs. David Brown, the vice president of compute and networking at Amazon Web Services, told me over email that computer chips are "a critical area of innovation," and the company touts its proprietary AI chips as "delivering the highest scale-out ML training performance at significantly lower costs." Yet Amazon, too, is expanding its collaboration with Nvidia. And in an official statement, Microsoft said: "Our proprietary chips enhance our systems instead of substituting our current NVIDIA-powered hardware."

An all-inclusive AI ecosystem could also be a way to entice customers. Owning an iPhone and a MacBook makes it more convenient to use iMessage, iCloud, an Apple Watch, and other Apple services; similarly, Gemini, Pixel phones, Chromebooks, Google's proprietary AI chips, and Google Cloud services could all be optimized for one another. AI may soon follow the same playbook: OpenAI is reportedly developing AI "agents" that can automate tasks across a variety of devices, and Apple has shifted its focus to generative AI. Sarah Myers West, the managing director of the AI Now Institute, likened the strategy to locking people into a company's stack through vertical integration.

Alongside chip design, tech firms are investing heavily in more energy-efficient software and in renewable sources of energy. "We still don't appreciate the energy needs of this technology," Altman said at the World Economic Forum in January; without a breakthrough, he suggested, there is no way to get there. Efficiency gains, then, may be about more than making AI environmentally friendly. They may be required for the technology to be commercially and physically viable at all.
