Nvidia is best known for its AI chips, but its most valuable creation may be a business barrier that keeps customers in and competitors out. That barrier is built from software as well as silicon.
Over the past two decades, Nvidia has built what is known in the technology industry as a “walled garden,” much like the one created by Apple. But while Apple’s ecosystem of software and services is aimed at consumers, Nvidia has long focused on the developers who use its chips to build AI systems and other software.
Nvidia’s walled garden helps explain why the company is unlikely to lose significant AI market share in the coming years, even in the face of competition from rival chip makers and tech giants like Google and Amazon.
It also explains why, in the long run, the battle over the territory Nvidia now controls is likely to be fought over the company’s software prowess rather than its chip designs, and why rivals are racing to develop software that can breach Nvidia’s protective wall.
Understanding Nvidia’s walled garden starts with a software platform called CUDA. When Nvidia introduced it in 2007, CUDA solved a problem no one else had: how to run non-graphics software, such as encryption algorithms and cryptocurrency mining, on Nvidia’s specialized chips, which were designed for labor-intensive applications like 3-D graphics and videogames.
CUDA allowed those chips, known as graphics processing units, or GPUs, to perform a huge range of other computations. Among the software built on CUDA is the AI software whose explosive growth in recent years has made Nvidia one of the most valuable companies in the world.
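To make that concrete, here is a minimal sketch of what general-purpose GPU code looks like in CUDA. It is an illustrative example rather than code from any Nvidia product: a “kernel” that adds two arrays, with each GPU thread handling one element so that thousands of additions run in parallel. The names and sizes are hypothetical.

```
#include <cstdio>
#include <cuda_runtime.h>

// A CUDA kernel: a function that runs on the GPU, once per thread.
// Each thread computes a single element of the output array.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Unified memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Because every element is independent, the GPU can compute thousands of them at once; that same property is what makes the matrix math at the heart of AI run so much faster on GPUs than on conventional processors.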
And, crucially, CUDA was only the beginning. Year after year, Nvidia responded to the needs of software developers by releasing specialized libraries of code, allowing a vast array of tasks to run on its GPUs at speeds that were impossible with conventional, general-purpose processors like those made by Intel and AMD.
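As an illustration of what those libraries buy developers, here is a hedged sketch using cuBLAS, Nvidia’s GPU linear-algebra library: a single call runs an entire matrix multiplication, the workhorse operation of AI training, on Nvidia’s hand-tuned GPU code. The matrix sizes and values are arbitrary placeholders.

```
// compile with: nvcc gemm_example.cu -lcublas
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 512;                    // multiply two n x n matrices
    size_t bytes = n * n * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha*A*B + beta*C, executed by Nvidia's tuned GPU kernels.
    // (cuBLAS treats matrices as column-major; with these uniform
    // values the layout doesn't affect the result.)
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f\n", C[0]);          // expect 2.0 * n = 1024.0
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

The convenience cuts both ways: once calls like these are woven through a codebase, moving to a rival’s chips means replacing every one of them, which is precisely the lock-in that competing open-source projects are trying to break.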
The importance of its software platforms explains why Nvidia has for many years employed more software engineers than hardware engineers. Chief Executive Jensen Huang has called the company’s focus on combining hardware and software “full-stack computing,” meaning that Nvidia makes everything from the chips to the software used to build AI.
Every AI chip a rival introduces is up against systems that Nvidia’s customers have used to build mountains of code for more than 15 years. Moving that software to a competitor’s system can be difficult.
At its shareholder meeting in June, Nvidia announced that CUDA now includes more than 300 code libraries and 600 AI models, and supports 3,700 GPU-accelerated applications used by more than five million developers at roughly 40,000 companies.
The immense size of the market for AI computing has prompted a variety of companies to band together to take on Nvidia. Atif Malik, a Citi Research analyst who covers semiconductors and networking equipment, projects that the market for AI-related chips will reach $400 billion by 2027. (Nvidia’s revenue was about $61 billion in its fiscal year ended in January.)
Much of that collaboration centers on developing open-source alternatives to CUDA, says Bill Pearson, an Intel vice president focused on AI for cloud-computing customers. Intel engineers are contributing to two such projects, one of which also involves Arm, Google, Samsung and Qualcomm. OpenAI, the company behind ChatGPT, is working on an open-source effort as well.
Investment money is also pouring into startups working on CUDA alternatives. Part of what is driving those investments is the prospect that engineers at some of the world’s biggest tech companies could, collectively, make it possible for businesses to use whatever chips they like, escaping what some in the industry call the “CUDA tax.”
Groq, one company that stands to benefit from all this open-source software, recently announced plans to raise $640 million at a $2.8 billion valuation to develop chips that compete with Nvidia’s.
Tech-industry giants are also investing in alternatives to Nvidia’s chips. Amazon and Google both make their own custom processors for training and deploying AI, and Microsoft announced in 2023 that it would follow suit.
AMD is one of Nvidia’s most successful rivals in the AI-chip market. Revenue from its Instinct line of AI chips is expected to reach $4.5 billion in 2024, still a fraction of Nvidia’s sales, but the company is investing heavily to win over software engineers, says AMD vice president Andrew Dieckman.
AMD has significantly expanded its software resources, he says. Last month the company agreed to pay $665 million to acquire Silo AI, a deal that brings on 300 more AI engineers.
Meta Platforms and Microsoft, both major Nvidia customers, also buy AMD’s AI chips, reflecting a desire to foster competition for one of the most expensive items in the budgets of both industry titans.
Still, Malik of Citi Research predicts that Nvidia will hold on to a roughly 90% share of the market for AI-related chips for the next two to three years.
To weigh the benefits and drawbacks of the alternatives, it helps to understand what it takes to build an AI along the lines of ChatGPT without using any Nvidia hardware or software.
Babak Pahlavan, CEO of the startup NinjaTech AI, says he would have built his company on Nvidia’s hardware and software if he could have afforded to. But shortages of Nvidia’s powerful H100 chips have kept prices high and supplies scarce.
Eventually, Pahlavan and his co-founders turned to Amazon, which makes its own custom chips for training AI, the process by which such systems “learn” from enormous amounts of data. After months of work, the team trained its AI on Amazon’s Trainium chips. It wasn’t easy.
“There were lots of challenges and bugs,” says Pahlavan, whose team met with an Amazon software team four times a week for months. Once the two companies worked through the problems, NinjaTech’s AI “agents,” which carry out tasks on behalf of users, launched in May. The company says its service now has more than a million monthly users, all of them served by models trained and optimized on Amazon’s chips.
According to Gadi Hutt, an executive at Amazon Web Services whose team collaborated with NinjaTech AI, “there were a few bugs on both sides in the beginning.” However, he adds, “we’re off to the races” now.
Amazon’s custom AI chips are also used by Anthropic, Airbnb, Snap, Pinterest and other customers. Amazon offers its cloud-computing clients access to Nvidia chips as well, but they cost more than Amazon’s own AI chips. Even so, Hutt says, customers would need some time to make the switch.
NinjaTech AI’s experience illustrates one big reason companies like it are enduring the hardship and extra time it takes to build AI outside Nvidia’s walled garden: cost.
To serve its more than one million monthly users, NinjaTech pays Amazon more than $250,000 a month for cloud services, Pahlavan says. Running the same AI on Nvidia chips would cost between $750,000 and $1.2 million, he adds, roughly three to five times as much.
Nvidia is well aware of both the competitive pressure and the high cost of buying and running its processors. Huang has pledged that the company’s next generation of AI-focused chips will lower the cost of training AI on Nvidia hardware.
Much of Nvidia’s fate rests on inertia, the same force that has historically kept businesses and consumers inside a number of other walled gardens, including Apple’s.