
Nvidia's moat will be tested by a potential AI chip IPO

Business Insider, 2 days ago
Jensen Huang presents at Nvidia's GTC conference in 2024 Justin Sullivan/Getty Images
  • Nvidia's software is what makes its products so sticky in the white-hot market for AI chips. 
  • An investor in rival chipmaker Cerebras says 10% of the market is up for grabs.
  • Cerebras is reportedly headed toward an IPO at the end of this year.

How wide is Nvidia's moat? This is the $3 trillion question on the minds of investors these days. At least part of the answer could come in the form of an IPO later this year.

Cerebras Systems, a startup trying to challenge Nvidia on the AI chip battlefield, is reportedly headed for an IPO toward the end of 2024.

Lior Susan, founder and managing partner of Eclipse Ventures, first backed Cerebras in 2015 when the company had five presentation slides and a theoretical plan for a new computer architecture.

Eight years later, the startup offers special, large chips with huge amounts of memory that are suited to generative AI workloads such as model training and inference. These go up against Nvidia chips, including B100s and H100s.

CUDA = annoying

But the most "annoying" thing about competing against Nvidia is CUDA, according to Susan.

CUDA is a software layer built by Nvidia to help developers wrangle and direct its graphics processing units. The platform comprises millions of lines of code that save developers time and money and that, at this point, are a de facto default across much of the AI ecosystem.
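To give a flavor of what is locked in, here is a minimal sketch of a CUDA program (a standard vector-add example, not drawn from the article). The `__global__` keyword, the thread-index arithmetic, and the `<<<...>>>` launch syntax are all CUDA-specific, which is part of why code written this way does not move easily to other hardware.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Minimal CUDA kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory, accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch 4 blocks of 256 threads -- CUDA's triple-chevron syntax.
    vecAdd<<<4, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```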

Cerebras has its own software that works with the startup's chips. But even well-designed alternatives are years behind CUDA. With developer knowledge and habits already established, this lead will be hard to break.

"I personally totally underestimated the CUDA portion of selling chips," Susan said. "You come for the hardware. You stay because of the software."

"As technologists, we always like to cry that we don't like something, but then we're continuing to use it. Because there's nothing better than that," he added.

The meaning of CUDA

Umesh Padval, a semiconductor industry veteran and managing director of Thomvest, called CUDA a fortress. The platform took off slowly after its 2007 launch and snowballed in recent years; today around five million developers write CUDA code, add usable data to the canon, troubleshoot bugs, and support each other.

Over time, Nvidia has also layered more tools and assets on top of CUDA: libraries and pretrained models that startups can tap so that they're not starting from scratch every time they decide to point the powers of the AI revolution at a new use case. Modulus, for example, is a framework for building AI models that respect the laws of physics.


"He's got now millions of software developers who know the language and have been using it for ages," Padval said of Nvidia CEO Jensen Huang. "It will be solved sometime, but it's a big moat."

Getting across this CUDA moat is the key. Internal Amazon documents obtained by Business Insider offer one example. Amazon's Neuron software is designed to help developers build AI tools using AWS's AI chips, but the current setup "prevents migration from NVIDIA CUDA," according to the documents. This is one of the primary elements holding back even some AWS customers from using Amazon's AI chips.

To switch hardware, you must change your software

Anyone building an alternative to CUDA can't ignore it entirely. Startups that build their tech from scratch can try to sidestep it by design, but most AI developers must go through gnarly work to change their software if they want to change their hardware.

AMD, Nvidia's most direct competitor, has its own platform called ROCm. It comes with a tool called Hipify that translates CUDA code into more portable HIP code that can run on AMD GPUs.

"You write a program in CUDA. To run that on AMD GPUs, you use the Hipify tool to switch," said Thomas Sohmers, cofounder and CEO of AI chip startup Positron.ai. "Frankly, I haven't seen anyone use Hipify."

Breaking CUDA's hold

In September, a group of AI luminaries and Nvidia competitors, including Qualcomm, Google, and Intel, formed the UXL Foundation to build a rival software platform that is designed to be chip-agnostic. The software won't be ready for action until the end of 2024. Other attempts to breach the CUDA fortress have fizzled.

Time and inertia are a powerful combination, though, as KG Ganapathi is happy to explain.

His startup Vimaan is building tech for a future "dark warehouse" that requires no humans to operate. It uses computer vision and machine learning to understand and catalogue the shape, size, quantity, and location of every item. Vimaan has received funding from Amazon's Industrial Innovation Fund and is part of Nvidia's Inception program for startups.

The Vimaan team has built its entire system in CUDA and Ganapathi has no interest in changing it now, even if there were clear reasons to do so.

"Am I gonna take a chance to switch out a whole infrastructure that we've built on the Nvidia platform?" he said. "Probably not."

What's up for grabs?

Still, Thomvest's Padval is confident that Nvidia customers want to reduce risk by diversifying their sources of GPUs and the supporting software. That means competitors will keep being fueled by capital, along with product tests and purchases.

"Customers love the leader. But they also feel they want a second source so that they have choices," he said.

Since CUDA is arguably the most significant element of Nvidia's moat, long-term forecasts of the company's market share can offer an indication of how strong that hold is.

Susan, from Eclipse Ventures, said the market is so huge that taking even a small piece of it away from Nvidia is worth the fight.

"I say hey, my biggest competitor is worth $3.5 trillion, so you know, if I get 10% I'm a happy man," he said.
