Nvidia has had a tumultuous ride on its journey to becoming a $3 trillion powerhouse in the AI industry, and now regulators want to know whether the company got there fairly.
France's competition regulator plans to charge the Silicon Valley semiconductor giant over concerns it has engaged in anti-competitive behavior, Reuters reported, citing people familiar with the matter.
It follows reports last month of scrutiny involving the U.S. Department of Justice and the Federal Trade Commission, and the case is likely to force tough questions on AI industry giants like Nvidia and Microsoft about how they use their market power.
An Nvidia spokesperson declined to comment to BI.
Nvidia has emerged as a dominant force in the generative AI boom, with companies including OpenAI, Google, and Meta all courting its billionaire CEO, Jensen Huang, to secure access to the chips, called GPUs, in which the company specializes.
Demand has been driven by the role these GPUs play in training popular AI models, and in May Nvidia released its latest earnings showing just how relentless that demand remains: first-quarter revenue was up 262% year over year, to $26 billion.
The company briefly surpassed Microsoft last month to become the world's most valuable, with a market capitalization of about $3.34 trillion, further solidifying its dominance.
But while Nvidia’s hardware has been getting all the attention, regulators also seem keen to focus on the software part of the company’s business: CUDA.
In a report on the "competitive functioning" of the generative AI sector, published on Friday after an investigation launched in February, France's competition authority expressed concern about "the sector's reliance on Nvidia's CUDA software."
What is CUDA?
Nvidia’s CUDA software improves hardware usability.
Slaven Vlasic/Getty Images for The New York Times, Chelsea Jia Feng/BI
CUDA stands for “Compute Unified Device Architecture” and is a computing platform introduced by Nvidia in 2006.
At the time, Nvidia's GPUs catered to a niche gaming market and boasted better graphics processing power than competitors' chips, thanks to a technique called parallel computing.
But Nvidia wanted to expand the use of its GPUs beyond gaming, and that's where CUDA came in: a software platform that enables GPUs to handle a wide variety of computing tasks.
It worked. The benefit of CUDA today is that it effectively functions as a plug-and-play system: no matter how diverse and complex an AI company's workloads are, CUDA ensures that Nvidia's GPUs can be put to use on almost any AI project.
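To make the "parallel computing" idea concrete, here is a minimal, self-contained CUDA sketch (not from the article; the kernel name and array sizes are illustrative). It adds two large arrays by launching about a million GPU threads, each handling one element, which is the basic pattern CUDA exposes to developers:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of the output; thousands of
// threads execute in parallel across the GPU's cores.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // ~1 million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified memory is visible to both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same source compiles and runs on successive GPU generations, which is the forward-and-backward compatibility discussed below.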
What makes Nvidia work?
Jensen Huang presenting at Nvidia's GTC conference.
Justin Sullivan/Getty Images
After Nvidia’s GTC conference in March, which analysts dubbed the “Woodstock of AI,” James Wang, general partner at venture capital firm Creative Ventures, wrote a blog explaining why Nvidia’s new GPU announcements weren’t as important to the company’s success as CUDA.
He offered several explanations.
First, CUDA is adaptive: as new GPUs are released, the software remains "forward and backward compatible," Wang wrote in a Substack blog post.
Wang also noted that CUDA has a number of “very useful tools” that are supported by a dedicated community of CUDA developers. Put simply, these tools are designed and updated to make life easier for companies looking to use Nvidia’s chips.
“NVIDIA’s advantage is due to years and billions of dollars invested in the CUDA ecosystem, evangelism, and education of the community building AI,” Wang wrote.
Huang is credited in Silicon Valley with building powerful software systems that give Nvidia a competitive advantage, but other companies are trying to develop rival products.
Nvidia's chip rival AMD, for example, which is led by Huang's cousin, Lisa Su, offers a CUDA alternative called ROCm. But it was released in 2016, a decade after CUDA, and has never attracted the same developer attention.
The question now for regulators is whether Nvidia gained an advantage by unfairly locking companies that use its GPUs into CUDA.
As the French regulator noted in its opinion on Friday, the software is “the only one that is 100% compatible with GPUs, which have become essential for accelerated computing.”