Nvidia shares soared in extended trading on Wednesday, pushing the company’s market value to nearly $1 trillion, after it announced a surprisingly strong outlook and CEO Jensen Huang said the company would have a “huge record year.”
The increase in sales is due to soaring demand for Nvidia’s graphics processors (GPUs), which power AI applications from Google, Microsoft, OpenAI, and others.
Fueled by demand for AI chips in data centers, Nvidia is on track to post $11 billion in revenue this quarter, beating analyst estimates of $7.15 billion.
“The ignition point was generative AI,” Huang said in an interview with CNBC. “We know CPU scaling is slowing down, accelerated computing is the way forward, and a killer app has emerged.”
Nvidia believes the way computers are built is fundamentally changing, which could drive further growth. Data center components could even become a trillion-dollar market, Huang said.
Historically, the most important part of a computer or server has been the central processor, or CPU. That market was dominated by Intel, with AMD as its biggest rival.
With the advent of AI applications that require massive amounts of computing power, GPUs have taken center stage, with state-of-the-art systems using as many as eight GPUs per CPU. Nvidia currently dominates the AI GPU market.
“In the past, data centers were largely CPUs used for file retrieval, but in the future they will generate data,” Huang said. “Instead of retrieving data, you’ll retrieve some of it, but most of the data will have to be generated using AI.”
“In other words, instead of millions of CPUs, far fewer CPUs will be used, but they will be connected to millions of GPUs,” Huang continued.
For example, Nvidia’s own DGX system is essentially an AI training computer in a box, combining eight high-end Nvidia H100 GPUs with just two CPUs.
Google’s A3 supercomputer pairs eight H100 GPUs with a single high-end Xeon processor from Intel.
That’s one reason why Nvidia’s data center business grew 14% in the first calendar quarter, while AMD’s data center division remained flat and Intel’s AI and data center division declined 39%.
Additionally, Nvidia’s GPUs tend to be far more expensive than most central processors. Intel’s latest-generation Xeon CPU can list for as much as $17,000, while a single Nvidia H100 can sell for around $40,000 on the secondary market.
Nvidia will face increased competition as the AI chip market heats up. AMD has a competitive GPU business, especially in gaming, and Intel has its own line of GPUs as well. Startups are developing new classes of chips dedicated to AI, and mobile-focused companies such as Qualcomm and Apple continue to push the technology forward so that one day it could run in your pocket instead of a giant server farm. Google and Amazon are designing their own AI chips, too.
But for companies building applications like ChatGPT today, Nvidia’s high-end GPUs remain the chips of choice. Processing terabytes of data to train models is expensive, and so is using those models later to generate text, images, or predictions, a stage called “inference.”
Analysts say Nvidia remains the leader in AI chips thanks to proprietary software that makes it easy to tap all of its GPU hardware’s capabilities for AI applications.
Huang said Wednesday that replicating the company’s software is not easy.
“You have to design all the software, all the libraries, and all the algorithms, integrate them into a framework, and optimize them for the architecture, and not just the architecture of one chip but of the entire data center,” Huang said on a conference call with analysts.