SAN JOSE — “I want you to understand that this is not a concert,” Nvidia CEO Jensen Huang said before an audience large enough to fill San Jose's SAP Center, by way of introducing his company's GTC event, which is perhaps the exact opposite of a concert. “You've come to a developer conference. There's going to be a lot of science described: algorithms, computer architecture, mathematics. You may feel a very heavy weight in the room and suddenly sense that you've come to the wrong place.”
It may not have been a rock concert, but the leather-jacketed, 61-year-old CEO of the world's third most valuable company by market capitalization certainly had quite a few fans in the audience. Huang co-founded the company in 1993 with a mission to push past the limits of general-purpose computing, and “accelerated computing” became his mantra for Nvidia: wouldn't it be great to build specialized chips and boards instead of general-purpose ones? Nvidia's chips give graphics-hungry gamers the tools they need to play games at higher resolutions, higher quality and higher frame rates.
In some ways, Monday's keynote was a return to the company's original mission. “I want to show you the soul of Nvidia, the soul of our company, at the intersection of computer graphics, physics and artificial intelligence, all intersecting inside a computer,” Huang said.
Then, for the next two hours, Huang did something unusual for a CEO on a stage that size: he geeked out. Hard. Anyone who came to the keynote expecting a deft, Tim Cook-style, audience-pleasing presentation was bound to be disappointed. All in all, the keynote was tech-heavy, acronym-filled and unapologetically a developer conference.
The need for bigger GPUs
Nvidia started out making graphics processing units (GPUs). If you've ever built your own computer, you're probably picturing the graphics cards that slot into PCIe slots. That's where the journey began, and the company has come a long way since then.
The company announced the all-new Blackwell platform, and it is truly a monster. Huang said the processor “pushes chip size to the limits of physics.” It uses the combined power of two chips to deliver speeds of 10 Tbps.
“I'm holding about $10 billion worth of equipment here,” Huang said, holding up a Blackwell prototype. “The next one will cost $5 billion to develop. Fortunately, it gets cheaper from there.” Put enough of these chips together and you get some genuinely staggering computing power.
The previous generation of Nvidia's AI-optimized GPUs was called Hopper. Depending on how you measure it, Blackwell is 2 to 30 times faster. Huang explained that training the GPT-MoE-1.8T model took 8,000 GPUs, 15 megawatts of power and 90 days; on the new system, the same job would take just 2,000 GPUs and use 25% of the power.
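To put those figures in perspective, here is a quick back-of-the-envelope calculation based on the numbers Huang quoted; the per-GPU breakdown is our own arithmetic, not something Nvidia claimed.

```python
# Back-of-the-envelope arithmetic on the training figures cited onstage:
# 8,000 Hopper GPUs at 15 MW for 90 days vs. 2,000 Blackwell GPUs at 25% of the power.
hopper_gpus, hopper_power_mw = 8_000, 15.0
blackwell_gpus = 2_000
blackwell_power_mw = 0.25 * hopper_power_mw  # "25% of the power" -> 3.75 MW

print(f"Blackwell run: {blackwell_gpus} GPUs at ~{blackwell_power_mw:.2f} MW")

# Per-GPU draw stays roughly flat (~1.9 kW in both cases), so the savings come
# from needing a quarter as many GPUs for the same 90-day training job.
print(f"Hopper per GPU:    {hopper_power_mw * 1000 / hopper_gpus:.2f} kW")
print(f"Blackwell per GPU: {blackwell_power_mw * 1000 / blackwell_gpus:.2f} kW")
```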
These GPUs push enormous amounts of data around, which made for a natural segue into the next topic Huang covered.
What's next
Nvidia rolled out a new toolset for automakers working on self-driving cars. The company was already a major player in robotics, and it doubled down with new tools to help roboticists make their robots smarter.
The company also introduced Nvidia NIM, a software platform aimed at simplifying the deployment of AI models. NIM uses Nvidia hardware as its foundation and aims to accelerate companies' AI efforts by providing an ecosystem of AI-ready containers. It supports models from a variety of sources, including Nvidia, Google and Hugging Face, and integrates with platforms such as Amazon SageMaker and Microsoft Azure AI. NIM's capabilities will expand over time, including tools for generative AI chatbots.
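For developers, the pitch is that a deployed NIM container looks like an ordinary inference endpoint. As a rough sketch, assuming a container is already running locally and exposes an OpenAI-compatible chat API (the port, endpoint path and model name below are illustrative assumptions, not details from the keynote), querying it could look something like this:

```python
# Minimal sketch of querying a locally running NIM inference container over HTTP.
# Assumptions (not from the keynote): the container listens on port 8000 and serves
# an OpenAI-compatible /v1/chat/completions endpoint; the model name is a placeholder.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize what Nvidia announced at GTC."}
    ],
    "max_tokens": 200,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()

# The response follows the familiar OpenAI-style schema.
print(response.json()["choices"][0]["message"]["content"])
```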
“Anything that can be digitized, so long as there is some structure we can apply patterns to, means we can learn the patterns,” Huang said. “And if you can learn the patterns, you can understand the meaning. Once you understand the meaning, you can generate it as well. And here we are, in the middle of a generative AI revolution.”