6 Bold Statements By Nvidia CEO Jensen Huang On AI’s Future

At a fireside chat at SIGGRAPH 2024 in Denver last week, Nvidia CEO Jensen Huang talked about how AI will impact jobs, why Nvidia’s success happened against major odds, and why he believes GPUs and generative AI will reduce energy consumption on the Internet.

Jensen Huang has led Nvidia to become one of the world’s most valuable companies thanks to decades of investments in accelerated computing capabilities that enabled the first wave of big generative AI applications, then ignited a flurry of development.

At a fireside chat at the SIGGRAPH 2024 conference in Denver last week, the CEO and founder of the AI computing giant made several bold statements about the transformative power of AI, his confidence in the tech industry’s ability to address major challenges with AI and why he believes Nvidia will remain a central player for the foreseeable future.

[Related: The 10 Biggest Nvidia News Stories Of 2024 (So Far)]

“Everyone is moving from CPUs to accelerated computing because they want to save energy. Accelerated computing helps you save so much energy: 20 times, 50 times and doing the same processing,” said Huang, who was named CRN’s second most influential executive of 2024. “So the first thing that we have to do as a society is accelerate every application we can.”

What follows are six bold statements Huang made about how AI will impact jobs, how the tech industry is dealing with AI models that produce false or misleading information, how Nvidia defied the odds to get where it is now, and why and how he thinks the industry will solve AI’s growing energy consumption problem.

Huang: AI Will ‘Very Likely’ Change Everyone’s Jobs

When Huang was asked to what extent he thinks AI will augment tasks or even replace tasks done by humans, the Nvidia CEO said he thinks it’s “very likely” that AI will change everyone’s job, including his own.

This change will come in the form of AI assistants, which Huang believes “everyone” will have at some undefined point in the future.

“Every single company, every single job within the company, will have AIs that are assistants to them,” he said.

Huang pointed out that this change is already happening at Nvidia.

“Our software programmers now have AIs that help them program. All of our software engineers have AIs that help them debug software. We have AIs that help our chip designers design chips,” he said.

On that last point, Huang said two of the company’s latest GPU architectures, Hopper and Blackwell, “wouldn’t be possible” without AI.

“None of the work that we do would be possible anymore without generative AI,” he said before going into more examples of AI use cases for Nvidia employees.

“That’s increasingly the case with our IT department helping our employees be more productive. It's increasingly the case with our supply chain team, optimizing supply to be as efficient as possible, or our data center team using AI to manage the data centers to save as much energy as possible,” he added.

Huang: The Tech Needed For More Accurate AI Is Here

Huang said he believes the technologies necessary to make generative AI tools more controllable and accurate are already here.

The Nvidia CEO was responding to a question about how he thinks developers and researchers can tackle one of the biggest issues with AI models right now: their propensity to “hallucinate” and provide incorrect or misleading information.

[Related: HP Bets On Startup To Help Developers Create Trustworthy AI Models]

Huang pointed to three technological breakthroughs that are helping tackle hallucinations and related issues with generative AI models.

The first, he said, came from OpenAI’s game-changing ChatGPT chatbot: what Huang called “reinforcement learning human feedback,” more commonly known as reinforcement learning from human feedback, or RLHF. This involves the use of humans “to produce the right answers or the best answers, to align the AI on our core values or align our AI on the skills that we would like it to perform,” according to Huang.

“That's probably the extraordinary breakthrough that made it possible for them to open ChatGPT for everyone to use,” he said.
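The pairwise-preference idea at the heart of RLHF can be shown with a minimal, hypothetical sketch: human annotators mark which of two candidate responses they prefer, and a reward model is fitted to those judgments. Real systems train a neural reward model on real response text and then fine-tune the language model against it with reinforcement learning; here the response names, two-number "feature" scores, and preference pairs are all invented for illustration, and the reward model is just a linear function fitted with the Bradley–Terry pairwise loss.

```python
import math

# Toy feature vectors for candidate responses (hypothetical features,
# e.g. factuality and politeness scores from 0 to 1).
responses = {
    "curt":    [0.9, 0.1],
    "helpful": [0.9, 0.9],
    "vague":   [0.2, 0.8],
}

# Human preference pairs: (preferred, rejected) — the "human feedback".
preferences = [("helpful", "curt"), ("helpful", "vague"), ("curt", "vague")]

def reward(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Fit a linear reward model with the Bradley–Terry pairwise loss:
# maximize log sigmoid(reward(preferred) - reward(rejected)).
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for win, lose in preferences:
        diff = reward(w, responses[win]) - reward(w, responses[lose])
        grad_scale = 1.0 / (1.0 + math.exp(diff))  # gradient of -log sigmoid
        for i in range(len(w)):
            w[i] += lr * grad_scale * (responses[win][i] - responses[lose][i])

# The aligned system now serves the response the reward model scores highest.
best = max(responses, key=lambda name: reward(w, responses[name]))
print(best)
```

After fitting, the learned reward ranks responses the way the human annotators did, which is the sense in which the feedback "aligns" the system's outputs.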

Another breakthrough, according to Huang, is what’s called “guardrailing.”

This technology—which Nvidia makes available for large language models through its NeMo framework, alongside offerings from other companies—"causes the AI to focus its energy or focus its response in a particular domain so that it doesn't wander off and pontificate about all kinds of stuff that you've asked it about,” the CEO said.
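Conceptually, a guardrail is a checkpoint between the user and the model. The sketch below is not the NeMo Guardrails API; it is a deliberately simple, hypothetical keyword-based topical filter that shows the basic idea of keeping an assistant inside an allowed domain, with a stub standing in for the real model call. The allowed-topic list and function names are invented for illustration.

```python
# Hypothetical allowed domain for this assistant.
ALLOWED_TOPICS = {"gpu", "cuda", "driver", "graphics", "shader"}

def answer(prompt: str) -> str:
    # Stand-in for the real LLM call.
    return f"[model answer to: {prompt}]"

def guardrail(prompt: str) -> str:
    """Pass in-domain prompts to the model; refuse everything else."""
    words = set(prompt.lower().split())
    if words & ALLOWED_TOPICS:
        return answer(prompt)  # in-domain: let the model respond
    return "Sorry, I can only help with graphics questions."

print(guardrail("how do I update my GPU driver"))
print(guardrail("give me stock tips"))
```

Production guardrails classify intent with a model rather than keywords, but the control flow—check the prompt (and often the response) against a policy before letting it through—is the same.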

The third breakthrough that is making AI models more accurate and controllable is retrieval-augmented generation, or RAG for short, according to Huang.

RAG essentially allows an AI model to ground its answers in information it fetches from a vector database. These databases can hold many different types of information depending on the application of the AI model.

“It might be all of the articles that you've ever written, all of the papers you've ever written, and so now it becomes an AI that's authoritative [on you] and could be essentially a chatbot of you,” Huang said.

“So everything that I’ve ever written or ever said could be vectorized and then created into a semantic database and then before an AI responds, it would look at your prompt and it would search the appropriate content from that vector database,” he added.
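The retrieval step Huang describes can be sketched in a few lines. In this toy example a bag-of-words vector stands in for a real embedding model, and cosine similarity pulls the best-matching passage from a small "vectorized database" so it can be prepended to the prompt as grounding context. The documents and query are invented for illustration.

```python
import math
from collections import Counter

# Toy corpus standing in for "everything you've ever written".
documents = [
    "Nvidia builds GPUs for accelerated computing",
    "RAG retrieves documents from a vector database before answering",
    "Jensen Huang spoke at SIGGRAPH in Denver",
]

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "vectorized database": each document stored with its vector.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(prompt, k=1):
    q = embed(prompt)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Before the model answers, the top-matching passage is fetched and
# would be prepended to the prompt as context.
context = retrieve("where did Jensen Huang speak?")[0]
print(context)
```

A production RAG pipeline swaps the bag-of-words embedding for a learned one and the list scan for an approximate-nearest-neighbor index, but the fetch-then-generate flow is the same.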

Huang: ‘You Have To Will The Future Into Existence’

With Nvidia rising to become one of the world’s most valuable companies this year, there is no doubt it has been one of the top beneficiaries of the generative AI gold rush.

But while Nvidia had the right GPUs, network chips, systems and software in place to enable groundbreaking AI capabilities like ChatGPT and the whirlwind of development that followed, Huang said it only happened because Nvidia worked against the grain and pushed hard to alter the trajectory of computing.

“Things have never trended in our direction. You have to will the future into existence,” he said.

The main headwind Nvidia had to deal with was the tech world’s embrace of general-purpose computing via CPUs, which Huang called the “easy” route.

“You have the software. It runs twice as fast every single year. Don’t even think about it. And every five years, it’s 10 times faster. Every 10 years, it’s 100 times faster. What’s not to love?” he said.

The issue with general-purpose computing, according to Huang, was that “you could shrink a transistor, but you can’t shrink an atom and eventually, the CPU architecture ran its course.” For this reason, Huang has repeatedly declared the death of Moore’s Law, which the industry has traditionally relied on for continuous improvements in performance and efficiency.

The diminishing returns of CPUs are why Nvidia developed accelerated computing with its GPUs, Huang said, but the work it takes to enable a variety of applications on such high-performance processors is “super hard.”

For example, he said, Nvidia’s work to accelerate data processing workloads on GPUs with the company’s cuDF software library was “insanely hard.”

“What's inside those tables could be floating point numbers, 64-bit integers. They could be numbers and letters and all kinds of stuff. And so we have to figure out a way to go compute all that,” Huang said.
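The mixed-type problem Huang describes can be illustrated on CPU in plain Python — this is not cuDF code — with a toy columnar table where each column carries a dtype tag and compute is dispatched to a per-dtype kernel, which is the kind of type dispatch a GPU dataframe library has to solve for every operation. The column names, values, and kernels below are hypothetical.

```python
# A toy columnar table: column name -> (dtype, values).
table = {
    "price":    ("float64", [9.99, 19.99, 4.50]),
    "quantity": ("int64",   [3, 1, 12]),
    "sku":      ("string",  ["A-1", "B-7", "A-1"]),
}

def summarize(column):
    """Dispatch to a dtype-appropriate kernel, as a dataframe engine must."""
    dtype, values = column
    if dtype in ("float64", "int64"):
        return sum(values)        # numeric kernel: reduction
    if dtype == "string":
        return len(set(values))   # string kernel: distinct count
    raise TypeError(f"no kernel for dtype {dtype}")

for name, col in table.items():
    print(name, summarize(col))
```

On a GPU, each of these kernels additionally has to be written as a massively parallel routine — which is part of why Huang calls the work "insanely hard" — but the dispatch-by-dtype structure is the same.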

This reflects how Nvidia must learn how an application works every time the company wants to expand into another market, according to the CEO.

“That's the reason why I'm working on robotics. That's the reason why I'm working on autonomous vehicles: to understand the algorithms that's necessary to open up that market and to understand the computing layer underneath it, so that we can deliver extraordinary results,” Huang said.

Huang: GenAI Will Reduce Internet’s Energy Consumption

The high and ever-growing energy needs of AI applications were a big topic during Huang’s fireside chat, and the CEO pointed to a few ways Nvidia will help solve the issue, including a claim that generative AI will reduce the Internet’s energy consumption.

Huang said the traditional way of computing for the Internet is called “retrieval-based computing,” which involves fetching pre-existing data that sits in a data center.

“Everything is prerecorded. All the stories are written prerecorded. All the images are prerecorded. All the videos are prerecorded. And so everything is stored off in a data center somewhere prerecorded,” he said.

According to Huang, “generative AI is going to reduce the amount of energy on the Internet because instead of having to go retrieve the information, we can generate it right there on the spot because we understand the context.”

“We probably have some content already on the device, and we can generate the response so that you don't have to go retrieve it somewhere else,” he added.

Huang: ‘AI Doesn't Care Where It Goes To School’

Another way Huang thinks the tech industry will address AI’s high power needs is by building data centers in areas where there’s “excess energy.”

“Today's data centers are built near the power grid where society is, of course, because that's where we need it. In the future, you're going to see data centers being built in different parts of the world where there's excess energy,” he said.

Huang supported this belief with what appeared to be an adage of his own making—“AI doesn’t care where it goes to school”—referring to the idea that it doesn’t matter where a model is trained, because it will be used elsewhere.

Huang: ‘Accelerated Computing Reduces Energy Consumed’

Huang reiterated a claim he and Nvidia have made for years about the main appeal of accelerated computing: that GPU acceleration uses less energy than general-purpose CPUs to achieve the same level of performance or higher.

“Accelerated computing reduces [the] energy consumed, allows us to sustain computational demand without all of the power continuing to grow with it,” he said.

[Related: Nvidia CEO Explains How AI Chips Could Save Future Data Centers Lots Of Money]

The CEO said this is important because there is a growing number of tech companies developing AI models with cutting-edge capabilities that require computers to move and process a significant amount of data.

“Everyone is moving from CPUs to accelerated computing because they want to save energy. Accelerated computing helps you save so much energy: 20 times, 50 times and doing the same processing,” Huang said. “So the first thing that we have to do as a society is accelerate every application we can.”

The chief executive pointed to other kinds of workloads that can benefit from accelerated computing, like data processing and high-performance computing.

“Energy density is higher, but the amount of energy used is dramatically lower,” he said.