Nvidia CEO Jensen Huang: 10 Bold Statements From GTC 2022
Nvidia CEO Jensen Huang held an unscripted press conference after last week’s GTC where he frankly and openly answered questions about Nvidia’s Intel relationship, its failed attempt to acquire ARM, where Nvidia fits in terms of software and AI, and the very definition of Nvidia.
Jensen Huang Unplugged
At the Nvidia GPU Technology Conference held virtually in late March, Nvidia CEO Jensen Huang showed all the energy and passion he’s known for when discussing the latest in GPUs, AI technologies and trends, and the company’s hardware and software endeavors.
However, after all the scripted product introductions, technology presentations, and roadmap discussions, Huang swapped his normal well-polished stage presence for a small wooden chair and met with the press, including CRN, who were invited to ask him anything. And ask they did, on a wide range of topics that covered everything from Nvidia's failed attempt to acquire ARM to the promises and dangers of the kinds of AI technologies that Nvidia is building. Along the way, he also discussed the possibility of Nvidia working with Intel's upcoming foundry services, how Nvidia both partners and competes with Intel, and the importance to businesses of making it easy for employees to work from home or from wherever they feel comfortable.
Huang also told the press, and through them the entire IT industry, to not think of Nvidia as a GPU company, but as a computing platform company. “We’ve been working across that entire stack—we call it our four-layer stack—so that we can reinvent computer science and computing for the next decade where machine learning and artificial intelligence and data-driven approaches are going to be central to almost everything we do,” he said.
Huang, unplugged from the lights and cameras of a major industry event, offers an in-depth look at Nvidia, the IT industry, and where both are going. Click through the slideshow to learn more.
Nvidia Is A Computing Platform Company, Not A GPU Company
We’ve been working on several things over the last decade or so, and some really important things have happened. The way that computer science is done, the way that software is created, and what software can now do has fundamentally changed. The computers and the computer systems that we built in the past were developed largely for human software developers to write software. And the deployment of that software has evolved over the years to be augmented by artificial intelligence and machine learning.
And so [Nvidia has] been working on a whole stack of new computing that starts from the chips to the systems, to the entire data center, across all of its software, the engines, if you will, the middleware, the SQL engines and such—we call it Nvidia AI and Nvidia Omniverse—and the type of tools and frameworks necessary to build these applications. We’ve been working across that entire stack—we call it our four-layer stack—so that we can reinvent computer science and computing for the next decade where machine learning and artificial intelligence and data-driven approaches are going to be central to almost everything we do. And so yesterday we announced products across the full stack. And I think it’s quite evident that Nvidia has evolved from a GPU chip company into a computing platform company.
Remote Work: Comfortable With Being A Digital Company
Nvidia has moved faster in the last couple of years than potentially in its last 10 years combined. And it’s possible that we’re very comfortable being a digital company. It’s possible that we’re quite comfortable working remotely and collaboratively across the whole planet. It’s quite possible that we work better, actually, when we allow our employees to choose when they’re most productive and let them optimize, let mature people optimize their work environment and their work timeframe and their work style around what best fits for them and their families. And so it’s very possible that all of that is happening. It’s also absolutely true that it has forced us to put a lot more energy into the virtual work that we do. For example, the work around Omniverse really went into light speed in the last couple of years because we needed it. Instead of being able to come into our labs to work on our robots, or go onto the streets to test our cars, we had to test them in virtual worlds, in digital twins. And we found that we could iterate our software just as well in digital twins, if not better. And we could have millions of digital twin cars, not just a fleet of a hundred. And so there are a lot of things happening here. For one, it’s possible that the world doesn’t have to get dressed, commute and go to work. Maybe this hybrid work approach is quite good. But it’s definitely the case that forcing ourselves to be more digital than before, more virtual than before, has been a positive.
Diversifying To Meet Chip Supply Challenges
[When] we started to experience challenges in the supply chain, the first thing that we did was we started to create diversity and redundancy, which is the first principle of resilience. And we realized we needed more resilience going forward. And so over the last couple of years, we’ve built in diversity in the number of process nodes that we use. So we qualified a lot more process nodes. We’re in more fabs than ever. We’ve qualified more substrate vendors, more assembly partners, more system integration partners. We’ve second-sourced and qualified a whole bunch more external components. So we’ve expanded our supply base probably four-fold in the last two years. And so that’s one of the areas we’ve dedicated ourselves to; otherwise Nvidia’s torrid growth rate wouldn’t have been possible. And this year we’re going to grow even more.
And so I think that, when you’re confronted with adversity and challenges, it’s really important to go back to first principles and say to yourself: This is not likely going to be a once in a lifetime thing. What could we do to be more resilient? What can we do to diversify and expand our supply base?
Impact Of The Collapse Of Nvidia's Proposed ARM Acquisition
ARM is a one-of-a-kind asset. It’s a one-of-a-kind company. You’re not going to build another ARM. It took 35 years to build. You can build something else, but you won’t build that. Do we need it as a company to succeed? Absolutely not. Would it have been wonderful to own such a thing? The answer is absolutely yes. And the reason for that is because, as company owners, you want to own great assets. You want to own great platforms. Of course, I’m disappointed that we didn’t get it through. But the result is that we built wonderful relationships with the entire management team of ARM, and they understood the vision our company has for the future of high performance computing, and they’re excited about it.
I think that naturally caused the roadmap of ARM to be much more aggressive in the direction of high performance computing, where I needed them to be. And so I think that the net result of it is inspired leadership for the future of high performance computing and in the direction that’s important to Nvidia. It’s also great for them because that’s where the next opportunities are. Mobile devices are still going to be around, and they’re going to do great. However, the next big opportunities are in these AI factories and cloud AIs and edge AIs. This way of developing software is so transformative, and we’re just seeing the tip of the iceberg of it.
As it relates to our internal developments, we got even more excited about ARM, and you can see how much we doubled down on the number of ARM chips that we have. The robotics ARM chips, we’ve got several that are now in development. Orin is in production this month, and it’s a home run for us. And so we’re going to build a whole lot more in that direction. The reception of Grace has been incredible. I think we wanted to build a CPU that is very different than what’s available today and solves a very new type of problem that we know exists out in the world of AI. And so we went and built Grace for that. And we surprised people with the idea of a superchip: not a collection of chiplets, but a connection of chips into a superchip, and the benefits of doing that. You’re going to see a lot more in that direction. So I think our technology innovation around ARM is turbocharged as well.
On Being A Partner Or Competitor Of Intel As It Moves To Fabrication
Our strategy is to expand our supply base with diversity and redundancy at every single layer: at the chip layer, at the substrate layer, at the assembly layer, at the system layer, at every single layer. We’ve diversified the number of nodes. We’ve diversified the number of foundries. And Intel is an excellent partner of ours. We qualify their CPUs for all of our accelerated computing platforms. When we pioneer new systems, like we just did with the Omniverse computer—we partnered with them to build the first generation of the Omniverse computers—our engineers worked very closely together. They’re interested in us using their foundries. We’re very interested in exploring it.
Being a foundry at the caliber of a TSMC is not for the faint of heart. This is a change not just in process technology and investment of capital, but a change in culture from a product-oriented, technology-oriented company to a product, technology and service-oriented company. And it’s not a service-oriented company as in bringing you a cup of coffee, but a service-oriented company as in really mimicking and dancing with your operations. TSMC dances with the operations of, what, 300 companies worldwide. And our own operations are quite an orchestra. And yet they dance with us. And then there’s another orchestra that they dance with. And so the ability to dance with all of these different operations teams, supply chain teams, is not for the faint of heart, and TSMC does it just beautifully.
And so it’s management, it’s culture, it’s core values. And they did that on top of technology and products. And so I’m encouraged by the work that is being done at Intel. I think that this is a direction they have to go. And we’re interested in looking at the process technology. So I think our relationship with Intel is quite long. We work with them across a whole lot of different areas. On every single laptop, every single PC, every single server, every single supercomputer, we collaborate.
Staying Focused On Organic Growth
Nvidia is genetically organically grown. We prefer to build everything ourselves. Nvidia has so much technology, so much technical strength. And the world’s greatest computer scientists are working here. And so we are organically built as a natural way of doing things. However, every so often something amazing comes out. A long time ago, the first large acquisition we made, proportionally at the time, was 3dfx. That was because 3dfx was amazing. The computer graphics engineers there, the computer graphics scientists there, are still working here. Many of them built the latest-generation GPU. And so 3dfx was the first.
The next one that you could really highlight is Mellanox. It was a once-in-a-lifetime thing. You’re not going to build another Mellanox. The world is never going to have another Mellanox. It’s not going to happen again. And so it’s a company that has the combination of the incredible talent, the platform that they created, the ecosystem that they’d built over the years that’s integrated into the world, all of that. You’re not going to recreate that. And then the next one: You’re never going to build another ARM. ...
I think that I have great partnerships with the world’s computer industry. And there are very few Mellanoxes. There are very few ARMs. And so the good thing is that we are so good at organic growth. Look at all the new ideas we have every year. That’s our genetic approach.
Nvidia's Potential Relationship With Intel Foundry Services
With respect to Intel, foundry discussions take a long time. And it’s not just about desire, but we have to align technology. The business models have to be aligned. The capacity has to be aligned. The operations process and the nature of the two companies have to be aligned. It takes a fair amount of time, and it takes a lot of deep, deep discussion. You know, we’re not buying milk here. This is really about integration of supply chains. Our partnership with TSMC and particularly Samsung in the last several years is something that took years to cultivate. And so we are very open minded to considering Intel. And I’m delighted by the efforts that they’re making.
Off-the-shelf Vs. Custom Components
Our preference is to use off-the-shelf. If somebody else is wanting to do something for me, I can save my engineering to go do something else. And so on balance we try—well, not on balance—we always try not to do something that could be available somewhere else. And we encourage third parties, and we encourage our partners, to lean in the direction of building something that would be helpful to us so that we could just take it off the shelf. And I think over the last couple of years, ARM’s roadmap has steered toward higher and higher performance, which I just love. It’s fantastic. I can just use it.
What makes [Nvidia’s new ARM-based CPU] Grace special is the architecture of the system around Grace, and very importantly, the entire ecosystem above it. Grace is going to have predesigned systems that it can go into. And very importantly, Grace is going to have all of Nvidia’s software that it can instantly benefit from. When Mellanox came on board, we ported all of Nvidia’s software onto Mellanox, and the benefits and the value to the customers are in ’X’ factors. And so we’re going to do the same thing with Grace. And so I think, on balance, if we can take it off the shelf, we will, because they have the CPUs with the level of performance we need.
And ARM builds excellent CPUs. The fact of the matter is, the engineering team is world class. However, for anything that they prefer not to do, we are transparent with each other. And if we need to, we’ll build our own, and we’ll do whatever it takes to build amazing CPUs. We have a very significant CPU design team and world-class CPU architects. We can build whatever we need. So anyways, our posture is to let other people do it for us and differentiate upon that.
Intel As A Competitor
First of all, we’ve been working closely with Intel for years, sharing our roadmap with them long before we share it with the public. Intel has known our secrets for years. AMD has known our secrets for years. And we are sophisticated and mature enough to realize that we have to collaborate. We work closely with Broadcom. We work closely with Marvell. We work closely with Analog Devices, with TI (Texas Instruments). We work closely with everybody. Micron and Samsung, and, oh my goodness, the list goes on. We share roadmaps, of course under confidentiality and very selective channels of communication. And the industry has just learned how to work in that way. And so on the one hand we compete with many, many companies. We also partner deeply with them and rely on them. If not for the AMD CPUs that are in DGX, we wouldn’t be able to ship DGX. If not for Intel CPUs and all of the hyperscalers connected to our HGX, we wouldn’t be able to ship HGX. If not for Intel’s CPUs in our Omniverse computers that are coming up, we wouldn’t be able to do the digital twin simulations that rely so deeply on the single-thread performance that they’re really good at. There are a lot of things that we do that are commingled that way. And what I think makes Nvidia special is that over the years, we have built up a really diverse, robust and now quite expanded-scale supply base. And that allows us to continue to grow quite aggressively.
The second thing is that we are a company like one that’s never been built before, where we have core chip technology that is world-class at each one of these levels: world-class GPU technology, world-class networking technology, world-class CPU technology. And that’s layered on top of systems that are quite unique. And then blueprints are shared with the rest of the industry, right from inside this company, with software stacks that are engineered completely inside this company. And one of the most important engines in the world, Nvidia AI, is utilized by 25,000 enterprise companies in the world and every single cloud in the world.
I think that stack is quite unique to us. And so we’re quite comfortable [and] work with confidence in what we do. We’re very comfortable working with collaborators, including Intel and others. It turns out that paranoia is just paranoia. There’s nothing to be paranoid about. And it turns out people want to win, but nobody’s trying to ’get ya.’ And so we try to take the not-paranoid approach when we work with partners, and we try to rely on them, to let them know we’re relying on them, trust them, let them know that we trust them.
On The Potential For AI To Cause Damage
First of all, when we’re watching a movie, Iron Man is not real. And Yoda’s not real. And the light sabers are not real. They’re all deep fakes. And just about every single movie that we watch these days is really quite artificial. And yet we accept that because we know it’s not true. We know, because of the medium, that the information presented to us is not intended to be news. It’s intended to be entertainment. If we could apply this basic principle to all information, it would surely be great. I do recognize that unfortunately this crosses into the question of what is information versus mistruths and outright lies. And that line is difficult for a lot of people to separate.
I don’t know that I have the answer for this. I don’t know that artificial intelligence is necessarily going to aggravate the problem even further. However, just as AI has the ability to create fakes, AI has the ability to detect fakes. And we need to be much more rigorous in applying AI to detect fake news, detect fake facts, detect fake things. And so that’s an area that a lot of computer scientists are working on. And I’m optimistic that the tools they come up with will be more rigorous in helping us decrease the amount of misinformation that consumers are unfortunately consuming today with little discretion. And so I’m looking forward to that.