How Dell, Lenovo And Supermicro Are Adapting To Nvidia's Fast AI Chip Transitions

‘The entire industry has to learn a new muscle of going from what is a two-year server cycle to a one-year GPU cycle with very accelerated demand,’ says Dell Technologies executive Arun Narayanan of Nvidia’s annual AI chip releases that are making OEMs move faster than ever.

Lenovo executive Vlad Rozanovich thinks the annual cadence of Nvidia’s AI chip and platform releases is a “beautiful thing” because the quick transitions will allow data centers to boost rack-level AI performance by orders of magnitude every year.

But to service the tech industry’s computation needs that Nvidia founder, President and CEO Jensen Huang said are expected to grow substantially due to the rise of AI reasoning models, OEMs like Lenovo, Dell Technologies, Hewlett Packard Enterprise and Supermicro must move faster than ever before to ensure they can release competitive products based on Nvidia’s fast-growing portfolio of AI computing platforms.

[Related: 10 Big Nvidia GTC 2025 Announcements: Blackwell Ultra, Rubin Ultra, DGX Spark And More]

And with multiple generations of Nvidia platforms for customers to choose from, OEMs must also do their best to order the correct mix of platforms—which now include Hopper products from the past two years, the recently launched Blackwell products and Blackwell Ultra-based offerings coming later this year—to ensure they can fulfill demand without building up too much excess inventory.

“When you look at a traditional enterprise-type of engagement, Lenovo has always been known for the quality of our servers, and in the past, we’ve always had to do nearly a year of testing before we introduce a new product,” Rozanovich, senior vice president of Lenovo’s Infrastructure Solutions Group, told CRN at Nvidia’s GTC 2025 event last month.

“And with the cycle times that Nvidia is putting out there, it puts a lot of pressure on us. We always have to keep thinking 12 months ahead,” he added.

The challenge of managing Nvidia’s fast AI chip transitions was underscored less than two weeks before Nvidia unveiled Blackwell Ultra at GTC, along with the succeeding AI computing platforms due in 2026 and 2027. That’s when HPE said it had suffered from lower server margins, in part because of higher-than-normal AI server inventory caused by the rapid transition from Nvidia’s Hopper generation to Blackwell.

“What that means is when you look at the segmentation of AI, you have service providers and model builders that lead with time to market with the latest technologies [such as Blackwell],” HPE President and CEO Antonio Neri told CRN in early March. “In [the first quarter], we booked $1.6 billion of new AI orders, which was up double digits year over year, and we doubled the order bookings sequentially. But 17 percent of that was Blackwell.”

Neil Anderson, vice president and CTO of cloud, infrastructure and AI solutions at solution provider powerhouse World Wide Technology, told CRN that Nvidia’s fast AI chip transitions represent a big change in the way data center buying cycles have traditionally worked.

“The typical life cycle in the IT industry—if you look at [general-purpose] compute servers [and] networks—it’s been seven, eight [years]. Some customers have 10-year-old technology in their data center, and it’s running just fine,” said Anderson, whose St. Louis-based company ranked No. 7 in CRN’s 2024 Solution Provider 500 list.

“So our customers are used to that life cycle of a pretty lengthy buying pattern, but they’re very quickly realizing this is very different than that,” he said.

The new normal created by Nvidia is resulting in what Anderson calls a “conveyor belt of technology” that customers “are still trying to understand.”

“We tell customers, ‘Look, that conveyor belt, or the annual cadence, is not going to change. So you can’t sit on the sidelines and go, Well, I’m going to wait for the next one; I’m going to wait for it,’ because it’s always changing. You’ve got to get started on something. And so that’s a difficult conversation sometimes with customers,” he said.

Nvidia CEO Jokes About Rapid Obsolescence

When Nvidia announced in October 2023 that it was changing the release schedule for new GPUs and their associated hardware platforms, the company said it would move to a “one-year rhythm,” a departure from its previous strategy of releasing AI chips roughly every two years.

The Santa Clara, Calif.-based company said it would speed up AI data center product releases roughly a year after ChatGPT kicked off a massive wave of spending on AI development, which raised Nvidia’s profile as a critical supplier for such workloads.

At the time, Nvidia had been shipping its Hopper-based H100 GPU and associated platforms for several months, roughly two years after the company first made its Ampere-based A100 GPU products available.

The accelerated road map meant that the company planned to launch the Hopper-based H200, which increased the high-bandwidth memory capacity, in the first half of 2024, roughly a year after the H100’s debut. Nvidia’s next-generation platform, based around its Blackwell GPU, began shipping through partners this past January.

In late January, Nvidia made clear how well the fast product transitions were paying off when it announced that it generated $11 billion in revenue from Blackwell in the fourth quarter of its 2025 fiscal year, making it the company’s “fastest product ramp” yet.

During his keynote at GTC last month, Huang illustrated this in another way by showing that the top four U.S. cloud service providers—Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure—had bought 3.6 million Blackwell GPUs so far in 2025, in contrast to the 1.3 million Hopper GPUs they bought last year.

The reason customers are rushing to buy Blackwell, Huang said later in his keynote, is the significant increase in performance they can get in the same power envelope as a Hopper-based platform. He claimed that “in a reasoning model, Blackwell is 40 times the performance of Hopper, straight up.”

This led Huang to suggest that Blackwell’s major improvements would make Nvidia’s Hopper-based GPUs obsolete.

“I said before that when Blackwell starts shipping in volume, you couldn’t give Hoppers away, and this is what I mean,” said Huang, who jokingly called himself the “chief revenue destroyer” because of his move to publicly shun Hopper-based GPUs.

“There are circumstances where Hopper is fine. Not many,” he added.

Lenovo Has Become ‘Very Conscious’ About Purchasing

Despite Huang’s suggestion that Blackwell’s significant performance advances will make older GPUs less appealing, Rozanovich said Lenovo still sees demand for less-powerful products like the H200 and even the L40S, which has less AI horsepower than the Hopper GPUs but, unlike those products, features graphics and media acceleration capabilities for things like rendering and simulation workloads.

“From a Lenovo perspective, we’re probably selling more H200 and even H100, and even things like L40S. What we have seen is that L40S is really the perfect solution for OVX and Omniverse,” he said, referring to Nvidia’s OVX server platform that is optimized to run its Omniverse software for a variety of commercial 3-D applications.

At the same time, Lenovo has a “lot of customers” asking about its new products based on Nvidia’s Blackwell-based B200 and GB200 chips as well as the Blackwell Ultra-based GB300 that is set to debut later this year, according to Rozanovich.

These and future platforms have significantly higher power requirements than the Hopper-based systems that came before them: today’s GB200 NVL72 platform consumes 120 kilowatts, and the Rubin Ultra-based racks coming in 2027 are slated to crank that up to 600 kilowatts.
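To put those figures in perspective, here is a rough, back-of-the-envelope sketch of how rack power caps the number of racks a facility can host. The 10-megawatt facility budget and the 1.3 PUE (power usage effectiveness) value are assumptions for illustration only; the per-rack figures come from the reporting above.

```python
# Back-of-the-envelope data center capacity math (illustrative only).
# The 10 MW facility budget and the 1.3 PUE are assumed values, not
# numbers from Nvidia or any OEM; per-rack figures are from the article.

FACILITY_BUDGET_KW = 10_000  # assumed: 10 MW of total facility power
PUE = 1.3                    # assumed: power usage effectiveness (cooling, losses)

RACK_POWER_KW = {
    "GB200 NVL72 (today)": 120,
    "Rubin Ultra (2027)": 600,
}

for platform, rack_kw in RACK_POWER_KW.items():
    # Each rack's total draw is its IT load times PUE; divide the budget by that.
    racks = int(FACILITY_BUDGET_KW / (rack_kw * PUE))
    print(f"{platform}: ~{racks} racks within a 10 MW facility budget")
```

Run as written, the sketch shows roughly 64 GB200 NVL72 racks fitting where only about a dozen Rubin Ultra racks would, one way to see the planning problem customers now face.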

“People are now trying to comprehend the power increases, and they’re also trying to comprehend the cost increases,” Rozanovich said.

The Lenovo executive said Nvidia GPU supply is not nearly as constrained as it was a year ago, when lead times for such chips exceeded 50 weeks and Lenovo, like others, “had to make bets to make sure we had inventory levels.”

But the consequence of the improving availability of Nvidia’s products, according to Rozanovich, is that Lenovo has become “very conscious” about how much product it orders from the AI computing giant “because this inventory is expensive.”

“How do we make sure we have the right pipeline that’s robust enough to consume that inventory? How do we make sure we don’t overbuy, but how do we make sure we don’t underbuy either?” he said.

What helps Lenovo in this situation is how distributed the company’s business is across the world, “which gives us multiple outlets and opportunities,” according to Rozanovich.

As for how the Chinese tech giant helps its channel partners figure out where to focus, Rozanovich said it comes down to the use case, the technology and how long the hardware will be used before the customer plans to upgrade.

“What’s great about our channel partner relationships is we’re pretty transparent with them to say, ‘Here’s where we think the market is, here’s where we think it’s going to be by the time a customer is going to make their decision’ and [then] making sure that we’re aligning [on a] joint go-to-market,” he said.

Demand Signals For AI Servers Are ‘Much Better’ Than For Regular Servers

Dell executive Arun Narayanan said that while there are “peaks and bubbles of inventory,” his company has “done a pretty nice job” managing Nvidia’s fast AI chip transitions. But he admitted that the transitions represent a “new muscle” for OEMs and the like.

“The entire industry has to learn a new muscle of going from what is a two-year server cycle to a one-year GPU cycle with very accelerated demand,” said Narayanan, senior vice president of compute and networking portfolio management in Dell’s Infrastructure Solutions Group, in an interview with CRN at last month’s GTC event.

Up until Nvidia’s GTC event last month, the Dell executive said, the company was still seeing “strong demand” for last year’s H200 GPUs, particularly among smaller customers. But he acknowledged that the situation could change after Huang revealed a slate of new AI computing platforms coming out over the next three years.

“After today’s Jensen announcement, I don’t know, but as of yesterday morning, it was pretty strong,” Narayanan said.

At the same time, the Round Rock, Texas-based company has benefited from the unprecedented ramp of Nvidia’s newly released Blackwell platform, which is selling much faster for Dell than Hopper-based platforms, according to the executive.

“It’s incredibly fast. We’ve seen some mega, mega deals, and a lot of customer interest is ramping really, really fast,” he said. “I can tell you that even from a Dell perspective, the Hopper generation ramp was in the three- to six-month time frame. Here, it’s in the 30- to 60-day time frame. It’s massive. It’s very, very quick.”

As for the Blackwell Ultra platform coming later this year, Narayanan said he believes demand will be similar to that of Blackwell. He expects the ratio of Blackwell to Blackwell Ultra sales to be 4-to-1 in this year’s third quarter, then 3-to-2 in the fourth quarter (that is, Blackwell Ultra growing from 20 percent to 40 percent of the mix).

“And then it flips the equation as you get into next year,” he said.

In the face of these quick platform transitions, Dell is “very careful about how much purchasing we do,” according to Narayanan.

While this means the company must ask its customers to “tell us what the demand is well ahead of time,” he said, the bright side is that many of these customers are planning AI data centers far in advance, which helps Dell with forecasting.

“The demand signal in the market is much better than regular servers because customers have to plan for this. It’s not just the server. It’s the entire ecosystem. So we are beginning to see that a lot, and that helps us understand what the demand profile is going to be and place our bets with the right silicon vendors,” Narayanan said.

In terms of how Dell talks to channel partners about where to place their bets, the executive said Hopper-based platforms are their best bet for now when it comes to getting new data centers up and running as soon as possible. But it’s ultimately about timing demand to when the latest and greatest platform is available, he added.

“You need to think about timing [for when] your demand is and position it. That’s what our communication to our partners is. Our internal communication to sales teams is the same thing,” Narayanan said.

For Inference, The Platform Of Choice Varies

Supermicro executive Vik Malyala is on the same page when it comes to lining up customer demand with the latest platforms Nvidia has made available. But he said the right choice also depends on the customer’s workload, assuming the customer knows what it wants to do with it.

“From a customer’s point of view, the more they have an understanding of what they want to do with it, it’s going to help them and us,” said Malyala, who is senior vice president of technology and AI as well as president and managing director of the Europe, Middle East and Africa territories for the San Jose, Calif.-based server vendor.

For training models, “it absolutely makes no sense for people to look beyond what the [Blackwell-based] GB200 and the B200 [are] able to offer because [they’re] providing the highest-performing platform today,” he said at last month’s GTC event.

That conversation then shifts to the Blackwell Ultra-based GB300 and B300 platforms for customers that are looking at training needs in the second half of this year, he added.

For inference, however, the situation is more nuanced, according to Malyala.

If a customer’s workload relies on 16-bit floating-point (FP16) precision, an older GPU like the L40S, H100 or H200 NVL would suffice, he said. But if the customer can take advantage of the smaller 4-bit floating-point (FP4) format, which is supported only by Blackwell and future platforms, the B200 or GB200 would work better, provided the customer’s data center can support the power requirements.

“So now we are actually getting into conversations with customers on what workloads they are running,” Malyala said.
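As a rough illustration of the decision logic Malyala describes, the sketch below maps an inference workload’s lowest usable precision to candidate GPUs. The platform groupings, the 120-kilowatt threshold and the fallback option are assumptions for illustration, not guidance from Supermicro or Nvidia.

```python
# Illustrative sketch of precision-driven GPU platform selection.
# Platform lists, the power threshold and the fallback are assumptions.

FP16_PLATFORMS = ["L40S", "H100", "H200 NVL"]  # pre-Blackwell parts handle FP16
FP4_PLATFORMS = ["B200", "GB200"]              # FP4 support arrived with Blackwell

def recommend_platforms(precision: str, rack_power_budget_kw: float) -> list[str]:
    """Suggest candidate GPU platforms for an inference workload.

    precision: "fp16" or "fp4", the lowest precision the workload tolerates.
    rack_power_budget_kw: power the customer's data center can deliver per rack.
    """
    if precision == "fp4":
        # FP4 requires Blackwell-class silicon, but a GB200 NVL72 rack draws
        # roughly 120 kW, so the facility's power budget becomes the gate.
        if rack_power_budget_kw >= 120:
            return FP4_PLATFORMS
        return ["B200 (lower-density, air-cooled configs)"]  # hypothetical fallback
    # FP16 workloads run fine on older, more readily available GPUs.
    return FP16_PLATFORMS

print(recommend_platforms("fp4", 150))   # ['B200', 'GB200']
print(recommend_platforms("fp16", 40))   # ['L40S', 'H100', 'H200 NVL']
```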

The Supermicro executive said it’s very helpful for Nvidia to provide a detailed road map for the AI computing platforms it plans to release over the next few years because it helps everyone from vendors to customers plan their investments accordingly.

“These things do take a lot more time to build, so I’m glad that was presented,” he said.

Nvidia Wants Partners To Ramp ‘As We Continue To Innovate’

Dion Harris, senior director of high-performance computing and AI factory solutions go-to-market at Nvidia, said the company has become “more forthcoming and transparent” with not just its product road map but also the demand signals it’s seeing for each product.

“I’ve been in some meetings where we’re working with [data center partners] like Schneider Electric, Vertiv and others where we’re literally having co-engineering design meetings to say, ‘When we build this [next-generation, Rubin Ultra-based] Kyber [platform], what do your products and systems need to look like?’” he said at last month’s GTC event.

“But we’re taking a step further, saying this is the demand we are seeing for these products that are hitting in this time frame,” he added.

Huang, Nvidia’s CEO, “usually encourages people, ‘Don’t buy all that you can now. Buy some this year because we’re going to have a new architecture that comes out the following year, and it’s going to be even better,’” according to Harris.

“Allow yourself to ramp and grow as we continue to innovate,” he added.

Ian Buck, vice president of hyperscale and high-performance computing at Nvidia, said the announcements Huang made at GTC about Blackwell Ultra and future platforms showed that Nvidia is “becoming an infrastructure company.”

This means the AI computing giant has a lot more responsibility for supporting and preparing its partners, which now include AI model developers.

“For all of those foundational model builders, for the future AI that’s coming, they need to know what is coming on our road map to know what to go build, how many billions and trillions of parameters [will be supported], and what is going to be the art of the possible when they get there, so that they are informed to create the demand that’s going to meet the supply [Nvidia is creating],” he said.
