‘Nvidia AI Computing By HPE’ And HPE Private Cloud AI: Details Of The Blockbuster AI Partnership
HPE has launched a comprehensive generative AI solutions portfolio co-developed and co-branded with AI kingpin Nvidia, including a breakthrough HPE Private Cloud AI offering.
Hewlett Packard Enterprise on Tuesday launched a comprehensive generative AI solutions portfolio co-developed and co-branded with AI kingpin Nvidia, including a breakthrough HPE Private Cloud AI offering.
The new ‘Nvidia AI Computing By HPE’ portfolio, unveiled at HPE Discover, provides a complete co-developed generative AI compute, storage and networking portfolio to accelerate AI adoption. It also extends into the channel with a joint enablement and training program and combined business market development funds (MDF) for partners.
[RELATED: Partners See Nvidia AI Computing By HPE As A GenAI Infrastructure ‘Game-Changer’]
The first-ever co-developed, co-branded and joint go-to-market model for HPE aims to go much deeper with GenAI compute, storage and networking infrastructure integration with Nvidia than rivals Dell Technologies and Cisco.
The announcement, which is being showcased in the first-ever keynote at the dome-shaped Sphere arena in Las Vegas with HPE CEO Antonio Neri and Nvidia CEO Jensen Huang, aims to establish a new benchmark for delivering generative AI solutions to enterprise customers.
The biggest game changer in the expansion of the decades-long partnership between the two companies is an HPE Private Cloud AI solution developed with Nvidia that is designed to be up and running with just three clicks.
“Nvidia and HPE didn't just want to bring Nvidia and HPE technologies together into a solution but we wanted to deliver an experience that would truly accelerate time to value for an enterprise adopting generative AI,” said HPE Executive Vice President of Hybrid Cloud and CTO Fidelma Russo in a press briefing. “And this is why we have engineered the industry's first turnkey private cloud for AI.”
Russo said the breakthrough new offering – which is built on top of HPE’s GreenLake cloud service – is ready to run out of the box. “You plug it in, you connect to the GreenLake cloud and three clicks later – boom – your data science and IT operations teams are up and running the Nvidia software stack,” she said.
The HPE-Nvidia collaboration on HPE Private Cloud AI ensures enterprise customers are “always guaranteed” the latest Nvidia AI Enterprise and NIM software with HPE software, said Russo.
“So day zero and day one is fast and simple and day two is built to enhance productivity of every team deploying AI from pilots to production,” said Russo.
“We do not believe that there are similar offers in the marketplace,” she said. “We are confident that when you actually see this…you will see that this is a totally integrated system. You need very little services to get started and you are ready to deploy in minutes.”
Manuvir Das, vice president of enterprise computing for Nvidia, said the jointly developed HPE Private Cloud AI is being tuned and optimized “every night” at the internal Nvidia NIM AI factory.
That ensures, Das said, that enterprise customers are getting the “latest optimizations” with the Nvidia AI Computing By HPE portfolio.
“You know you are getting the latest optimizations and you are at the state of the art for executing models without really worrying about it at all,” Das said.
The HPE Private Cloud AI turnkey solution will be available for partners to start quoting to customers as of July 8 and will begin shipping in September, backed by a full set of combined enablement tools and AI competencies from the two companies, said HPE Vice President of Worldwide Channel and Partner Ecosystem Simon Ewington.
Ultimately, HPE is providing a path from AI “opportunity to reality” for partners with the “deepest partnership with Nvidia we have seen anywhere in the industry,” said Ewington. “Other vendors have got reference architectures. We have come with a turnkey solution that is available to quote from July 8 and will be shipping in September.”
HPE Private Cloud AI With Nvidia – Four To Five Times More Cost Effective Than Public Cloud
HPE Private Cloud AI provides enterprise customers a generative AI solution that keeps enterprise data private on premises and is “four to five times more cost effective for inferencing workloads” than public cloud, said Russo.
Hewlett Packard Enterprise Executive Vice President Neil MacDonald said that HPE Private Cloud AI will help enterprise customers avoid the crushing cost of running AI in the public cloud.
“The target customers for this (HPE Private Cloud AI) are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets,” said MacDonald, the general manager of Compute, HPC (High Performance Compute) and AI for HPE.
MacDonald stressed that HPE Private Cloud AI is not a reference architecture that places the “burden” on customers to assemble their AI infrastructure by “cobbling together piece parts,” whether that is GPUs, software or connectivity. “HPE and Nvidia have done the hard work for customers by co-developing a turnkey AI private cloud that is up and running in three clicks,” he said.
Russo said the HPE Private Cloud AI makes the adoption of generative AI “easy and simple” for the enterprise.
“With private cloud for AI, productivity is greatly enhanced by up to 90 percent for data engineers, data scientists and data operations teams,” she said.
Key to making the data journey for enterprise customers as “easy as possible” is an embedded data lakehouse that provides easy access to existing structured and unstructured data stores, whether they are on-prem or in the public cloud, said Russo.
HPE is also providing storage for customers within the private cloud along with the tools and dashboards to “manage, secure and provide guard rails around your data models and your infrastructure, which provides built-in compliance,” said Russo.
Full HPE And Nvidia AI Channel Enablement Including Market Development Funds
The HPE and Nvidia partnership includes a comprehensive AI enablement program with AI tools, certification and training from both companies including joint market development funds (MDF).
“What we are now going to do is make sure that we are the natural technology partner for Nvidia-based solutions and we are doing that with our Private Cloud for AI solution and the enablement we’re driving in the channel,” said Ewington. “Ultimately we want to upskill our channel on AI.”
Jesse Chavez, vice president of worldwide channel partners program and operations for HPE, said the program even includes a joint MDF pool of funds to help partners win deals.
“Nvidia will recognize the fact that HPE is selling as part of the integrated solution a certain revenue stream from an overall Nvidia standpoint which will create an MDF pool,” said Chavez. “We are working together (on that MDF pool).”
Partners can submit MDF plans jointly to HPE and Nvidia and the two companies will decide which MDF proposals to go forward with, said Chavez.
HPE has already started doing joint business plans with partners selected as part of an AI pilot program, said Chavez.
“There are mutual investments from Nvidia, HPE and the partner also has to make an investment because this is a tri-funding model,” said Chavez.
The HPE-Nvidia channel enablement includes a new HPE AI Solutions Competency that leverages the latest Nvidia certificate programs. The new competency is aimed at providing a comprehensive enablement program for partners to recommend, deploy and manage a complete AI software and hardware solution stack.
HPE is also building out a sustainability competency that includes a collaboration with Nvidia on how to leverage a comprehensive portfolio of sustainable and power efficient technologies such as direct liquid cooling.
Further, HPE has a new compute competency aimed at helping partners recommend the “optimal Nvidia-certified HPE ProLiant” GenAI inference server to meet the customer’s inference price performance requirements.
HPE is also updating its Storage and Data Services Competency to include support for HPE GreenLake for File Storage which is now certified for Nvidia DGX BasePOD and Nvidia OVX systems.
Finally, in the high performance compute category, HPE is releasing a new High Performance Computing for Enterprise Competency that is aimed at enabling partners to architect and integrate the HPE Cray supercomputing portfolio, including Nvidia-certified Cray systems.
Ultimately, Ewington said the joint channel enablement is a “huge” breakthrough that will pay big dividends for partners.
“It’s a very, very compelling fully integrated enablement package that we have worked on with Nvidia and a turnkey (HPE Private Cloud AI) that is available for the partner community to quote in a couple of weeks’ time,” he said. “It’s pretty compelling and pretty groundbreaking.”
HPE Vice President of Worldwide GreenLake Partner and Service Provider Sales Uli Seibold said HPE Private Cloud AI will save partners hundreds of hours – potentially even thousands of hours – of engineering work by delivering the turnkey solution to customers.
“Now (with HPE Private Cloud AI) it's fully implemented and fully integrated,” he said. “Now they can start immediately integrating the data and implementing the right ISV application on top of it.”
Nvidia, HPE ‘Joint Go-To-Market Strategy’ Including Global Systems Integrators
The HPE and Nvidia partnership includes a joint go-to-market strategy that includes HPE global account managers and global systems integrators Deloitte, HCLTech, Infosys, TCS and Wipro.
Seibold said the GSI partnerships ensure “the same quality across the globe” for data integration with critical ISV applications. “That is why we selected those big GSIs,” he said. “This will help us a lot to have a data integration layer from the first day on a global scale.”
Seibold said the Nvidia AI Computing by HPE offering including the HPE Private Cloud AI expands the total addressable market for the Powered by HPE GreenLake service provider business alone by $1.5 billion to $2 billion.
That does not even include the solution providers targeting enterprise customers who will now be able to tackle the AI opportunity with the turnkey HPE Private Cloud AI, said Seibold.
“Today it is a hyperscaler and a service provider game,” said Seibold. “Our typical value-added reseller has not been heavily involved in the AI opportunity. This was not their core focus. Now they can use this fully integrated turnkey solution (from HPE and Nvidia) and start immediately (building AI solutions for their customers).”
Nvidia AI Computing By HPE - The HPE Private Cloud AI Portfolio
The HPE Private Cloud AI portfolio includes four configurations that run from small to extra large with a modular design that can be expanded, said Russo.
The entry-level configuration – which is designed for inferencing – includes four or eight Nvidia L40S GPUs, 30 TB to 248 TB of storage and 100 GbE Nvidia networking, with power of up to 8 kW a rack.
The medium configuration includes eight or 16 Nvidia L40S GPUs, 109 TB to 390 TB of storage and 200 GbE Nvidia networking, with power of up to 17.7 kW.
The large configuration includes 16 or 32 Nvidia H100 NVL GPUs, 670 TB to 1,088 TB of storage and 400 GbE Nvidia networking, with power of up to 25 kW x 2.
The extra-large configuration includes 12 or 24 Nvidia GH200 NVL2 GPUs, 670 TB to 1,088 TB of storage with 800 GbE Nvidia networking, and power of up to 25 kW x 2.
“You can start with a few small model inferencing pilots and you can scale to multiple use cases with higher throughputs and you can have RAG or LLM fine-tuning in one solution,” said Russo.
HPE is committed to introducing the latest Nvidia models for the HPE Private Cloud AI portfolio.
Three New Nvidia AI Computing Systems – ProLiant And Cray
HPE introduced three new systems developed in collaboration with Nvidia for AI solutions: the HPE ProLiant Compute DL384 Gen12, the HPE ProLiant Compute DL380a Gen12 and the HPE Cray XD670.
“The foundation of all of this work with Nvidia are the systems that underpin all of those capabilities,” said MacDonald.
The new HPE ProLiant Compute DL384 Gen12 is designed for next-level performance for memory-intensive AI solutions. That system – which includes the Nvidia GH200 NVL2 – is designed for LLM customers fine-tuning large models or doing RAG, said HPE. It is expected to be generally available in the fall.
The new HPE ProLiant Compute DL380a Gen12 features up to eight Nvidia H200 NVL GPUs and is designed for “ultra scalable GPU acceleration for enterprise AI,” said HPE. That model is aimed at LLM customers that need flexibility to scale their GenAI workloads, said HPE. It is expected to be generally available in the fall.
Finally, the HPE Cray XD670 with eight Nvidia H200 Tensor Core GPUs is aimed at enterprises doing high-performance large AI model training and tuning. That supercomputing system is aimed at LLM builders and AI service providers. It is expected to be generally available in the summer.
HPE will be “time to market” to support the Nvidia Blackwell, GB200 NVL72 and NVL2, Rubin and Vera architectures, with select models available with direct liquid cooling, said MacDonald.
“As the energy and thermal density of these (GPU) accelerator innovations continue to increase, the decades of experience that HPE has built in direct liquid cooling are incredibly relevant for the future of AI platforms in the enterprise,” he said.
As HPE continues to evolve its Nvidia AI Computing by HPE portfolio, HPE’s direct liquid cooling will be critical in delivering the “highest levels of efficiency,” said MacDonald.
A Choice Of Pay-Per-Use GreenLake IaaS Or A Capital Expenditure Model
The HPE Private Cloud AI solution can be purchased under a pay-per-use consumption model or as a capital expenditure.
The HPE Private Cloud AI solution delivered as an IaaS (Infrastructure as a Service) model pays partners up to a 20 percent commission.
Of course, the biggest margin is in the consulting and services that come with building AI solutions for enterprise customers.
Seibold, for his part, sees the Nvidia AI Computing by HPE model opening up a new AI-as-a-service market with a whopping 40 percent margin.
“With AI we are entering a new world,” he said. “No one can say whether (their AI solution) will double or triple in a year. So the customer needs to start small. We provide an as-a-service model so the customer does not need to invest hundreds of millions of dollars without knowing if that is the end game. So customers can start small with HPE GreenLake and pay for what they use. It is a fully integrated and managed platform. This is entering a new game!”
HPE OpsRamp AI Actionable Insights For The End To End Nvidia Accelerated Computing Stack
As part of the Nvidia partnership, HPE is unveiling what Russo singled out as a “game changing” Operations Copilot that transforms data from the Nvidia stack into actionable insights with a conversational assistant.
“This innovation really turbocharges productivity and efficiency for the IT operations team in ways that people have only dreamed of,” said Russo.
The OpsRamp AI operations management software now provides actionable insights for the end-to-end accelerated computing stack, including Nvidia NIM and AI software, Nvidia Tensor Core GPUs and AI clusters, along with Nvidia Quantum InfiniBand and Spectrum Ethernet switches.
With OpsRamp on HPE GreenLake, partners and IT managers can now “gain valuable insights, identify anomalies and monitor their AI infrastructure across hybrid and multivendor environments through a conversational assistant which is a capability unparalleled in the market,” said Russo.
Besides utilizing Nvidia’s accelerated computing platform to analyze large datasets for insights with a conversational assistant, OpsRamp copilot will also be integrated with security powerhouse CrowdStrike’s APIs so customers can see a unified service map view of endpoint security across their entire infrastructure and applications.