CEO Antonio Neri On Nvidia Integration Differences Between HPE’s Private Cloud AI And Dell’s AI Factory

“Depending on the use case, you buy small, medium, large and extra-large versus the Dell AI factory, which is a bunch of widgets that you have to bring together,” said Neri in an interview with CRN at HPE Discover. “And then we actually build this in our factory right away. So we ship it exactly as you decide.”

Hewlett Packard Enterprise CEO Antonio Neri told CRN that the new Private Cloud AI HPE is bringing to market with Nvidia is a “fully integrated” private cloud that can be deployed in just three clicks and 24 seconds versus “the bunch of widgets” that make up the Dell Technologies AI Factory with Nvidia.

“Depending on the use case, you buy small, medium, large and extra-large versus the Dell AI factory, which is a bunch of widgets that you have to bring together,” said Neri in an interview with CRN at HPE Discover. “And then we actually build this in our factory right away. So we ship it exactly as you decide. And the only thing we need to do is hook up the power and the network, and you are good to go.”

During HPE Discover, in the first-ever keynote held at the Sphere in Las Vegas, Neri and Nvidia CEO Jensen Huang introduced the new Nvidia AI Computing by HPE portfolio, including HPE Private Cloud AI, before 15,000 customers and partners.

“This is really the big deal that we have integrated everything into a really easy-to-use Private Cloud AI solution,” said Huang during the keynote. “All of the complexity that goes into all the technology has been provided to you, hidden from you, and it is helping you operate it over time. Remember, you are not buying a computer that you are using. You are buying a service. And you can own a service that you are operating.”

Huang’s comments come just one month after he joined Dell Technologies CEO Michael Dell on stage at Dell Technologies World in Las Vegas to unveil an expansion of the Dell AI Factory with Nvidia portfolio to include new server, edge and workstation offerings and services.

Neri called the Dell AI Factory with Nvidia an AI framework, in contrast to the “fully integrated” Nvidia AI Computing by HPE private cloud. “They offer some of the solutions, but you have to bring it all together,” he said. “You as a customer have to decide what you want, which ingredients you want, and then stitch it all together and deploy it yourself.”

HPE and its partners with the HPE GreenLake-powered Private Cloud AI are providing a private cloud service that is fully integrated and ready to deploy, said Neri. “That is why we had to work with Nvidia together to drive true deep integration of their stack with our stack,” he said. “Otherwise, it's just another reference architecture of sorts.”

For channel partners, the HPE Private Cloud AI reduces the sales cycle “dramatically” to bring AI solutions to customers. What’s more, Neri said, it means that partners are selling compute, storage, networking and software all together with their own services. “Therefore, they get more margin, but do it in a much shorter sales cycle,” he said.

What is going to be the HPE Private Cloud AI impact on partners and customers in the enterprise market?

From a customer point of view, in the enterprise this removes the barriers to entry into AI by focusing on the experience and less on the technology.

Although it comes with the best technology, obviously, it is accelerating deployment. And you know, when I think about that, obviously the enterprise customers have been worrying about so many things [with AI]: skills, capabilities and the complexity of the technology.

This is all about deploying faster so they can drive a generative AI Industrial Revolution inside their business. And this is a very simple offer. It is one product number, three clicks, 24 seconds to deploy. It is as simple as that. And it is tuned for the specific AI workloads, whether it's inferencing, [Retrieval-Augmented Generation], fine-tuning or small language model training.

That is why we had to work with Nvidia together to drive true deep integration of their stack with our stack. Otherwise, it's just another reference architecture of sorts. This is one product. It is an HPE Private Cloud AI. But it is both products fully integrated. It is not a meet-in-the-channel offering, none of that. And so what that means for the channel is that whatever customer you pick, you can now go sell one product and get the accelerated sales motion, which obviously today has been a very long sales cycle.

Now, if they know what the use case is, it is just one offer. It’s pretty quick. The sales cycle gets reduced dramatically, which means that they're selling compute, storage, networking, software, and they can attach their own services on top of it. And therefore they get more margin, but do it in a much shorter sales cycle.

And so the partners now become way more relevant than ever before, because on the sovereign clouds or with the hyperscalers, they didn’t have a play. But their bread and butter is the enterprise. That is why I have always said our focus is the experience and how we make the partners more relevant with our innovation. This is amazing innovation for them to grow.

Is there any way to size the cost and time to value for the customers?

It’s going to be pretty significant. We're going to learn as we go through this. But many of the partners, as you know, have already sold GreenLake. In fact, it is one of the fastest routes to market that we have. Partners should go back to those customers [that have bought GreenLake] and sell [HPE Private Cloud] AI. Right away. Right away. They don't have to do anything. Just tell the customer: ‘Tell me the use case, and I will help you deploy it faster.’

You can see it on the [Discover] show floor. The solution is so simple. It is absolutely so simple. So the IT Operations time to deploy is 24 seconds. I mean, that's how fast it is.

From a data scientist perspective, there is a huge productivity benefit because they have all the tools right there. They have the large language model. They have the NIM, which is the conversational [Generative AI service] for the data. The way Jensen [Huang] describes it is that now you can talk to your data and get insights from that data.

So the container environment is there. Everything is there. The data pipeline automation is there. Everything is in one place.

So it is productivity at an IT level. It is productivity at the developer level. And it is a faster return for the enterprise by deploying it at the speed of the business. And for the channel, the sales cycle is shorter.

What is the difference between the Dell AI Factory and the HPE Private Cloud AI?

Dell has what I call a framework. They offer some of the solutions, but you have to bring it all together. You as a customer have to decide what you want, which ingredients you want, and then stitch it all together and deploy it yourself.

We, with our service partners and our channel partners, give it to you all ready, fully integrated. It is just click and deploy. And that is because of our HPE GreenLake platform. Because we designed this for a cloud-native architecture from a console operating model perspective, using the cloud experience, everything else was fully co-engineered and fully integrated into that rack.

Depending on the use case, you buy small, medium, large and extra-large versus the Dell AI factory, which is a bunch of widgets that you have to bring together. And then we actually build this in our factory right away. So we ship it exactly as you decide. And the only thing we need to do is hook up the power and the network, and you are good to go.

What are you seeing in terms of customer adoption of private cloud, and what is your vision for the future of private cloud?

No. 1 is customers have become way more sophisticated and understand that specific workloads that are data-intensive are way more cost-efficient to run on-premises.

There is no need to move data around. You reduce the risk on cyber, and then, honestly, the cost of egressing data back and forth goes away. By default, it is already significantly cheaper. You don't have to move data. It is already there.

Second is that because of the challenge we see today in the virtualization aspect of this, customers now are saying, ‘Do I move this stuff to the public cloud or do I keep it on-prem and move it to containers or bare metal or an alternative hypervisor?’

That is why another big important announcement is what [HPE Executive Vice President and CTO] Fidelma [Russo] said today, which is we are going to offer an alternative virtualization hypervisor layer. But the trick is, No. 1, we have inside GreenLake an orchestration layer, which means you can go at the pace and speed you want, meaning I can manage your VMware environment as I do today. You can start moving to other run times if you want to do that. You can move some of your workloads to the open-source KVM, which has been hardened for the enterprise with the features you need and fully integrated with cluster management.

It is one thing to have a hypervisor, but you need to be able to orchestrate across and orchestrate down. We have orchestration across because that is the orchestration layer inside GreenLake, and now we integrated the KVM, orchestrating down to the cluster management while you manage your VMware.

At the same time, we manage bare metal and containers in addition to the run times, whether it is OpenShift or the like. So we are giving customers control. We give them flexibility, and we give them choice. Those are the big three elements.

And as you go to AI, knowing that you need to manage the public cloud instance where your data sometimes is, we give you the hybrid experience because, in the end, you need a hybrid strategy to deploy AI. You can’t just do it in a monolithic approach because AI is a distributed workload and data lives everywhere. But you want to control the data.

That is why when I think about our strategy, you may have a private cloud today for virtualization, and we can move that to open-source KVM. You have containers. You may have bare metal. Now we added one more thing: it is called AI. It is a private cloud AI stack. That is it. But all of that is managed consistently through the HPE GreenLake experience. It doesn’t matter what it is. You just deploy at the speed of the business. And this AI thing [HPE Private Cloud AI] obviously was designed for simplicity.

Is HPE going to offer more private clouds?

Yes. We have optimized for workloads like VDI, SAP, databases and the like. So what that means is it is going to be way more standardized for customers. But for partners what it means is that they still sell compute, storage and networking, but it is all fully integrated. Instead of selling a server here and storage there and this and that, you now sell the whole thing integrated together, which means more margins.

How important are those SAP and VDI private clouds you are bringing to market?

As AI is a workload, VDI is a workload and SAP is a workload. The configuration and optimization is slightly different, but in the end it is one product. So SAP is obviously a memory-driven workload. You need a lot of memory versus AI, where you may need more accelerated computing.

With VDI you have other characteristics that need to be optimized. What we do is optimize all of that for you in small, medium or large recipes. So you pick your size. So if you need VDI for 400 users, then you look at what type of VDI. Is it low-latency VDI like training, for example, where you need graphical user interfaces, versus others that may be just a standard console to do data entry? For full visualization, you may need GPUs to be integrated too. So we optimize all of that.

What we see are more private cloud instances. And for the partner it is great because they sell to the workload, which makes them more relevant, and they sell the whole stack, which means they can attach more services.

Will we see dozens of private clouds coming from HPE?

I think you are going to have as many as you want, but obviously you want to make sure you focus on the key workloads. There are hundreds of workloads. When we did our analysis, there were 17 to 20 that make a big difference to [customers].

You made a deal with CrowdStrike and integrated that into HPE Private Cloud AI. What does that mean for partners and customers?

We integrated their APIs into the Private Cloud stack for AI, which means that all these endpoints where we bring the data to the private cloud are monitored constantly. So you have this map [on which] you can see where things are coming from.

Ultimately, as I said on stage, it is not just the simplicity of the stack or the simplicity of HPE GreenLake to deploy the stack. It is the observability aspect of it. Today we observe servers. We observe storage. We observe networking. Now we observe the full stack. And in doing so [HPE] OpsRamp plays a big role. Now we can observe the AI stack with Nvidia GPUs. But with that, guardrails and security have to come all together, so we integrated with the CrowdStrike APIs so we can also monitor all these endpoints.

What is the difference between what HPE does with OpsRamp versus what Cisco does with its observability?

They observe the network, but we observe everything. So we observe the network, the storage, the compute and everything in between.

Fundamentally, we have the monitoring aspect and the telemetry collection aspect, and it was CPU-centric. Now it is GPU-centric too. So we can observe multivendor, meaning Dell, Cisco, Lenovo, NetApp and Pure [Storage].

So if you have a multivendor IT environment, we can monitor all of that through our HPE GreenLake observability, which is powered by OpsRamp.

So you may end up in an environment where you have HPE GreenLake deploying HPE infrastructure, but now we can observe the rest of the infrastructure that comes with it, whatever widgets you bought in the past.

Now we extend that to GPUs. But in addition, we integrate it through our language model in [OpsRamp] Copilot so that we can have a conversational engagement with our Copilot based on the data, which is the telemetry we collect. So you can ask questions like, ‘What is going on with my compute farm? What is going on with my storage farm? Where are the risks?’

So those are the two big [OpsRamp] things—the extension to the Nvidia stack and the extension with the use of Generative AI with large language models so we have conversational elements with the data.

The third piece is that OpsRamp also feeds that data into our HPE GreenLake sustainability dashboard, which basically gives you the ability to visualize where your carbon footprint is, and you can automate as many decisions as you want.

AI will play an even bigger role there because, as we did with Relimetrics [a Germany-based provider of AI manufacturing automation] with a fail or pass [AI quality control process using HPE and Nvidia technology], you can, say, move a workload somewhere else if it goes above a certain cost or carbon footprint. You can automate that, versus today, where most customers want to be in the process rather than eliminating the human. But I would say we bring the AI to the human, not the human to the AI.

HPE has the full compute, networking and storage stack for AI. What is the difference versus Dell and Cisco in terms of AI infrastructure for the future?

I think HPE has the broadest portfolio in the industry to deliver an integrated architecture that is specific to both hybrid cloud-native and AI-native environments. The network is the core foundation to do that.

Even when you think about the HPE Private Cloud AI announcement today, you have the Nvidia switches and the NVLink, and then you have the cable out. That cable needs to be connected to your data center networking. Today there are only three vendors [there]: Cisco, Arista and Juniper [Networks].

We have integrated all of this inside GreenLake. When we finish the acquisition of Juniper, we will offer our customers one console for everything. Next to that there may be another private cloud or a bunch of servers and storage that are connected to the network.

So we give customers an integrated architecture and one way to manage their entire operations. As this goes through Ethernet, we can actually route traffic the appropriate way, manage telemetry and all of that.

That is why this [HPE] Juniper acquisition [which is due to be closed at the end of this year or early next year] is important to help collapse the stack and to be able to manage operations in a much more efficient way. By the way, it is an AI-driven way.

Can you talk about the channel enablement strategy and the excitement that Jensen Huang spoke about with regard to the HPE channel ecosystem, including the global system integrators that have signed on for HPE Private Cloud AI?

I think Nvidia’s limitation in the enterprise is the reach. It is one thing to reach tens of customers, which you can do direct. But once you go to hundreds of thousands and millions, you need the channel.

More than 70 percent of our business at HPE goes through the channel, in some cases over 90 percent depending on which [geographic region] and country you are talking about. That coverage is golden.

So we both committed to sales-enable the channel, my direct sales force, his very, very small direct sales force, and show up as one company. On one end you have the global SIs, which are the five we announced: HCLTech, Infosys, Wipro, TCS and Deloitte. For us, they play more of a consultative role with business process optimization. As they go through the digitization and automation of that process, now they can integrate AI into that digitization and then technologically deploy one of the stacks we announced.

When we go through the channel, and you know it because we live it, it is more about ‘now I need to deploy it, run it and sell it.’ That basically allows us to go much faster. Much faster now. Because it is now a simple offering with an easy button. And then keep doing what you are doing with GreenLake. We don’t give you another thing to sell. We give it to you as an extension of what you are already doing. That is the objective. That is why what we did today was to make the channel relevant in AI. That is the headline. This announcement makes the channel relevant in AI for the first time.

What is the call to action for partners?

Get aboard. You are already selling servers, storage and private cloud. Now you can sell Private Cloud AI. Keep selling GreenLake, because the more customers you bring to GreenLake, the easier it is to cross-sell. You are already selling Aruba. You are already selling ProLiant. You are selling Alletra [MP]. And now you are selling Private Cloud AI. It comes all together with the same experience. It is just a different instance.