Dell And Nvidia Say GenAI’s Data Byproducts Give Partners 'Massive' Storage Opportunity
'If I give you a paragraph of text and I convert it to embeddings that then get stored, the size of the embeddings is much bigger than the size of the original text,' Manuvir Das, vice president of enterprise computing at Nvidia, tells CRN. 'It can be 10 times bigger. This is a massive data and storage opportunity that people haven't grasped yet.'
Dell Technologies vice chairman and chief operating officer Jeff Clarke told the Dell Technologies World 2024 audience that if GPUs are the AI system’s brain and networking is its heart, then storage is how AI breathes.
“I’d argue storage is the lungs pumping the data,” Clarke said during his appearance at the show last month.
The compute stacks coming online now are capable of consuming more data than any previous generation, making the flow of data critical to overall system performance, said CR Howdyshell, CEO of Dell Titanium partner Advizex, who has recently completed deals with customers building massive AI compute on Dell servers. Those systems, however, need high-quality storage to be effective.
“We get enthralled by and begin focusing on all the sizing for compute and AI requirements. You have to make sure you’re getting ahead of the storage conversation,” he told CRN. “The customers building these systems are very technically savvy. And if you don't get ahead of that conversation, you'll be outside looking in. They may not buy Dell storage, right? We have to make sure that we're talking and sizing compute, and then having a storage conversation. But the opportunity is just huge.”
Dell announced changes to its go-to-market motion around storage in August of last year, calling the new program Partner First For Storage. Dell’s core sellers now get better incentives when they close storage deals through a channel partner.
This has led to new customers for Titanium partners like Advizex as well as Platinum partners like Nanuet, N.Y.-based VirtuIT. Dell is focused on winning deals in mid-range storage, where it sees a total addressable market of $13 billion.
VirtuIT’s John Lee said the opportunity today with small and medium enterprises is in data estate consultations that gear the organization up for generative AI once it becomes more widely adopted.
“I think there’s a lot of opportunity to go refresh old infrastructure,” said Lee, chief technology officer at VirtuIT. “Let’s get you set up so when you are ready to really start implementing generative AI, that you have the infrastructure you need from a storage point of view, that is ready to catch up with your compute.”
On stage at last month’s Dell Technologies World, Arthur Lewis, president of Dell’s Infrastructure Solutions Group, said the storage opportunity is “simply massive.”
“And to put some context around ‘simply massive,’ AI workloads drive 300 times the amount of data throughput that we see in traditional compute -- 300 times the amount of data throughput,” he said, repeating the last part. “And with the ubiquity of AI, the demands on storage are only going to grow.”
Clarke predicts that by the end of this decade, the AI machines demanding more from storage will require 27 quettaflops of computing power -- or 27,000,000,000,000,000,000,000,000,000,000 floating point operations per second.
One person helping to create that future compute and deploy it to the largest organizations around the globe is Manuvir Das, Nvidia’s vice president of enterprise computing.
He told CRN that AI models are already moving beyond text files for training and inferencing to richer, more complex multi-modal forms of data.
“That’s a lot of data we’re talking about,” Das said. “The whole ecosystem is about to unlock really powerful models to talk to your videos. Here is a 30-minute video. Tell me who is in the video? Who is arguing with who? What game was being played? All this extraction of what’s going on in the video, we have models for this now.”
The market hasn’t grasped AI’s implications for storage, according to Das, who was senior vice president of product engineering for unstructured data storage when he left EMC for Nvidia in 2019.
“Two secrets about the data that I don’t think people have perceived yet. One is, for all this retrieval to work -- ‘Go. Look at a PDF or look at text.’ -- what happens is you create these embeddings and then these embeddings get put into a vector database, which is a new kind of storage,” said Das.
“If I give you a paragraph of text and I convert it to embeddings that then get stored, the size of the embeddings is much bigger than the size of the original text. It can be 10 times bigger. This is a massive data and storage opportunity that people haven’t grasped yet.”
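Das’ claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical 500-character paragraph and a 1,024-dimension embedding model storing 32-bit floats; actual ratios depend on the model, the numeric precision and how the text is chunked.

```python
# Back-of-the-envelope math for the storage growth Das describes.
# Assumptions (hypothetical, for illustration only): a paragraph chunk of
# roughly 500 characters of plain text, and an embedding model that emits
# 1,024-dimensional vectors stored as 32-bit floats.

CHUNK_BYTES = 500                # ~500-character paragraph, ~1 byte per char
EMBEDDING_DIMS = 1024            # vector width of the assumed model
BYTES_PER_FLOAT32 = 4            # each dimension stored as a float32

embedding_bytes = EMBEDDING_DIMS * BYTES_PER_FLOAT32    # 4,096 bytes
ratio = embedding_bytes / CHUNK_BYTES                   # ~8x the source text

print(f"paragraph: {CHUNK_BYTES} B, embedding: {embedding_bytes} B, "
      f"ratio: {ratio:.1f}x the original")
# Vector-database indexes, metadata and overlapping chunks push the real
# footprint higher still -- the multiplier Das puts at 10x or more.
```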
Dell is betting that its reseller partners will begin to see the demand for high-end storage just over the horizon, as well as the importance of moving early and quickly on deals in the space.
“Our software-defined PowerScale was absolutely built with AI in mind,” Lewis said. “Driving maximum speed from the GPU to our AI data platform, offering incredible performance with our GPUDirect technology and the right mix of flexibility, scalability and security. In addition, PowerScale is the very first Ethernet storage to be certified on Nvidia SuperPOD.”
On the hardware side, Dell introduced the PowerScale F910, a dense, high-performance file storage system for unstructured data. It adds significant hardware upgrades, including DDR5 memory and PCIe Gen5, as well as 24 NVMe SSDs in a 2U rack platform.
Lewis told the Dell Technologies World audience that Dell has an even more powerful storage offering on the horizon: Project Lightning, coming next year.
“Project Lightning is a game-changing parallel file system built for unstructured storage specifically for AI,” he said. “What do I mean by game changing? When we look at it versus our nearest flash-only scale out competitors, we will see performance increases of 20x and throughput increases of 18 and a half times. Truly game-changing performance.”
As the technology progresses and AI models become tailored to the organizations running them, the number of parameters they need, and thus the compute required to run them, will shrink, Lewis told CRN after the presentation. But the need for models to quickly reference the data kept in attached storage will grow.
“If you believe in algorithmic innovation driving smaller domain-specific models, the way that AI works is the model sits on the GPU. So a query is made, it goes into the GPU, it hits the model. If the model can respond, it responds; if it can’t, it has to go fetch data from the attached storage,” Lewis said. “As the models get smaller, the need for the model to fetch information from the attached storage is going to go up dramatically.”
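A minimal sketch of the query path Lewis describes might look like the following. All the class and function names here are hypothetical stand-ins for illustration, not real Dell or Nvidia software.

```python
# Sketch of the flow Lewis describes: the model sits on the GPU; if it can
# answer from its own parameters it does, otherwise it goes to attached
# storage for supporting data. Everything below is a hypothetical stand-in.

class TinyModel:
    """Stand-in for a small, domain-specific model."""
    def __init__(self, known_answers):
        self.known_answers = known_answers

    def generate(self, query, context=None):
        if query in self.known_answers:      # the model can respond directly
            return self.known_answers[query]
        if context is not None:              # respond using fetched documents
            return f"answer built from {len(context)} retrieved documents"
        return None                          # signal that retrieval is needed

class AttachedStorage:
    """Stand-in for the attached storage tier holding enterprise data."""
    def fetch_relevant(self, query):
        return ["doc-1", "doc-2"]            # pretend retrieval results

def answer_query(query, model, storage):
    response = model.generate(query)
    if response is not None:
        return response
    # Smaller models take this storage round-trip more often, which is why
    # Lewis argues storage performance increasingly gates the whole system.
    return model.generate(query, context=storage.fetch_relevant(query))

print(answer_query("What is our refund policy?", TinyModel({}), AttachedStorage()))
```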
For customers who hope to train smaller models on their own data, the quality of the attached storage determines the performance of the entire system.
“In a 400-billion parameter model, it’s less necessary,” Lewis said. “But when you start getting down into the 8-billion, 4-billion, 2-billion parameter models, then that attached storage becomes very, very important.”
Howdyshell said that among the products unveiled at Dell Technologies World, the one he is most excited to begin conversations about is the new PowerStore with 5:1 compression.
“That is huge,” he said of the technology. “In the deals we have done and in the opportunities we are quoting, typically the customer wants to move fast with compute, and then storage is secondary. It’s a big opportunity, because where there’s data, there’s obviously computing power required, and customers have got to have storage. You’ve got to have storage and, ultimately, at some point, backup.”
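To put a rough number on what a 5:1 ratio buys, here is a quick back-of-the-envelope calculation. The 100 TB dataset is an assumed figure, and real-world reduction ratios vary by workload and data type.

```python
# Illustrative arithmetic for a 5:1 data compression ratio. The dataset
# size is hypothetical; actual reduction depends on the data being stored.

raw_data_tb = 100                     # assumed raw dataset size
reduction_ratio = 5                   # the advertised 5:1 compression
physical_tb = raw_data_tb / reduction_ratio

print(f"{raw_data_tb} TB of data fits in {physical_tb:.0f} TB of physical capacity")
# -> 100 TB of data fits in 20 TB of physical capacity
```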