Dell Chief AI Officer Jeff Boudreau: ‘Right Now, There’s A Big Learning Curve, And That’s An Opportunity For Our Partner Community’
“It’s a community of practice. It’s just shared learnings. We’re all kind of going, ‘It’s early innings.’ We’re all bumping elbows and skinning knees. It’s like, ‘Let’s learn together, share some of this pain, but share some of the learnings,’” newly appointed Dell Chief AI Officer Jeff Boudreau tells CRN.
Dell Technologies holds the No. 1 market-share position in servers and storage, the underpinnings of enterprise generative AI solutions, which places newly appointed Chief AI Officer Jeff Boudreau in the catbird seat when it comes to evaluating the best use cases for the technology.
Inside Dell alone, 800 AI business ideas were generated after founder, Chairman and CEO Michael Dell “challenged teams to ideate” around the technology in 2023, Boudreau said. Nine months ago, Boudreau was placed in charge of Dell’s AI efforts, a job that began with sorting through the company’s own ideas for how best to use the technology.
“I ran a process where we put all the use cases through a framework that was important to us. For us, we were looking at business growth. We were looking at business productivity. We were looking for customer experience,” Boudreau told CRN in an interview during Dell Technologies World 2024. “Those were kind of the three. We went from 800 use cases down to four domain areas. We went to 36 use cases and three really simple AI practices underneath it.”
Finding the highest-value business uses for AI, supplying the massive compute and storage those uses demand—and doing all of that quickly—is the top priority at Dell, Boudreau said. That was underlined on the company’s earnings call after Dell Technologies World, when Vice Chairman and COO Jeff Clarke said Dell wants to win AI customers even if it comes at the expense of the margins Wall Street would like to see on those deals.
Boudreau is leading that charge for Dell.
“This is our No. 1 priority,” he said. “For me, being a geek, it’s fun.”
Boudreau was president and general manager of Dell EMC storage until 2019, and then became president and general manager of Dell’s Infrastructure Solutions Group.
“I have a simple algorithm for a successful AI journey. It’s data, plus process mapping and re-engineering, on your technology stack. If you do that together, you’re going to have a successful journey. If you do one without the other, [you’re] probably not going to be so happy. We thought it was important to get the strategy right first.”
[RELATED: Dell And Nvidia Say GenAI’s Data Byproducts Give Partners 'Massive' Storage Opportunity]
Another prong of the strategy includes Dell’s resellers and integrators. Boudreau said there is plenty of room for them to win deals on the “AI Rocket,” as Michael Dell calls it.
“There’s a huge opportunity for our partners in both the consultative side and the technology side. From the consultancy side, so think about the data prep phase. Can I find the data? Can I clean the data? Can I connect the data? Can I put the right access controls in place for security and privacy and things like that?” Boudreau told CRN.
Learning from the leaders at Dell who are executing in the AI space is a critical part of delivering that technology to customers, said Paola Doebel, executive vice president at Downers Grove, Ill.-based Ensono, a Dell Platinum partner.
“What innovation is out there? Who’s on it? Who’s testing? And what’s the use case? So anytime a company like Dell is at the forefront of talking about that, talking about use cases, talking about where clients are going, at a minimum, we need to understand that or we can help our clients even imagine getting there,” she told CRN.
Gary McConnell, CEO of Nanuet, N.Y.-based Dell Platinum partner VirtuIT, said to that end, the Dell AI Factory is one of the first products he has seen that gives partners the ability to begin an earnest conversation with midsize to enterprise customers about building AI locally. Dell is helping this along by hosting VirtuIT’s techs at “lab days” inside Dell facilities to get hands-on with the smartest uses of technology, McConnell said.
“We’ve heard a ton of AI coming down the road map, whether it’s in these beefed-up servers with all this compute coming out so that they can handle these AI workloads. But we haven’t fully realized what an actual use case looks like at the partner and customer level at least in the medium-business space, which we play very well in,” McConnell said. “The Dell AI Factory was the first time we were able to say, ‘Hey, this is something we can put actual solutions around and help customers.’ The Dell AI Factory is an actual use case for the partner community and, more importantly, customers. That's probably what I'm most excited about.”
C.R. Howdyshell, president and CEO of Cleveland-based Dell Titanium partner Advizex, a Fulcrum IT Partners company, has closed two large deals for cloud service providers building out AI data centers using large quantities of Dell compute and AMD processors.
“Everybody wants to talk about it and get prepared, but I don’t see these enterprise customers today spending $20 million for AI. I think that’s where the service providers that we’re working with are seeing their opportunity for growth. And both of the deals we’ve done, they're forecasting 2X to 3X more in the next 12 to 18 months.”
Boudreau said the Dell AI Factory is built to meet needs at every scale, from the system Dell is assembling to power Grok for X down to a PC running a local model for a small business.
“For how small you can go, because the way the architecture is going, we’re able to bring LLMs right to the laptop, to the personal computer,” he said. “So you can go from an individual developer on their device locally to actually run AI and do AI things that make them more efficient and more effective, but that can scale all the way up, from edge locations to core data centers to even into the cloud and the CSPs so the scale is pretty wide.”
Here is more from CRN’s interview with Boudreau at Dell Technologies World.
You’re in charge of AI for Dell. I can’t imagine at this moment what that must be like.
For me personally, it’s really exciting. I come from an engineering background anyway. I had been with legacy EMC for many years doing a lot of midrange storage. Up until the last five years, I ran ISG [Infrastructure Solutions Group] here, so I ran the largest engineering group in the company, most of the software engineers in the company, so this is right in my wheelhouse. We’ve been working with AI in our products for a long time. So you can think about the technology within PowerMax. Traditional AI, we’ve been doing that a long time, so it was just natural.
It was maybe June last summerish, when Jeff Clarke approached me. He and Michael [Dell] had an idea that this AI thing is going to be really big and we want to have focused and dedicated leadership.
When they came to me with the opportunity, we walked through what that would look like. That’s how it all started. We announced it at the financial analyst summit, saying, ‘This is our No. 1 priority.’
For me, being a geek, it’s fun.
I’m hearing more from the company about a framework or a path for partners and customers to follow. There’s more for partners to latch on to and say, ‘Here’s where we start.’ How difficult was that to come up with? That seems like 90 percent of the game, just coming up with simple steps.
It absolutely was. And just getting organized around your thinking. I have a simple algorithm for a successful AI journey. So it’s data, plus process mapping and re-engineering, on your technology stack. If you do that together you’re going to have a successful journey. If you do one without the other, [you’re] probably not going to be so happy. We thought it was important to get the strategy right first.
So Jeff, Michael, we’ve been working through where we were to where we’re going. So the last eight months, since I’ve been on the road, we’ve learned a lot. A key element of that was just the refinement of the strategy, so we evolved to accelerate the adoption of AI.
Last year when we were here we were talking about, ‘How do we simplify AI and move AI to the data?’ This one is about acceleration. The reason for that is every customer, every partner that I talk to is like, ‘How do we get started?’
They see the massive opportunity; they see it as an inflection point. They see the good it can do. A lot of them are challenged on how to get started. Then that’s the strategic framework we put in place that we wanted to think through.
So we did the four elements.
‘AI-In,’ which is about how do you embed AI, simplistically, how do we embed AI into our technologies to make them more intelligent? And we’ve been doing that for a long time. It’s not just GenAI. It’s AIOps. It’s event correlations. It’s automation, it’s a whole bunch of things.
Additionally, it was about ‘AI-On,’ which was about how do we build world-class infrastructure from the device, like the PC, all the way up to the high-end data center, where people could land their AI workloads on it, whether it’s an application, LLM or what have you, but then actually make them more productive in their environment.
You can see the things that we have launched here this week, but also see what we’ve done in the past with Nvidia. The ‘AI-For’ was selfishly for ourselves. How do we use AI internally to make Dell a more effective, more efficient operation? To grow our businesses, optimize our businesses, or just provide better experiences to our customers and our partners?
With that, that has become the biggest conversation I’ve had with customers and partners. As Dell has been ‘customer zero’ for the last eight months, what have we learned? The good, the bad, the ugly? What’s worked well? So understand what your strategy is. What are the business outcomes you are trying to derive? What are the use cases that are the most important? Everybody loves to do everything, and you can’t. You have to focus. What’s the right architecture to support that? Do I even have the data to make that happen? It’s really bringing that together.
Lastly it was ‘AI-With,’ the fourth element of it. Why that’s so important is it has really helped bring in the whole partner ecosystem. We believe, I believe, that this is such a big inflection point. Probably the biggest I’ve ever seen in my 30-year career.
It will take a village to make this happen, from the data-prep phase to the process mapping and re-engineering, to the technology phase. If you think of it from a customer-journey point of view, from ideation to scale, there’s multiple steps on that journey. You’re going to have a consultative track, and you’re going to have a technology track.
The reason for that is if my recipe is data, process and technology, I need help all the way through that journey to bring this together. It takes an open ecosystem to pull that off. If you saw Jeff Clarke’s last slide, it’s the journey map of a customer or our thinking on how to bring this all together—from the services capabilities and the consultative, to where the partner ecosystem fits in, to the technology stack.
There’s a huge opportunity for our partners in both the consultative side and the technology side. From the consultancy side, so think about the data prep phase, can I find the data, can I clean the data, can I connect the data, can I put the right access controls in place for security and privacy and things like that?
Then it’s like, ‘OK, help me understand my business processes. Now that I have the data set clean, do I even have the business processes to do that?’ Then ‘OK, what are the supported use cases?’
So they can help them on that journey to really understand that environment and get that mapped out. Then work with us and other partners to build that technology stack.
That stack could be anything from architecting it and building it with best-of-breed components to running it as a managed service, plus all the wrappers around it.
Right now, there’s a big learning curve. There are concerns around talent. I’ve been talking to partners and customers. That’s an opportunity for our partner community.
It’s a community of practice. It’s just shared learnings. We’re all kind of going, ‘It’s early innings.’ We’re all bumping elbows and skinning knees. It’s like, ‘Let’s learn together, share some of this pain, but share some of the learnings.’
One partner I talked with said they were looking at the Dell AI Factory as sort of T-shirt sizing for their customers: small, medium and large. How small can they start to think? I know this isn’t for the dentist or the florist.
A lot of people think of the AI Factory as this one thing in the data center. To me, it’s actually a new modern data center infused with AI throughout it. That can start from a client device all the way up to a high-end data center. So what you saw Jeff Clarke unpack [on stage] is a modern data center that can support your AI needs, and it’s infused with AI all the way through: the compute nodes used to analyze large amounts of data, the storage you need to store and stream, to the networking to create that connective tissue, that data mesh as data now becomes your product as a company.
All these pieces together, it’s like running your body. It’s a system. You need everything connected and running well if you want it to come together. It’s the same analogy here.
For how small you can go, because the way the architecture is going we’re able to bring LLMs right to the laptop, to the personal computer. So you can go from an individual developer on their device locally to actually run AI and do AI things that make them more efficient and more effective, but that can scale all the way up, from edge locations to core data centers to even into the cloud and the CSPs, so the scale is pretty wide.
How significant are the bottlenecks? Whether it’s compute, whether it’s power, is there a day in, say, 2029 where you go, we just can’t make enough storage? Or there’s not enough power?
Being a traditional storage guy for 30 years, I hope I never see that day. In regard to, ‘Are there barriers to entry?’ the answer is absolutely ‘Yes.’
Where I think they’re going is, right now, there have been challenges to supplies of silicon. So that’s been the bottleneck. Like any good process, once you fix one bottleneck it moves to another spot.
So what’s the next spot? Is it data? Do I have the right data to feed the engine of my rocket, my compute node? Do I have the data to fuel it? OK, I’ve got that. Then it’s, ‘OK, do I have the talent to pull this off?’
Tactically, there are things you can do today to free up space and capacity. One is around the infrastructure. A PowerEdge server today, compared to the PowerEdge server of five years ago running the exact same set of applications, uses 50 percent less power and cooling in 60 percent less space. So there’s an opportunity to create space, whether it’s a client device, a server or a storage rig.
You saw what we did with PowerStore. It’s 5-to-1 compression efficiency. Why wouldn’t you be using that? You have five times the data in the same envelope.
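The arithmetic behind that claim is simple: effective capacity is raw capacity multiplied by the compression ratio. The sketch below illustrates the math with made-up figures; it is not a PowerStore specification or sizing tool.

```python
# Back-of-the-envelope math for a 5-to-1 compression ratio:
# effective capacity = raw capacity x ratio. Figures are illustrative.
def effective_capacity_tb(raw_tb, ratio=5.0):
    """Logical data that fits in a given raw footprint."""
    return raw_tb * ratio

# 100 TB of raw capacity holds roughly 500 TB of logical data at 5:1.
print(effective_capacity_tb(100))
```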
There’s a tactical move to say, ‘OK, what can I do to drive a tech refresh? To clean up space? That buys me time so that I can do my ideation, experimentation, POCs, start figuring out the right use cases to focus on to learn, grow and scale.’
When you go to the masses, you have to start thinking long term. I think it’s two-pronged. Tactically, free up space. Do some refreshing in the environment. Create space for me to start my AI journey.
Have you had anyone come to you about a use case and you’ve said, ‘No. We’re not going to do that?’
Yes, multiple. I have to joke and say I’m not as popular as I once was in the company. As part of the new role, I’ve been doing the use case prioritization. For context for you, if I back up to eight months ago when I took this role, about a year before that Michael Dell had challenged Dell employees to ideate, experiment, and [do proofs of concept around AI]. Our teams went off and did it like crazy.
When Jeff asked me to make sense of what was going on—was there a clear business outcome, is it going to have a material impact, lock up the framework—there were 800 use cases running. I will tell you not a lot of them had a good business case or were tied to a specific outcome. There was a lot of playing. A lot of them were redundant. We had different teams doing different RAG implementations with different vector databases and it was just like, ‘Why?’
So I shifted, put a process in place, put a governance model in place. Governance had two pieces. One was AI projects, which includes things like quality, business outcome, security and risk.
I was also asked to take over the data strategy for inside the company: So AI and data go together. As part of that same governance model, there’s also a data element of that as well. So a whole data governance and data management practice is associated with that.
So I ran a process where we put all the use cases through a framework that was important to us. For us, we were looking at business growth. We were looking at business productivity. We were looking for customer experience. Those were kind of the three. We went from 800 use cases down to four domain areas. We went to 36 use cases. Three really simple AI practices underneath it.
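A funnel like the one Boudreau describes can be thought of as scoring every use case against the three criteria and keeping only those that clear a bar. The sketch below is purely hypothetical: the use-case names, the 0–5 scales, and the cutoff are illustrative inventions, not Dell's actual framework.

```python
# Hypothetical sketch of a use-case prioritization funnel: score each
# candidate on business growth, productivity and customer experience,
# then keep only those above a cutoff. All names and weights are
# illustrative, not Dell's real criteria.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    growth: int        # expected business growth, 0-5
    productivity: int  # expected productivity gain, 0-5
    experience: int    # customer-experience impact, 0-5

    def score(self):
        return self.growth + self.productivity + self.experience

def prioritize(cases, cutoff=9):
    """Keep use cases whose combined score clears the cutoff,
    highest-scoring first."""
    kept = [c for c in cases if c.score() >= cutoff]
    return sorted(kept, key=UseCase.score, reverse=True)

backlog = [
    UseCase("sales content assistant", 4, 4, 3),
    UseCase("duplicate RAG pilot", 1, 2, 1),
    UseCase("support chatbot", 3, 3, 5),
]

for c in prioritize(backlog):
    print(c.name, c.score())
```

Run at scale, the same filter is what turns 800 raw ideas into a short, ranked backlog that business units can own.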
One is going back from ideation, experimentation and POC to focus, execution and scale. We shifted. Now we’re seeing material impacts and benefits to that. At that point there were a bunch of use cases that I said, ‘No. They’re not aligned to our strategy. They’re not aligned to what we’re doing as a company. Just stop.’ There were probably 200 of those.
There were 200 where there were ideas in the pipeline but people hadn’t really started to do anything, so 400 were off the books right away. Of the rest, they’re good ideas. They were aligned to our strategy and the areas we want to work on, but they probably weren’t as high priority.
So simplistically for services, for ISG, for CSG, I put all the use cases into their backlogs and had them prioritize for their teams so I wasn’t the bad guy. I was the bad guy for the initial list. I made them the bad guy for their teams.
So yes, they gave me the opportunity to say ‘No.’ People weren’t happy with me saying, ‘No,’ but we’re getting through it.
That said, eight months in we’re picking up velocity. We’ve done our four big use cases and we’re seeing material impact. We’ve added two more, in finance ops and supply chain.
Now I’m creating a space of experimentation again, but controlled experimentation. So our team members can come play a bit more because I kind of shut down the playing as we got our house in order.
Is there an ethical framework, or, I’m trying to think of a way to phrase this, a moral framework? Like, ‘We talked with Hugging Face and they say this is really important when you are examining use cases.’
From a moral compass, I guess to any specific partner or ecosystem, I think it has to be a balanced approach. Everything going on with responsible AI and security is a bit of a double-edged sword because there’s good actors and bad actors.
A lot of that has to start with the company itself. So just use me as customer zero. Us putting a strategy, a framework and governance in place and that governance includes privacy, security, ethics, responsible use of the technology and how we want our teams to [use it].
We just rolled out training called AI Fundamentals that all 125,000 employees have to take. There are four 15-minute modules. One of the modules is on how to use the technology ethically and responsibly, aligned to Dell’s core beliefs and values. That’s one element.
Then I think there’s an opportunity for the ecosystem, the technology innovators. The Dells, the Metas, the Hugging Faces, for all of us to come together and have a point of view on how to do this responsibly and ethically within our technologies.
Then it’s also within our governments. Right now, there has to be balance between what you do in the corporation to get ethical practices and governances in place, what we do as the technology companies putting this tech out to be sure that it’s done responsibly, then also what’s going on with governments and local practices. I’ve participated in panels here in North America. One of the best was with the French government.
The French government took a different approach. They created a panel. They want to gather all the learnings and take a balanced approach. So they had their thought leaders in France. They had the Hugging Face founders, they had the Mistral founders and they brought myself in, they brought the Meta teams in, and we all shared knowledge, best practices, points of view. They took that and shaped a body of work they gave back to the government that said, ‘This is how we should be doing it here.’ It has a good balance between policy and innovation. So they can both thrive and do it in an ethical and responsible way. Now, they’re trying to influence the EU’s policy.
How important is it to bring the data home? Dell talks a lot about the security of using these models and training them on-premises.
You want to keep your core IP close to you, whatever your business is, whatever is important to you. For Dell, we have a set of priorities and frameworks, like, ‘This is what’s important to us’ whether it’s around engineering or patents. Maybe it’s around our supply chain and procurement practices. All the things we have built these extraordinary muscles around to be industry leaders, that core IP, I don’t want that outside our four walls.
As soon as that’s in a public model, it’s gone. It’s lost. People can do whatever they want with it. I think you are going to see more and more of what’s mission critical and important to your business come on-prem because of that.
Instead of a large language model, it’s actually going to be a smaller model. Mistral or Meta will have small, medium and large versions, simplistically.
You’re using a smaller model and feeding it with your data. The model knows nothing about anybody’s business. A public model is like the internet: half the information is wrong, so there are a lot of issues with that data. If you keep a smaller model on-prem and feed it with high-quality data, things like hallucinations don’t happen, privacy and security issues don’t happen. Deepfakes can’t happen on that data. That’s the huge opportunity that people have.
As people have gone to large language models, thinking, ‘I’m going to go build my own and train my own,’ they’ve realized it’s really about smaller models that are domain-specific: ‘I’m going to feed it with high-quality data. I’m going to do things like inferencing and RAG techniques to really hone in and be smarter with that data.’ There’s not a lot of change.
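The retrieval half of the RAG pattern Boudreau mentions can be sketched in a few lines: find the in-house documents most relevant to a question and prepend them to the model's prompt. The corpus, query and similarity measure below are toy assumptions; a real on-prem deployment would use a vector database and a local model rather than this bag-of-words ranking.

```python
# Minimal sketch of the retrieval step in RAG over an on-prem corpus:
# rank documents by similarity to the query, then feed the best match
# to a small local model as context. Corpus and query are hypothetical.
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a word-count vector (bag of words)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "PowerStore offers 5-to-1 data compression efficiency.",
    "The AI Factory scales from laptops to core data centers.",
    "Supply chain procurement follows internal governance rules.",
]

context = retrieve("How efficient is PowerStore compression?", corpus)[0]
# The retrieved context grounds the small model's answer in company
# data instead of whatever a public model absorbed from the internet.
prompt = f"Context: {context}\nQuestion: How efficient is PowerStore compression?"
```

Because the documents never leave the company's own infrastructure, this is the pattern that keeps core IP inside the four walls while still benefiting from a general-purpose model.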
We believe in data gravity. Edge, core, cloud, it’s everywhere. Our phones, our Fitbits, whatever you’ve got.