Analysis: Intel Data Center Boss Justin Hotard Leaves After CEO, AI Chip Shake-Ups
Justin Hotard, the head of Intel’s Data Center and AI Group, is leaving the chipmaker after roughly a year on the job to become Nokia’s next CEO. The move comes after the semiconductor giant dealt with big shake-ups to its leadership team and AI chip strategy.
The head of Intel’s Data Center and AI Group is leaving the chipmaker a year after taking the job to lead global telecom firm Nokia—a move that comes not long after Intel dealt with big shake-ups to its leadership team and AI chip strategy.
Nokia announced Monday morning that Justin Hotard, who became executive vice president and general manager of Intel’s Data Center and AI Group (DCAI) a year ago this month, will succeed its current leader, Pekka Lundmark, as president and CEO on April 1.
[Related: Analysis: Intel Seeks To ‘Stabilize’ Waning Server CPU Share, But It Faces AMD And Other Challenges]
In a LinkedIn post disclosing his appointment as Nokia’s next CEO and president, Hotard, a Hewlett Packard Enterprise veteran, offered Intel a brief farewell.
“I want to thank my team at Intel for their work in stabilizing the Datacenter and AI Business over the last year. I wish them continued success as they continue their journey,” he wrote.
An Intel spokesperson told CRN that Karin Eibschitz Segal—an Israel-based Intel veteran who has served as vice president of the company’s Design Engineering Group, among other roles—will serve as DCAI’s interim leader.
“We have a strong DCAI team that will continue to advance our priorities in service to our customers,” the representative said in a statement.
“Karin Eibschitz Segal has been appointed interim head of the DCAI business and is an accomplished executive with nearly two decades of Intel leadership experience spanning products, systems and infrastructure roles,” the Intel spokesperson added. “We are grateful for Justin Hotard’s contributions and wish him the best in his new role.”
Departure Comes After CEO Shake-Up, DCAI Reassessment
Hotard’s departure came a couple of months after Michelle Johnston Holthaus, formerly the head of Intel’s Client Computing Group, was named one of Intel’s interim co-CEOs alongside its CFO, David Zinsner. Intel’s board made the appointments after it reportedly forced the company’s former CEO, Pat Gelsinger, to retire in early December amid mounting financial struggles and questions about his comeback plan.
At the time, Holthaus was also given the newly created title of CEO of Intel Products, which gives her control over the company’s three core chip design businesses: the Data Center and AI Group, the Client Computing Group and the Network and Edge Group.
A little more than a week after taking the new roles, Holthaus made her first public comments about the chipmaker’s strategy as interim co-CEO and Intel Products CEO, indicating that she was more confident about the company’s current path in the PC market than its efforts with AI accelerator chips or server CPUs.
“On the data center side, we have a lot of work to do there. But on the client side, our say-do ratio for the last four years has been very good,” she said at an investor event on Dec. 12.
For DCAI, Holthaus said, she will be “laser-focused” on winning back customers who turned to AMD for server CPUs and added that she would be open to road map changes.
Holthaus also offered a blunt assessment of Intel’s AI accelerator chip strategy, which also falls under DCAI, saying that its accelerator chip brand, Gaudi, “does not allow me to get to the masses.” This came after Gelsinger disclosed in Intel’s November earnings call that the company would not meet the modest $500 million revenue target it set for Gaudi chip sales in 2024.
Looking to Intel’s next-generation AI chip, code-named Falcon Shores, Holthaus tempered expectations by saying that it would not be “wonderful” but rather a “good first step.”
More than a month later during Intel’s fourth-quarter earnings call in late January, Holthaus disclosed that the company no longer plans to sell Falcon Shores, which was due at the end of 2025, so that it can focus on developing a “system-level solution at rack scale” with a successor chip it’s calling Jaguar Shores.
She also disclosed that Intel’s next-gen Xeon CPU with efficient cores, Clearwater Forest, had been delayed to the first half of 2026 from the original 2025 timeframe the chipmaker had given. Despite the setback, Holthaus said, “this year is all about improving” the competitive position of its Xeon CPU products “as we fight to close the gap to competition.”
But while Holthaus said Intel’s Xeon 6 CPUs with performance cores, code-named Granite Rapids, have “been a good first step” after launching last year, she noted that the Xeon 6 CPUs with efficient cores weren’t getting the traction Intel had hoped for.
“What we’ve seen is that’s more of a niche market, and we haven’t seen volume materialize there as fast as we expected,” she said.
Hotard Faced Rising Competition On Multiple Fronts
During Hotard’s stint as the head of Intel’s Data Center and AI Group, the business faced growing competition on multiple fronts.
In the server CPU market, AMD continues to put pressure on Intel, growing its share of the x86 server CPU segment to 24.2 percent in last year’s third quarter. This, along with AMD’s early success in the AI accelerator chip market, allowed the competitor to generate nearly as much data center revenue as Intel did last year.
Intel is also facing a threat in the form of Arm, whose instruction set architecture, long dominant in smartphone chips, is now the basis for custom server CPUs designed by Amazon Web Services, Microsoft Azure and Google Cloud for their own infrastructure. In fact, AWS said in December that more than half of the new CPU capacity it brought online over the last two years was based on its Arm-based Graviton chips.
The greater threat, however, has been the rapid rise of Nvidia in the data center, where it has commanded a significantly larger chunk of spending than Intel. This is the result of Nvidia’s early investments in accelerated computing combined with an ongoing boom in AI development that has kept demand high for its expensive but powerful GPUs and related products.
According to a previous CRN analysis, Nvidia is expected to finish its 2025 fiscal year, which ended in January, with $128.6 billion in revenue, most of which will come from the company’s data center segment. By contrast, Intel ended its most recent fiscal year with less than half that amount—$53.1 billion—less than a quarter of which came from data center sales.
Intel has also had to deal with more successful AI chip efforts from Nvidia competitors ranging from AMD, which finished last year with more than $5 billion in sales from its Instinct data center GPUs, to Broadcom, which grew AI revenue from custom chip engagements and Ethernet products by 220 percent to $12.2 billion last year.
Hotard Helped Define Gaudi 3 AI Chip Strategy
When Hotard became the head of DCAI a year ago, one of his responsibilities was to help define the strategy for the Gaudi 3 AI accelerator chip, which launched in late 2024.
Whereas the predecessor chip, Gaudi 2, received little to no support from OEMs and cloud service providers, Gaudi 3 was set to go in servers from several major OEMs, including Dell Technologies, HPE and Lenovo. It would also get support from IBM Cloud.
The issue was that Gaudi 3 was not as powerful as Nvidia’s H100 GPU, which came out in 2022, and Nvidia had since turned its focus to a more capable successor called the H200 that launched last year and the next-gen Blackwell GPUs that debuted in early 2025.
At an Intel data center event last September, Hotard said his team arrived at a strategy to focus on Gaudi 3’s “price performance advantage,” largely for inferencing smaller, task-based AI models and open-source models, after talking with customers.
“We feel like where we are with the product, the customers that are engaged, the problems we’re solving, that’s our swim lane. The bet is that the market will open up in that space, and there’ll be a bunch of people building their own inferencing solutions,” he said.
