Intel Cancels Falcon Shores AI Chip To Focus On ‘Rack-Scale Solution’
Interim Intel co-CEO Michelle Johnston Holthaus discloses the pivot in its AI data center strategy as Nvidia forges ahead with rack-scale solutions based on its Blackwell GPU architecture.
Intel said it no longer plans to sell its next-generation Falcon Shores AI accelerator chip, which was due later this year, so that it can focus on developing a “system-level solution at rack scale” with a successor chip it’s calling Jaguar Shores.
The news was delivered by Michelle Johnston Holthaus, interim co-CEO and CEO of Intel Products, during the semiconductor giant’s fourth-quarter earnings call on Thursday.
[Related: Outrun By Nvidia, Intel Pitches Gaudi 3 Chips For Cost-Effective AI Systems]
Intel had previously teased Falcon Shores for a late 2025 launch as the successor to Gaudi 3, the AI accelerator chip it launched last fall with OEMs such as Dell Technologies and Hewlett Packard Enterprise. But while Gaudi 3 and previous generations were application-specific integrated circuits (ASICs), Falcon Shores was billed by the company as a programmable GPU incorporating Gaudi technology.
However, Holthaus gave a blunt assessment of Intel’s AI accelerator chip strategy at an investor conference in December, saying that Gaudi will not help Intel “get to the masses” and suggesting that Falcon Shores wouldn’t be a game-changer.
“Many of you heard me temper expectations on Falcon Shores last month,” she said on Thursday’s earnings call.
Now, Holthaus said, Intel plans to use Falcon Shores for internal testing only.
“This will support our efforts to develop a system-level solution at rack scale with Jaguar Shores to address the AI data center,” she said. “More broadly, as I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably lower the cost and increase the efficiency of compute.”
Holthaus said she arrived at the decision—which she called one of her first major moves as the CEO of Intel Products and interim co-CEO after former CEO Pat Gelsinger abruptly departed in early December—after receiving feedback from customers on what Intel needs to “be competitive and to deliver the right product.”
She said the decision was also influenced by discussions with Intel’s AI accelerator chip team on where they were “from a competitive perspective and from an execution perspective.”
“One of the things that we’ve learned from Gaudi is it’s not enough to just deliver the silicon. We need to be able to deliver a complete rack-scale solution, and that’s what we’re going to be able to do with Jaguar Shores,” Holthaus said.
Intel Pivots To Where Nvidia Is Focused
Intel disclosed the pivot in its AI data center strategy as Nvidia forges ahead with rack-scale solutions based on its Blackwell GPU architecture.
Revealed at Nvidia’s GTC event last year, the Grace Blackwell GB200 NVL72 rack-scale server platform has been pitched as the foundation for Nvidia AI offerings from major OEM and cloud computing partners, including Dell Technologies, Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud.
Nvidia has previously described the GB200 NVL72 platform as a “multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads,” with each rack consisting of 36 Grace CPUs and 72 Blackwell GPUs connected over the NVLink chip-to-chip interconnect.
In the company’s November earnings call, Nvidia CEO Jensen Huang said the engineering work the company does with OEM and cloud computing partners is “rather complicated,” responding to a question about a report of Blackwell GPUs overheating in GB200 NVL72 systems.
“The reason for that is because although we build full stack and full infrastructure, we disaggregate all of the AI supercomputer and we integrate it into all of the custom data centers and architectures around the world,” he said at the time.
“That integration process is something we’ve done several generations now. We’re very good at it, but there’s still a lot of engineering that happens at this point,” Huang added.
The Nvidia CEO and founder was referring to the fact that the AI computing giant has been selling complete AI systems powered by its GPUs and other components for several years, with the company moving from smaller appliances to supercomputer clusters more recently.
Partners Weren’t High On Intel’s AI Chip Strategy
Intel’s AI strategy pivot comes after some of the chipmaker’s channel partners told CRN earlier this month that it would take the company a long time to mount a formidable challenge to Nvidia’s AI chip dominance in data centers.
Before Gelsinger abruptly retired from Intel in early December, the former CEO said during the company’s third-quarter earnings call last October that it would “not achieve our target of $500 million in revenue” for Gaudi chips in 2024, blaming slower-than-expected sales on the “product transition” from Gaudi 2 to Gaudi 3 as well as “software ease of use.”
Meanwhile, Nvidia is expected to finish its 2025 fiscal year, which mostly lines up with the 2024 calendar year, with $128.6 billion in revenue, most of which is expected to come from its data center GPUs and associated systems.
“Nvidia has got such a head start that it's just a long road,” said Chris Bogan, vice president of sales at Houston-based systems integrator Mark III Systems.
Intel’s lack of leadership in AI chips for data centers led one solution provider with a U.S. presence to choose AMD over Intel for its AI center of excellence (COE) program, according to the partner alliance manager who manages vendors for the program.
AMD has made more aggressive claims against Nvidia’s data center GPUs with its Instinct chips, saying in October that its forthcoming Instinct MI325X can outperform Nvidia’s H200, which launched last year. The rival has also announced an accelerated road map that will see it release new GPUs every year, similar to Nvidia’s revamped strategy.
“I don't have customers asking for [Gaudi]. What I do see is customers asking [to kick] tires on a GPU-as-a-service provider that might have AMD MI300X GPUs [or the like],” said the partner alliance manager, who asked to not be identified because he was not authorized to speak on behalf of his company regarding this matter.
For Intel to gain ground in the AI chip market, the partner alliance manager said, the company will need to find ways to convince partners to invest time in new products like Gaudi.
“It requires extensive knowledge and training. That is literally competing with the same hours in the day that we're leveraging to actually train and build up knowledge and teams around Nvidia. So it’s not just that you're competing for market share and awareness, but you're also competing for our time,” he said.
