The 10 Biggest Nvidia News Stories Of 2024
While the AI computing giant spent most of the year fulfilling continuously high demand for AI chips and systems based on its Hopper architecture, it also used 2024 to set the groundwork for products and services it expects to be major sources of revenue growth in the future.
Nvidia is closing out what could prove to be one of the most consequential years in the AI computing giant’s 31-year history.
While the Santa Clara, Calif.-based company spent most of the year fulfilling continuously high demand for AI chips and systems based on its Hopper GPU architecture that debuted two years ago, it also used 2024 to set the groundwork for products and services it expects to be major sources of revenue growth in the future.
[Related: 10 Big Nvidia Executive Hires And Departures In 2024]
Most notably, Nvidia in March revealed its next-generation Blackwell GPU architecture, promising major gains in performance for generative AI workloads, and then started shipping systems with partners, including Dell Technologies, near the end of the year.
“Both Hopper and Blackwell systems have certain supply constraints, and the demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026,” Nvidia CFO Colette Kress wrote in her commentary for its third-quarter earnings report in November, referencing the company’s fiscal year that closely lines up with the 2025 calendar year.
But Nvidia also took steps this year to introduce its fast-growing collection of Nvidia Inference Microservices, or NIM for short, which it expects to serve as a significant source of software revenue growth as an extension of the Nvidia AI Enterprise platform.
“We definitely expect that NIM is going to drive the bulk of the demand for Nvidia AI Enterprise going forward,” said Manuvir Das, Nvidia’s vice president of enterprise computing who retired in October, in an interview with CRN in March.
Based on Nvidia’s fourth-quarter revenue forecast, the company is expected to finish its current fiscal year with a staggering $128.6 billion in revenue, which would represent the second year in a row it has more than doubled annual sales.
Nvidia’s dominance in the AI computing market was reflected in several other ways this year: the company disclosed that it earned more annual revenue than Intel for the first time, replaced Intel in the Dow Jones Industrial Average index, became one of the world’s most valuable companies and sat at the center of many tech vendor announcements.
But multiple forces emerged or gained momentum this year that could threaten or diminish Nvidia’s power in the tech industry, from the plethora of companies developing rival AI chips to regulators in the U.S., European Union and China investigating the company for potential antitrust abuses.
These developments make up the 10 biggest Nvidia news stories of 2024. What follows are the most important things you should know about each development.
10. Nvidia Reveals Extended Data Center Road Map
At Computex 2024 in June, Nvidia revealed plans to release successors to its new Blackwell GPUs in the next two years and launch a second-generation CPU in 2026.
The AI computing giant made the disclosures in an expanded data center road map that provided basic details for next-generation GPUs, CPUs, network switch chips and network interface cards.
The plan to release a new data center GPU architecture every year is part of a one-year release cadence Nvidia unveiled last year, representing an acceleration of the company’s previous strategy of releasing new chips roughly every two years.
“Our basic philosophy is very simple: build the entire data center scale, disaggregate it and sell it to you in parts on a one-year rhythm, and we push everything to the technology limits,” Nvidia founder, President and CEO Jensen Huang (pictured) said in his Computex keynote.
The expanded road map came out less than two months after Nvidia revealed its next-generation Blackwell data center GPU architecture, which started shipping near the end of the year. At its Nvidia GTC event in March, the company said Blackwell will enable up to 30 times greater inference performance and consume 25 times less energy for massive AI models compared with the Hopper architecture, which debuted in 2022 with the H100 GPU.
In the road map revealed by Huang during his Computex keynote, the company outlined a plan to follow up Blackwell in 2025 with an updated architecture called Blackwell Ultra, which the CEO said will be “pushed to the limits.”
In the same time frame, the company is expected to release an updated version of its Spectrum-X800 Ethernet Switch, called the Spectrum Ultra X800.
Then in 2026, Nvidia plans to debut an all-new GPU architecture called Rubin, which will use HBM4 memory. This will coincide with several other new chips, including a follow-up to Nvidia’s Arm-based Grace CPU called Vera.
The company also plans to release in 2026 the NVLink 6 Switch, which will double chip-to-chip bandwidth to 3,600 GBps; the CX9 SuperNIC, which will be capable of 1,600 Gbps; and the X1600 generation of InfiniBand and Ethernet switches.
9. Competition Grows Against Nvidia’s AI Chip Dominance
As Nvidia moves forward with its plan to release increasingly powerful AI chips at a faster release cadence, large and small rivals unveiled new products this year that are meant to whittle away at the GPU giant’s AI computing dominance.
At Computex, AMD unveiled plans to release a new Instinct data center GPU, the MI325X, later this year with significantly greater high-bandwidth memory capacity than its MI300X chip or Nvidia’s H200, enabling servers to handle larger generative AI models than before.
The MI325X is set to arrive in systems from Dell, Lenovo, Supermicro, Hewlett Packard Enterprise, Gigabyte, Eviden and several other server vendors starting in the first quarter of next year, according to AMD.
AMD unveiled the details as part of a newly disclosed plan to release a new data center GPU every year starting with the CDNA 3-based MI325X. In an extended road map, AMD said it will then debut in 2025 the MI350 series, which will use its CDNA 4 architecture to provide increased compute performance and “memory leadership.” The next generation, the Instinct MI400 series, will use a future iteration of CDNA architecture and arrive in 2026.
Intel, on the other hand, launched its Gaudi 3 accelerator chip in October with planned support of major vendors like Dell, HPE, Lenovo, Supermicro, Asus and Gigabyte.
While the semiconductor giant said Gaudi 3 is not faster than Nvidia’s 2022 H100 GPU, the chip offers a “price-performance advantage” that Intel hopes will find traction with businesses that need cost-effective AI systems for training and, to a much greater extent, for inferencing smaller, task-based models and open-source models.
According to Intel, Gaudi 3’s eight-chip server platform will have a list price of $125,000, which the company said will give it 2.3 times greater performance-per-dollar on inference and 90 percent better training throughput than Nvidia’s H100.
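Performance-per-dollar claims like Intel’s boil down to simple arithmetic: divide throughput by platform cost and compare the ratios. Below is a minimal sketch of that calculation; the throughput and H100 pricing figures are invented purely for illustration, and only the $125,000 Gaudi 3 list price comes from Intel’s announcement.

```python
# Hypothetical sketch of how a performance-per-dollar comparison is derived.
# The throughput numbers and H100 platform price below are made up for
# illustration; only the $125,000 Gaudi 3 list price comes from Intel.

def perf_per_dollar(throughput_tokens_per_sec: float, platform_price_usd: float) -> float:
    """Normalize raw inference throughput by platform cost."""
    return throughput_tokens_per_sec / platform_price_usd

# Assumed, illustrative inputs (not vendor benchmarks):
gaudi3_throughput = 10_000.0   # tokens/sec for some fixed model and batch size
gaudi3_price = 125_000.0       # Intel's stated list price for the 8-chip platform

h100_throughput = 13_000.0     # hypothetical 8-GPU H100 platform throughput
h100_price = 390_000.0         # hypothetical street price for that platform

ratio = perf_per_dollar(gaudi3_throughput, gaudi3_price) / perf_per_dollar(h100_throughput, h100_price)
print(f"Gaudi 3 vs. H100 performance-per-dollar: {ratio:.1f}x")
# With these assumed inputs the ratio works out to roughly 2.4x, in the
# neighborhood of Intel's claimed 2.3x; the real figure depends entirely
# on the workload measured and the prices actually paid.
```

The takeaway is that such ratios are as sensitive to the price assumptions as to the benchmark itself, which is why vendors lean on them when their raw performance trails a rival’s.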
Meanwhile, cloud computing giants Amazon Web Services and Google Cloud each made announcements this year around custom AI chips they designed.
Google Cloud, for instance, announced in April the general availability of its TPU v5p accelerator chip, which it said can train large language models nearly three times faster than the previous-generation TPU v4.
AWS in December announced the general availability of cloud instances powered by its Trainium2 chip in addition to revealing its next-generation Trainium3 chip.
The cloud service provider said the new Trainium2-based EC2 instances offer up to 40 percent better price-performance than current GPU-based EC2 instances. It promises that Trainium3 will offer four times greater performance than the previous generation and will become available in instances in late 2025, according to AWS.
There is also a crop of AI chip startups looking to challenge Nvidia, including Cerebras Systems, which revealed its Wafer Scale Engine 3 chip in March; d-Matrix, which launched its Corsair PCIe card in November; and Tenstorrent, which plans to introduce its neural processing units to the data center market through a partnership with Moreh.
8. Nvidia Replaces Intel In Dow Index
Nvidia replaced Intel on the Dow Jones Industrial Average on Nov. 8 as the AI computing giant continues to put competitive pressure on the beleaguered chipmaker.
S&P Dow Jones Indices, the organization behind the DJIA, said in a statement the week before that the change is meant “to ensure a more representative exposure to the semiconductor industry.”
While Nvidia’s stock price is up more than 179 percent from the beginning of the year, Intel’s shares are down roughly 57 percent over the same period.
“The DJIA is a price-weighted index, and thus persistently lower-priced stocks have a minimal impact on the index,” according to S&P.
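To see why price weighting mutes a low-priced stock, consider a toy calculation. The sketch below uses made-up prices and a made-up divisor; the real DJIA uses a single published divisor that is adjusted for splits and membership changes.

```python
# Minimal sketch of why a price-weighted index sidelines low-priced stocks.
# Prices and the divisor are illustrative, not real market data.

prices = {
    "HighPricedStock": 450.0,   # hypothetical
    "MidPricedStock": 150.0,    # hypothetical
    "LowPricedStock": 25.0,     # hypothetical, e.g. a beaten-down chipmaker
}

divisor = 0.5  # stand-in for the index divisor

index_level = sum(prices.values()) / divisor
print(f"Index level: {index_level:.0f}")

total = sum(prices.values())
for name, price in prices.items():
    # In a price-weighted index, weight is just price over the sum of prices.
    print(f"{name}: {price / total:.1%} of index weight")

# A 10% move in the $25 stock shifts the index by 2.5 / 0.5 = 5 points,
# while a 10% move in the $450 stock shifts it by 45 / 0.5 = 90 points.
```

With Intel’s shares trading at a fraction of Nvidia’s, its weight in the index had shrunk to the point where its moves barely registered, which is the dynamic S&P’s statement alludes to.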
7. Nvidia Makes Big Software Revenue Push With Microservices
Nvidia used Computex to mark the launch of its inference microservices, which are meant to help speed up the development of generative AI applications for data centers and PCs in addition to creating a new source of revenue growth.
Known officially as Nvidia NIM, the microservices consist of AI models served in optimized containers that developers can integrate within their applications. These containers can include Nvidia software components such as Nvidia Triton Inference Server and Nvidia TensorRT-LLM to optimize inference workloads on its GPUs.
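In practice, Nvidia has described NIM containers as exposing industry-standard APIs, so integrating one can look like calling any hosted LLM endpoint. Below is a minimal, hypothetical sketch assuming a NIM container already running locally on port 8000 with an OpenAI-compatible chat endpoint; the model identifier, port and endpoint path are illustrative assumptions, not details confirmed in this article.

```python
# A minimal sketch of what integrating a NIM container might look like,
# assuming (not confirmed here) a locally deployed container exposing an
# OpenAI-compatible HTTP endpoint on port 8000.

import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize what an inference microservice does."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The appeal for developers is that the container handles the GPU-optimized serving stack internally, so the application only ever sees a standard HTTP API.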
At launch, the microservices included more than 40 first- and third-party AI models, such as Databricks DBRX, Google Gemma, Meta Llama 3, Microsoft Phi-3, Mistral Large, Mixtral 8x22B and Snowflake Arctic.
Nvidia said at the time that nearly 200 technology partners, including Cloudera, Cohesity and NetApp, “are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications.” These apps include things such as copilots, code assistants and digital human avatars.
The company also highlighted that NIM is supported by data center infrastructure software providers VMware, Nutanix, Red Hat and Canonical as well as AI tools and MLOps providers like Amazon SageMaker, Microsoft Azure AI and Domino Data Lab.
Global systems integrators, service delivery partners and other channel partners that have been cited by Nvidia as supporting NIM include Accenture, Deloitte, Infosys, Quantiphi, SoftServe, Tata Consultancy Services, World Wide Technology and Wipro.
NIM is available to businesses through the Nvidia AI Enterprise software suite, which costs $4,500 per GPU, per year. It’s also available to members of the Nvidia Developer Program, who can “access NIM for free for research, development and testing on their preferred infrastructure,” the company said.
In a March interview with CRN, Manuvir Das, who was Nvidia’s vice president of enterprise computing until he retired in October, said he expected NIM to increase demand for Nvidia AI Enterprise “quite dramatically” and serve as the “next scaling factor” for software revenue.
6. Nvidia Reveals Game-Changing Blackwell GPU Architecture
Nvidia revealed its next-generation Blackwell GPU architecture as the much-hyped successor to the AI chip giant’s Hopper platform at its GTC 2024 event.
At the event, the AI computing giant unveiled the first GPU designs to use the Blackwell architecture, which it said comes with “six transformative technologies for accelerated computing” that will “help unlock breakthroughs” in fields such as generative AI and data processing.
The first confirmed designs to use Blackwell include the B100 and the B200 GPUs, the successors to the Hopper-based H100 and H200 for x86-based systems, respectively. The B200 is expected to include greater high-bandwidth memory capacity than the B100.
The initial designs also include the GB200 Grace Blackwell Superchip, which, on a single package, connects two B200 GPUs with the company’s Arm-based, 72-core Grace CPU that has previously been paired with the H200 and H100.
In a liquid-cooled system with 18 GB200s, Nvidia said the system’s 36 Blackwell GPUs can provide up to 30 times faster large language model inference performance compared with an air-cooled system with 64 H100 GPUs.
Advancements contributing to this major uplift in performance include a second-generation Transformer Engine, support for a new four-bit floating point (FP4) format and a fifth-generation NVLink chip-to-chip interconnect.
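The FP4 format matters because halving the bits per value, relative to FP8, doubles the model parameters that fit in the same memory and bandwidth. Nvidia’s exact implementation isn’t spelled out here, but the sketch below decodes the commonly cited E2M1 layout (one sign bit, two exponent bits, one mantissa bit), assumed purely for illustration, to show how coarse a 16-code-point format is.

```python
# A rough sketch of the idea behind a four-bit floating point format.
# This assumes the commonly cited E2M1 layout (1 sign, 2 exponent,
# 1 mantissa bit) for illustration; Nvidia's exact format details
# are not confirmed by this article.

def e2m1_value(bits: int) -> float:
    """Decode a 4-bit E2M1 pattern into its real value (illustrative)."""
    sign = -1.0 if (bits >> 3) & 1 else 1.0
    exponent = (bits >> 1) & 0b11
    mantissa = bits & 1
    if exponent == 0:                      # subnormal: no implicit leading 1
        return sign * mantissa * 0.5
    return sign * (1.0 + mantissa * 0.5) * 2.0 ** (exponent - 1)

# All representable magnitudes: the format's entire dynamic range.
values = sorted({e2m1_value(b) for b in range(16)})
print(values)
# [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

# With only 16 bit patterns per weight, twice as many values fit in the
# same memory and bus bandwidth as FP8, which is where much of the
# claimed inference speedup comes from.
```

The trade-off is precision: with so few representable values, low-bit formats lean on techniques like per-block scaling to keep model accuracy acceptable.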
Nvidia is supporting these chips with the air-cooled DGX B200 system and the liquid-cooled DGX GB200 system, the latter of which is based on the GB200 NVL72 rack-scale system that contains 36 GB200 Superchips as well as the company’s BlueField-3 DPUs.
To enable high-bandwidth connections between these systems, Nvidia unveiled two new network platforms that deliver speeds of up to 800 Gbps: the Quantum-X800 InfiniBand platform and the Spectrum-X800 Ethernet platform.
5. Major Tech Vendors Extend Partnerships With Nvidia
Nvidia sought to continue its dominance in the AI computing space by unveiling renewed and extended partnerships with major tech vendors this year.
In the cloud infrastructure market, AWS, Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure and other providers unveiled plans to launch new services based on Nvidia’s Blackwell GPUs and DGX Cloud platform.
For example, AWS said it plans to build Project Ceiba, which the company said will become “one of the world’s fastest AI supercomputers,” by using Nvidia’s GB200 NVL72 systems, which are multi-node, liquid-cooled rack-scale servers that contain 36 of the AI chip giant’s upcoming GB200 Grace Blackwell Superchips as well as the company’s BlueField-3 DPUs.
Among server vendors, HPE unveiled a comprehensive generative AI solution portfolio co-branded as “Nvidia AI Computing By HPE,” Lenovo unveiled Lenovo AI Fast Start and other solutions based on Nvidia technologies, Dell unveiled the Dell AI Factory with Nvidia end-to-end AI enterprise solution, and Cisco Systems unveiled the Nvidia-based Cisco Nexus HyperFabric AI cluster.
Other OEMs that unveiled new Nvidia-based solutions include Supermicro, Gigabyte and Asus.
Elsewhere in the IT industry, Nvidia unveiled new partnerships with Equinix, ServiceNow, SAP, NetApp, Nutanix, IBM, Databricks, Snowflake and many others.
4. Nvidia Partners Start Shipping Blackwell Systems After Reports Of Issues
Major Nvidia partners such as Dell said they started shipping their first systems with Blackwell GPUs in mid-November after the chips and systems reportedly faced delays due to technical issues.
Dell said on Nov. 17 that it had started shipping to customers server racks based on the GB200 NVL72, Nvidia’s rack-scale server platform for GB200 Grace Blackwell Superchips that is also being used by other major OEMs and cloud service providers.
In Nvidia’s earnings call later that week, Huang pointed out Dell’s announcement and said that Blackwell systems are also being stood up by Oracle Cloud Infrastructure, Microsoft Azure and Google Cloud.
Huang made the comments in response to a Nov. 17 report by tech publication The Information, which detailed concerns from a few customers about Blackwell GPUs overheating in servers based on the GB200 NVL72 platform.
Responding to the question about The Information’s report, Huang noted that while Blackwell production is in “full steam,” with the company exceeding its previous revenue estimates, the engineering Nvidia does with OEM and cloud computing partners is “rather complicated.”
“But as you see from all of the systems that are being stood up, Blackwell is in great shape,” he said. “And as we mentioned earlier, the supply and what we’re planning to ship this quarter is greater than our previous estimates.”
On the same call, Nvidia CFO Kress said that Nvidia was “on track to exceed” its previous Blackwell revenue estimate of several billion dollars for the fourth quarter, which wraps up at the end of January, as its “visibility into supply continues to increase.”
In August, The Information reported that Nvidia was delaying the release of Blackwell GPUs by three or more months due to issues with the underlying architecture. This was corroborated by a separate report from semiconductor analysis firm SemiAnalysis, which said the main issue was around obstacles Nvidia faced in the implementation of an advanced semiconductor packaging technology in Blackwell.
3. Nvidia Surpasses Intel In Annual Revenue
When Nvidia reported its fourth-quarter earnings in February, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its GPUs driven by generative AI development.
The AI computing giant finished its 2024 fiscal year, which ended Jan. 28, with $60.9 billion in revenue, up 126 percent from the previous year.
Meanwhile, Intel finished its 2023 fiscal year, which ended in December, with $54.2 billion in sales, down 14 percent from the previous year.
According to CRN analysis in November, Nvidia is expected to finish its 2025 fiscal year this January with a staggering $128.6 billion in revenue, which would mean the company more than doubled annual sales for a second consecutive year. It would also be 64 percent higher than the combined full-year revenue forecast by Intel and AMD.
Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing—with a major emphasis on data centers, cloud computing and edge computing—then found itself last year at the center of a massive demand cycle due to hype around generative AI.
Intel, in the meantime, has lagged far behind Nvidia in winning the kind of adoption of its accelerator chips by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish. As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate most of the revenue for its Data Center and AI Group, and this area has suffered due to lower demand.
2. Nvidia Faces Growing Antitrust Scrutiny Across The Globe
Nvidia this year faced growing antitrust scrutiny from regulators in the U.S., China and Europe with new investigations into the AI computing giant’s practices.
Most recently, on Monday, China’s State Administration for Market Regulation said that it has launched an investigation into Nvidia for purported violations of the country’s anti-monopoly law and commitments the company made to gain China’s approval for its $7 billion acquisition of Mellanox Technologies in 2020.
An Nvidia spokesperson told Reuters that the company does its best to “provide the best products we can in every region and honor our commitments everywhere we do business.” The representative added that Nvidia is “happy to answer any questions” from regulators.
While Reuters reported that China’s antitrust investigation into Nvidia is “widely seen as a retaliatory shot” against the U.S. government’s latest export restrictions on semiconductor products to Chinese companies, Nvidia is nevertheless also facing scrutiny from regulators in the U.S. and European Union on potential anticompetitive practices.
In late October, the European Commission said it was investigating the potential anticompetitive implications of Nvidia’s $700 million deal to acquire Run:ai, an Israel-based company that provides GPU orchestration software. The regulator added that Nvidia would need its approval to complete the deal.
On Dec. 4, Reuters reported that the European Commission was asking Nvidia customers whether they were offered or required to buy GPU orchestration software bundled with other software or hardware, and whether they viewed such practices as anticompetitive.
Earlier in the year, in June, CNBC and other news publications reported that the U.S. Justice Department had opened an antitrust investigation into Nvidia’s business practices.
1. Nvidia Became World’s Most Valuable Company Multiple Times
Nvidia became the world’s most valuable company at least three times this year, reflecting its elevated status as one of the most critical providers of AI computing infrastructure.
The AI computing giant hit the milestone in June, October and November, surpassing the market capitalizations of Microsoft, Apple and other major companies.
While Nvidia currently sits below Apple and Microsoft with the world’s third-largest market capitalization, it remains above the $3 trillion mark alongside the iPhone maker and Microsoft.
Nvidia’s stock price is up 179 percent from the beginning of the year.