AI And Nvidia GPU Provider Lambda Raises $480M To Boost LLM Innovation, Cloud Platform Development

“We’ll build more software tools that delight AI developers and deploy more GPUs to meet the massive customer demand,” said Lambda CEO Stephen Balaban regarding his company’s new $480 million investment from Nvidia and others.

AI GPU and cloud standout Lambda has raised $480 million in a Series D funding round, with investment from Nvidia and other backers, to fuel innovation around Lambda’s Cloud Platform and Lambda Chat, which hosts the DeepSeek-R1 AI model.

“We’ll build more software tools that delight AI developers and deploy more GPUs to meet the massive customer demand,” said Lambda CEO and co-founder Stephen Balaban in a blog post about his company’s plans for the new $480 million investment. “The AI revolution is in full swing, and Lambda is here to power it.”

In February 2024, the company raised $320 million during its Series C funding round.

[Related: AWS, Microsoft, Google Fight For $90B Q4 2024 Cloud Market Share]

Lambda is a top Nvidia processor provider that made CRN’s 20 Hottest AI Cloud Companies of 2024 list.

The San Jose, Calif.-based AI company won Nvidia’s AI Excellence Partner of the Year award in 2024.

Balaban said the new $480 million investment will help Lambda scale both infrastructure and software—including Lambda’s Cloud Platform—as well as enable AI developers to train, fine-tune, and deploy models faster and easier than ever before.

“We’ll also continue to develop Lambda Chat, which hosts DeepSeek-R1 and many other open-source models,” the Lambda CEO said.

Lambda says it offers the world’s least expensive GPU cloud for AI training and inference. For example, Lambda offers Nvidia H100s and 3,200 Gbps of InfiniBand for $1.89 per hour.

Lambda’s $480 Million Investment

The Series D investment round was co-led by Andra Capital and SGW with participation from new investors Nvidia, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners and others.

Lambda’s portfolio ranges from on-premises GPU hardware to hosted GPUs in the cloud. Its hardware and private cloud business serves more than 5,000 customers across industries such as manufacturing, healthcare, pharmaceuticals and financial services, as well as the U.S. government.

As a top Nvidia processor provider, Lambda offers fast access to the latest GPUs and architectures for training, fine-tuning and inferencing of generative AI and large language models (LLMs).

“Over the last twelve months, AI has become more democratized because of two forces: open source and LLM reasoning. We saw dozens of high-quality open models, including Llama, DeepSeek-R1, and Mochi 1,” said Balaban. “These models put state-of-the-art AI in the hands of every person, company, and research institution. Open source accelerates progress across the field.”

Balaban said open-source reasoning models allow anybody to contribute to the progress of AI, and that no company is better positioned than Lambda to provide the compute capabilities behind that progress.

Lambda Innovation History

Founded in 2012, Lambda quickly launched an internal GPU cloud to support AI image editing products.

In 2017, the company introduced the Lambda Quad Deep Learning GPU workstation and the Lambda Blade GPU server, dubbing it the world’s first plug-and-play deep learning supercomputer under $20,000.

In 2018, Lambda launched the Lambda GPU Cloud and Lambda Stack, an AI software repository.

In 2024, the company unveiled its 1-Click Clusters, which give AI developers instant access to Nvidia H100 Tensor Core GPU clusters with Nvidia InfiniBand networking.

“We envision a future where the impact of AI is so substantial that it’s used daily by every man, woman, and child on the planet,” the Lambda CEO said. “A world of one person, one GPU.”
