13 New AI-Focused Technologies For Nvidia AI Ecosystem At Nvidia GTC 2025

While Nvidia GTC 2025 is centered on the advances Nvidia has made in GPUs, DPUs and other areas, the conference has evolved into a top gathering for advances in AI and related technologies, and the go-to venue for other vendors to show how they are advancing the AI ecosystem.

This week’s Nvidia GTC 2025 conference in San Jose, Calif., is the latest edition of an annual event that started out as a way for Nvidia to highlight its tech prowess in GPUs and other technologies but has become a leading conference for AI in general.

That is due not only to Nvidia CEO Jensen Huang bringing some of the biggest names in the IT industry into his keynotes to discuss their AI initiatives and their partnerships with his company, but also to the large number of companies bringing their latest AI-related offerings to booths in the sprawling solution pavilion.

These vendors run the gamut from the largest server, storage, networking and AI PC companies to small storage startups, all of which hope to catch the attention of solution providers and businesses looking to learn how they can transform their businesses to take advantage of AI.

[Related: Cisco, Nvidia Partner On Joint Architecture For The Creation Of AI Data Centers]

The offerings include advances to the Dell AI Factory with Nvidia, HPE Private Cloud AI, Supermicro rack-scale systems built on Nvidia’s HGX B200, and AI servers from MSI. Other companies, including Vultr, Anaconda, Penguin Solutions, DeepTempo, illumex, Balbix, DataStax, H2O.ai and DataRobot, introduced a variety of hardware and software technologies for advancing AI.

Storage vendors had a major presence at Nvidia GTC 2025 as well: Graid Technology, Kioxia, Pure Storage, Cohesity, IBM, DDN, Hitachi Vantara, Pliops, MinIO, NetApp, Vast Data and Solidigm all showed their latest AI-focused hardware and software. CRN has also highlighted new storage offerings from GTC 2025.

Here’s a look at some of the hottest AI-focused products taking the spotlight at GTC 2025.

Anaconda + Nvidia GPU-Powered Jupyter Notebooks

Anaconda, Austin, Texas, has expanded its partnership with Nvidia to further accelerate data and AI processing by distributing CUDA libraries as part of the Anaconda Platform for enterprises and by improving access to GPUs from Jupyter Notebook web-based development environments. Connecting Jupyter users from local or on-premises workflows to cloud or enterprise GPU resources through an intuitive interface allows teams of all technical proficiency levels to develop, test and optimize AI and machine learning workflows on high-performance computing infrastructure without the traditional complexity of resource management. These GPU-powered notebooks offer a unified environment where enterprises and developers can build AI applications, leverage the latest CUDA libraries, and maintain security and compliance guidelines.
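For context on what that looks like in practice, here is a minimal sketch of a GPU-backed notebook cell. The CuPy package and the commands shown are illustrative assumptions rather than part of Anaconda's announcement, and a CUDA-capable GPU is assumed to be reachable from the conda environment.

```python
# Minimal sketch: verifying GPU access and running a CUDA-accelerated
# computation from a Jupyter notebook cell, assuming CuPy has been installed
# into the conda environment (e.g. `conda install -c conda-forge cupy`).
import numpy as np
import cupy as cp

# Confirm the notebook can see a CUDA-capable GPU.
print("GPUs visible:", cp.cuda.runtime.getDeviceCount())

# Move data to the GPU, compute, and bring the result back to the host.
x_cpu = np.random.rand(1_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)          # host -> device copy
result = cp.sqrt(x_gpu).sum()      # runs on the GPU via CUDA libraries
print("Sum of square roots:", float(result))
```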

Penguin Solutions’ ICE ClusterWare Software Platform And ICE ClusterWare AIM Service

Fremont, Calif.-based Penguin Solutions’ ICE ClusterWare is a hardware-agnostic software and service offering that manages, scales and optimizes AI, high-performance computing and data infrastructure. With multitenancy support, streamlined workflows and enhanced controls, it helps enterprises build Intelligent Compute Environments, which the company said are fully optimized AI ecosystems that scale seamlessly. The ICE ClusterWare software platform combines open-source, industry-standard and third-party tools with Penguin Solutions’ own software to let IT teams build business-specific AI solutions. The ICE ClusterWare AIM service delivers automated remediation, prescriptive maintenance and operational efficiency at scale to push cluster performance and uptime above what the company said is a typical industry level of around 50 percent.

DeepTempo Tempo

The Tempo cybersecurity early warning system from San Francisco-based DeepTempo is available as a Snowflake Native App in the Snowflake Marketplace. It is based on a foundation Log Language Model (LLGM) built and trained by DeepTempo to understand predictably structured or semi-structured data such as log files, detecting anomalies in network traffic and improving threat detection, isolation and remediation. Tempo’s fine-tuning capabilities let organizations adapt the models to their specific environments for improved accuracy and relevance in detecting threats. Users pay for the enhanced protection and threat isolation through their Snowflake account, and Tempo runs within their environment.

illumex Omni

illumex Omni from New York-based illumex is an intelligent conversational interface that helps modernize business data access. Already integrated with Slack, it was added to Microsoft Teams in February to let users explore data and analytics in plain English with fully explainable, traceable and governance-compliant responses. Omni’s deep understanding of business semantics helps ensure accurate metadata mapping and contextual responses. Powered by illumex’s Generative Semantic Fabric (GSF), it translates user prompts into SQL queries or commands for other runtime environments, seamlessly orchestrating context across structured data sources and LLMs. The company said this delivers instant, deterministic insights without hallucinations, enhancing enterprise data accessibility and governance.
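The underlying pattern, grounding an LLM in schema context and having it emit a SQL query, can be sketched generically. The endpoint, model name and schema below are hypothetical placeholders for illustration only, not illumex's Generative Semantic Fabric.

```python
# Hypothetical sketch of the general text-to-SQL pattern such tools build on:
# give the model the relevant schema/semantic context, then ask it to emit SQL.
# Endpoint, model name and schema are assumptions, not illumex's implementation.
from openai import OpenAI

client = OpenAI()  # assumes an API key or compatible endpoint is configured

schema_context = """
Table orders(order_id INT, customer_id INT, order_date DATE, total NUMERIC)
Table customers(customer_id INT, region TEXT)
"""

question = "What was total revenue by region last quarter?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Translate the user's question into a single SQL query "
                    "using only this schema:\n" + schema_context},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)  # candidate SQL to review before running
```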

Supermicro Nvidia HGX B200 Rack-Scale Solutions

San Jose, Calif.-based Supermicro's new Nvidia HGX B200 8-GPU 10U systems use next-generation liquid-cooling and air-cooling technology. They are available in 42U, 48U or 52U rack-scale configurations, scaling from 64 Nvidia Blackwell GPUs in a 42U rack up to 12 systems with 96 Nvidia Blackwell GPUs in a 52U rack. The air-cooled Nvidia HGX B200 systems feature a redesigned chassis with expanded thermal headroom to accommodate eight 1,000W TDP Blackwell GPUs. Four of the new 10U air-cooled systems can be installed and fully integrated in a rack with the same density as the previous generation, while providing up to 15X higher inference and 3X higher training performance.

Balbix BIX GenAI Cybersecurity Assistant

BIX is San Jose, Calif.-based Balbix’s GenAI assistant for cyber risk and exposure management. Security teams can ask BIX about critical vulnerabilities or exposures, then take action or generate reports. Instead of manually analyzing vulnerabilities, creating dashboards and adding context, BIX automates those tasks through natural-language queries. It generates charts, graphs and trend analyses while cross-referencing external data such as newly published CVEs and regulations. Users can share a single conversation and hand off tasks without losing context. The result is faster remediation and reduced workload for security and IT teams.

DataStax Astra DB Hybrid Search

Astra DB Hybrid Search from Santa Clara, Calif.-based DataStax enhances retrieval-augmented generation (RAG) systems. This feature combines vector search (which focuses on semantic understanding) and lexical search (which ensures exact keyword matching) to improve search relevance by 45 percent, the company said. Accelerated by the Nvidia NeMo Retriever reranking microservices, part of Nvidia AI Enterprise, Astra DB Hybrid Search leverages AI-driven reordering of search results using LLMs to improve accuracy and relevance. Hybrid search aims to improve search, recommendations and personalization in AI applications, helping customers achieve higher precision in their results, which is important for deploying enterprise-level AI systems effectively.
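DataStax performs the fusion and the NeMo Retriever reranking server-side, but the core idea of merging a lexical ranking with a vector ranking can be illustrated with a small, self-contained sketch. The reciprocal rank fusion approach and the document IDs below are illustrative assumptions, not DataStax's implementation.

```python
# Illustrative sketch of the hybrid-search idea: merge a lexical (keyword)
# ranking and a vector (semantic) ranking with reciprocal rank fusion, after
# which a reranker (e.g. an LLM-based one) could reorder the fused list.
def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked lists of document IDs (best first)."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical_hits = ["doc7", "doc2", "doc9"]   # exact keyword matches
vector_hits = ["doc2", "doc5", "doc7"]    # semantically similar passages

fused = reciprocal_rank_fusion([lexical_hits, vector_hits])
print(fused)  # ['doc2', 'doc7', 'doc5', 'doc9'] — candidates for reranking
```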

HPE Private Cloud AI

Spring, Texas-based Hewlett Packard Enterprise used Nvidia GTC 2025 to expand HPE Private Cloud AI, its turnkey AI platform, with a new developer system that provides an instant AI development environment to prove and validate projects faster and more efficiently. The end-to-end AI software platform now includes rapid deployment of prevalidated Nvidia blueprints, seamless edge-to-cloud data access via HPE Data Fabric, comprehensive GPU optimization with HPE OpsRamp, and new AI services. HPE Private Cloud AI's growing ecosystem of validated partners, called Unleash AI, was also expanded to include the CrewAI Python framework, bringing prevalidated multi-agent automation to the platform.
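As a rough illustration of the multi-agent pattern CrewAI brings to the platform, here is a minimal sketch using the open-source crewai package. The roles, goals and tasks are made up for this example, and an LLM backend is assumed to be configured separately.

```python
# Minimal sketch of a CrewAI multi-agent flow: two agents, two sequential tasks.
# Agent roles and task descriptions are illustrative; CrewAI expects an LLM
# backend (e.g. an OpenAI-compatible endpoint) to already be configured.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",
    goal="Summarize recent findings on a topic",
    backstory="You gather and condense information for other agents.",
)
writer = Agent(
    role="Report writer",
    goal="Turn research notes into a short executive summary",
    backstory="You write clear, concise summaries for business readers.",
)

research_task = Task(
    description="Collect three key points about deploying RAG in the enterprise.",
    expected_output="A bulleted list of three points.",
    agent=researcher,
)
write_task = Task(
    description="Write a one-paragraph summary based on the research notes.",
    expected_output="A single paragraph.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
print(crew.kickoff())
```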

Dell AI Factory With Nvidia

Round Rock, Texas-based Dell Technologies unveiled end-to-end advancements to the Dell AI Factory with Nvidia to simplify AI deployment for enterprises and help accelerate AI innovation at any scale. Updates include additions to the Dell Pro Max AI PC portfolio, new Dell PowerEdge servers with improved GPU capacity, and the launch of the Dell AI Data Platform with Nvidia, which combines Dell enterprise storage with Nvidia accelerated computing, networking and AI software for continuous data processing and robust data management services for AI deployment. The Dell AI Factory with Nvidia is also adding new solutions and services to power and streamline AI deployments.

H2O.ai Enterprise LLM Studio

H2O.ai, a Mountain View, Calif.-based developer of open-source GenAI and Predictive AI platforms, used Nvidia GTC 2025 to unveil its H2O Enterprise LLM Studio running on Dell infrastructure. This new offering provides fine-tuning-as-a-service for businesses to securely train, test, evaluate and deploy domain-specific AI models at scale using their own data. Built by experts in the Kaggle data science and machine learning community, Enterprise LLM Studio automates the LLM life cycle from data generation and curation to fine-tuning, evaluation and deployment. It supports open-source, reasoning and multimodal LLMs such as DeepSeek, Llama, Qwen, H2O Danube, and H2OVL Mississippi. By distilling and fine-tuning these models, H2O.ai customers can see reduced costs and improved inference speeds.
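Fine-tuning itself is a standard workflow, and the sketch below shows a generic parameter-efficient (LoRA) fine-tuning job of the kind Enterprise LLM Studio automates as a service. The model name, toy dataset and hyperparameters are placeholders, not H2O.ai's actual pipeline.

```python
# Generic LoRA fine-tuning sketch on a tiny stand-in for curated domain data.
# Model name, data and hyperparameters are placeholders for illustration only.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "h2oai/h2o-danube3-500m-base"  # placeholder; any small causal LM works
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with small trainable LoRA adapters (Llama-style attention).
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Tiny stand-in for a curated domain dataset.
texts = ["Q: What is our refund window? A: 30 days.",
         "Q: Which tier includes SSO? A: Enterprise."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda r: tok(r["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```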

MSI Nvidia MGX-Based AI Servers

Taiwan-based server manufacturer MSI showcased its new 4U and 2U AI platforms built on the Nvidia MGX reference architecture. Designed to meet the diverse demands of AI, high-performance computing and data-intensive applications, MSI’s MGX-based AI servers deliver scalable performance and resilience for enterprise and cloud data centers, using a modular, building-block approach to optimize for AI workloads and drive next-level high-performance computing. The new 4U CG480-S5063 server targets advanced AI workloads including LLMs, deep learning training and complex data analytics, and features dual Intel Xeon 6 processors. The 2U CG290-S3063 AI server is aimed at AI and high-performance computing workloads with a single-socket Intel Xeon 6 processor.

DataRobot Enterprise AI Suite x Nvidia AI Enterprise

Cloud customers can now use Boston-based DataRobot’s Enterprise AI Suite, fully integrated and preinstalled with Nvidia AI Enterprise, to speed up agentic AI development and delivery. The integration includes a new gallery of Nvidia NIM microservices and Nvidia NeMo frameworks, including the new Nvidia Llama Nemotron reasoning models. The unified offering delivers a fully validated AI stack with ready-to-use building blocks and built-in security, support and scalability. By simplifying and streamlining development, monitoring and governance, it lets AI agents be deployed into production rapidly, helping to reduce time and overhead costs.
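Individual NIM microservices expose an OpenAI-compatible API, so the gallery's building blocks look roughly like the sketch below at the code level. The endpoint URL and model ID are assumptions for illustration; in the DataRobot integration these services come preinstalled and managed.

```python
# Sketch of calling an Nvidia NIM microservice through its OpenAI-compatible
# API. Base URL and model ID are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-needed-for-local-nim",
)

resp = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # placeholder model ID
    messages=[{"role": "user",
               "content": "Outline the steps of an agentic RAG pipeline."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```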

Vultr Cloud GPU Accelerated By Nvidia HGX B200

Privately held West Palm Beach, Fla.-based cloud infrastructure company Vultr used Nvidia GTC 2025 to announce that it is among the first cloud providers to enable early access to the Nvidia HGX B200. Vultr Cloud GPU, accelerated by Nvidia HGX B200, will provide training and inference support for enterprises looking to scale AI-native applications via Vultr’s 32 cloud data center regions worldwide.
