Supercomputing 2024: Baker’s Dozen Of Hot Storage Products
The Supercomputing 2024 conference brought together a wide range of AI and high-performance computing hardware and software, with storage technology innovations taking a big share of the spotlight.
Storage Tech For A Supercomputing AI, HPC World
Supercomputing 2024, formally the International Conference for High Performance Computing, Networking, Storage and Analysis, or SC24 for short, last week highlighted a wide range of technologies aimed at helping build the fastest high-performance computing infrastructure.
The compute, networking and storage technologies spotlighted during SC24 came at a good time for the IT industry, which is coalescing quickly around the need to build new high-performance infrastructure to meet AI and GenAI requirements.
This is especially important for the data storage industry. Before businesses can take advantage of AI and GenAI, they need to organize and deploy their vast data stores to make sure that data can be presented to the new AI workloads.
[Related: Storage 100: The Digital Bridge Between The Cloud And On-Premises Worlds]
It is a business that will only grow. Research firm Precedence Research in July estimated the global AI-powered storage market will hit $217.3 billion by 2033, up from $28.7 billion in 2024, as enterprises look for better ways to manage their data for AI use.
Visitors to SC24 had the chance to see high-performance storage hardware systems that either were based on GPUs or worked with GPU-powered servers to prepare data for AI use, software for managing that data, and software that connects storage to AI systems.
There is a lot of innovation in the storage industry aimed at meeting AI and GenAI requirements. Read on to see a baker’s dozen of storage technologies helping make AI more ubiquitous.
Vdura VeLO
The latest Vdura Data Platform from San Jose, Calif.-based Vdura is a major modernization of previous releases, with a move to a fully parallel, microservices-based architecture, a new flash-optimized metadata engine and an enhanced object storage layer. Together, these enhancements increase the storage system’s efficiency, scalability and durability across on-premises, cloud and hybrid environments, the company said.
Vdura also introduced VeLO, a new key-value store used within the platform's director layer for handling small files and metadata operations. Optimized for flash, it supports a diverse range of AI workloads and delivers up to 2 million IOPS per instance. The system supports what Vdura said is an infinitely scalable number of VeLO instances in the same global namespace.
Komprise Intelligent Data Management
Komprise, Campbell, Calif., in October enhanced its Intelligent Data Management platform for unstructured data management with an improved Storage Insights data explorer that allows users to drill into their data stores for analytics on size, hot vs. cold data and other metrics. Users can now browse both directories and files within a single navigation pane and see data-centric and storage-centric metrics in one place. Komprise now automates the creation of destination shares to reduce manual effort in large-scale migrations. Komprise Smart Data Workflows allows users to scan datasets for any text keyword or custom expression and have the results tagged in the Komprise Global File Index.
Supermicro Petascale Storage Server with Nvidia Grace Superchip
San Jose, Calif.-based Supermicro used SC24 to introduce what it termed the industry’s first storage server using the Nvidia Grace Superchip. The Supermicro Petascale ARS-121L-NE316R is a 1U all-flash storage server using Nvidia’s dual-die Grace CPU with 144 Arm cores and up to 960 GB of LPDDR5X on-package memory. The system features 16 hot-swap E3.S NVMe bays and supports Nvidia ConnectX-7 NICs and BlueField-3 DPUs. Supermicro said the system delivers twice the performance per watt of traditional servers, while the integrated high-bandwidth LPDDR5X memory provides 1 TBps of memory bandwidth to boost performance for software-defined storage applications. Software partner Weka showcased the system as part of its SC24 presence.
Domino Volumes For NetApp Ontap
San Francisco-based Domino Data Lab’s Domino Volumes for NetApp Ontap (DVNO) allows enterprises using NetApp Ontap and Domino to rapidly access and leverage their NetApp data anywhere, across any environment, without DevOps overhead. Depending on the use case, that efficient data management can help cut costs and processing times by up to 50 percent, the company said. One-click creation and management capabilities in Domino enable users to create, share and control access to storage volumes powered by NetApp Ontap that run either on-premises on NetApp storage systems or on Amazon Web Services, Google Cloud or Microsoft Azure.
Hammerspace Global Data Platform Software v5.1
The latest version of the Hammerspace Global Data Platform software from San Mateo, Calif.-based Hammerspace introduces Tier 0 storage to help improve GPU computing and storage efficiency. It transforms local NVMe storage on GPU servers into ultra-fast, persistent shared storage while seamlessly integrating it into the platform. Tier 0 delivers data directly to GPUs at local NVMe speeds. By activating otherwise “stranded” NVMe storage, this technology helps enable faster, more efficient workflows, particularly for data-intensive tasks. “Tier 0 sets a new standard in GPU computing, bridging the gap between storage speed and accessibility,” the company said.
HPE Cray Supercomputing Storage Systems E2000
The Hewlett Packard Enterprise E2000 is a high-performance storage system designed for large-scale supercomputers. It more than doubles input/output (I/O) performance compared with the previous generation, HPE said. The E2000 is based on the open-source Lustre file system and helps improve the utilization of both CPU-based and GPU-based compute nodes by reducing idle time during I/O operations. It is scheduled to be generally available in early 2025.
Quantum DXi9200
The DXi9200 is the latest generation of San Jose, Calif.-based Quantum's flagship DXi9000 Series hybrid (flash + dense disk) data protection appliances, designed for scalable, efficient backup and recovery services for large organizations. It is targeted at helping organizations get ahead of the continuing threat of ransomware attacks with a comprehensive and proactive approach to securing data and copies, continuously validating recovery operations and quickly recovering in case of attack. Features such as highly optimized data reduction, replication and cloud tiering, plus all-inclusive software, capacity-on-demand licensing and flexible subscription service options help meet those challenges while lowering costs and increasing IT efficiency, the company said.
DDN A3I Data Platform
For Chatsworth, Calif.-based DDN, SC24 was an opportunity to highlight two recent moves related to AI and high-performance computing. The company unveiled big upgrades to its A3I data platform, including a new model, the AI400X3, which delivers a 60 percent performance boost for AI and HPC applications. DDN A3I data intelligence platforms use data utilization, AI and advanced analytics to increase GPU infrastructure productivity. The company also said it is supporting xAI’s Project Colossus supercomputer, which starts at 100,000 Nvidia GPUs with plans to scale to 200,000 GPUs for AI workloads.
Weka Storage Offering For Nvidia Grace CPU Superchip
At SC24, Campbell, Calif.-based Weka introduced what it called the industry’s first high-performance storage offering for the Nvidia Grace CPU Superchip. It combines Weka’s AI-native data platform with Supermicro's new Petascale storage server powered by Arm Neoverse V2 cores. It accelerates AI and HPC workloads with up to 10X faster AI model training and improves GPU stack efficiency by up to 50X, all while lowering energy consumption and footprint, Weka said. The technology is slated to be available early next year.
Qumulo’s Cloud Native Qumulo
Qumulo used SC24 to unveil the general availability of Cloud Native Qumulo (CNQ) on AWS and Microsoft Azure, calling it the world’s first cloud-native unstructured data system. The Seattle-based company also said Cloud Native Qumulo on AWS achieved over 1 TBps of throughput and over 1 million IOPS using standard Network File System clients. A fully cloud-native offering, CNQ can help disrupt the economics of cloud workflows with pricing up to 80 percent lower than legacy cloud-based file offerings while providing the flexibility to meet the performance and capacity needs of virtually any file-based application, Qumulo said.
NetApp Joins the Vultr Cloud Alliance
Privately held cloud computing platform Vultr said at SC24 that storage and cloud technology developer NetApp has joined the Vultr Cloud Alliance, a partnership program that brings together industry-leading technology for building composable AI cloud services. Customers of West Palm Beach, Fla.-based Vultr’s global network of cloud data centers can now take advantage of NetApp’s enterprise-grade data management features while leveraging Vultr’s predictable pricing and high-performance infrastructure. Vultr said the combined offering is aimed at organizations in data-intensive industries looking for specific optimizations for AI model training, high-performance computing and research workflows.
Pure Storage GenAI Pod And FlashBlade//S500 With Nvidia DGX SuperPOD Certification
Santa Clara, Calif.-based Pure Storage’s GenAI Pod is a full-stack AI system that allows organizations to accelerate AI-powered innovation and deployment. The GenAI Pod’s one-click deployment and streamlined day-2 operations help simplify complex setups while reducing the time, cost and specialty technical skills required to deploy GenAI projects, the company said. Initial applications for the GenAI Pod include drug discovery, trade research, investment analysis and RAG with agentic frameworks for semantic search, knowledge management and chatbots. In addition, Pure Storage’s FlashBlade//S500 Ethernet storage is now certified with Nvidia DGX SuperPOD to help deliver high-performance, energy-efficient storage for large-scale AI deployments.
Hitachi iQ With HGX
Hitachi Vantara, Santa Clara, Calif., used SC24 to introduce Hitachi iQ with Nvidia HGX, an end-to-end technology stack of scalable infrastructure designed to keep GPUs saturated with data and decrease the time needed to gain insights from that data. It offers Nvidia H100 and H200 Tensor Core GPUs, as well as the Nvidia AI Enterprise cloud-native software platform tailored for high-performance AI workloads for developing and deploying GenAI applications. Also included is the updated Hitachi Content Software for File platform with PCIe Gen 5, AMD EPYC processor technology, and high-performance networking with Nvidia ConnectX-7 400-Gbps InfiniBand or Ethernet interface cards. Also featured is high-density object storage integrated with Hitachi Content Software for File to scale compute and storage independently depending on data sizes, data types and workloads.