Introduces New Hyperscale NAS Category to Remove Storage Bottlenecks and Provide the High-Performance Data Pipeline Needed for GPU Computing
Hammerspace, the company orchestrating the Next Data Cycle, today unveiled the high-performance NAS architecture needed to address the requirements of broad-based enterprise AI, machine learning and deep learning (AI/ML/DL) initiatives and the widespread rise of GPU computing both on-premises and in the cloud. This new category of storage architecture – Hyperscale NAS – is built on the tenets required for large language model (LLM) training and provides the speed to efficiently power GPU clusters of any size for GenAI, rendering and enterprise high-performance computing.
“Most computing sites are faced with broad workload characteristics needing a storage solution with enterprise features, distance/edge, classical HPC, interactive, and AI/ML/data analytics capabilities all at large scale. There is a rapidly growing need for a distributed and parallel data storage architecture that covers this broad space so that sites don’t face the inefficiencies of supporting many different solutions,” said Gary Grider, High Performance Computing Division Leader at Los Alamos National Laboratory. “With the recent and near-future developments to the NFS standard, the open source implementation and acceptance into Linux, NFS has the features that enable a storage architecture to service this growing variety and scale of workloads well. We at LANL are pleased to see an industry partner contribute to Linux/Internet standards that address broad needs and scales.”
Legacy NAS Architectures Will Never Meet the Demands of AI Training at Scale
Building and training effective AI models requires massive performance to feed the GPU clusters that process the data. The workload profile is varied, mixing streaming of large files, read-intensive applications, and random read-write I/O for checkpointing and scratch space. Traditional scale-out NAS architectures – even all-flash systems – can’t meet these applications’ performance or scale requirements. Delivering consistent performance at this scale has previously only been possible with HPC parallel file systems, which are complex to deploy and manage and don’t meet enterprise requirements.
A Hyperscale NAS architecture provides the best foundation for training effective models, speeding time-to-market and time-to-insight, and ultimately deriving business value from data.
“Enterprises pursuing AI initiatives will encounter challenges with their existing IT infrastructure in terms of the tradeoffs between speed, scale, security and simplicity,” said David Flynn, Hammerspace Founder and CEO. “These organizations require the performance and cost-effective scale of HPC parallel file systems and must meet enterprise requirements for ease of use and data security. Hyperscale NAS is a fundamentally different NAS architecture that allows organizations to use the best of HPC technology without compromising enterprise standards.”
Hyperscale NAS is Proven as the Fastest File System for AI Model Training at Scale
The Hyperscale NAS architecture has now been proven to be the fastest file system in the world for enterprise and web-scale AI training. It is in production with systems built on approximately 1,000 storage nodes, feeding up to 30,000 GPUs at an aggregate performance of 80 terabits per second over standard Ethernet and TCP/IP.
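For context, a rough back-of-envelope breakdown of those figures is sketched below. It assumes the 80 Tbps is an aggregate evenly distributed across the reported node and GPU counts; this is an illustrative assumption, not a published per-device specification.

```python
# Back-of-envelope breakdown of the reported aggregate figures.
# Assumes even distribution across nodes and GPUs (an illustrative
# assumption, not a published per-device specification).

AGGREGATE_TBPS = 80      # reported aggregate throughput, terabits/sec
STORAGE_NODES = 1_000    # approximate storage node count
GPUS = 30_000            # GPUs fed by the system

per_node_gbps = AGGREGATE_TBPS * 1_000 / STORAGE_NODES   # 80 Gbps per node
per_gpu_gbps = AGGREGATE_TBPS * 1_000 / GPUS             # ~2.7 Gbps per GPU
per_gpu_gbytes = per_gpu_gbps / 8                        # ~0.33 GB/s per GPU

print(f"~{per_node_gbps:.0f} Gbps per storage node")
print(f"~{per_gpu_gbps:.1f} Gbps (~{per_gpu_gbytes:.2f} GB/s) per GPU")
```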
Hyperscale NAS is Needed for Enterprise GPU Computing at Any Scale
Hyperscale NAS brings hyperscaler strategies to the enterprise. Just as Amazon Web Services (AWS) built S3 for large-scale, efficient storage and it became the model for enterprise object storage, the Hyperscale NAS architecture that hyperscalers use to train large language models (LLMs) is now making its way into enterprises for GPU computing and generative AI model training, bringing web-scale methods to organizations of all sizes.
The Hammerspace Hyperscale NAS architecture is ideal for both hyperscalers and enterprises: it does not require proprietary client software, efficiently scales to meet the demands of any number of GPUs during training and inference, uses existing Ethernet or InfiniBand networks and existing commodity or third-party storage infrastructure, and provides a complete set of data services to meet compliance, security and data governance requirements.
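Because the architecture relies on the standard NFS client already in Linux rather than a proprietary agent, access from a GPU node looks like an ordinary NFS v4.2 mount. The sketch below is a minimal illustration under that assumption; the server name, export path and mount point are hypothetical placeholders, and a Linux host with nfs-utils and root privileges is assumed.

```python
# Minimal sketch: mounting a Hammerspace share with the stock Linux NFS
# v4.2 client -- no proprietary client software is involved.
# The server name, export path, and mount point are illustrative
# placeholders, not real endpoints. Requires root and nfs-utils.
import subprocess

SERVER_EXPORT = "hs-anvil.example.com:/data"   # hypothetical export
MOUNT_POINT = "/mnt/hammerspace"               # hypothetical mount point

subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2", SERVER_EXPORT, MOUNT_POINT],
    check=True,
)

# From here, training jobs read and write the share with ordinary
# POSIX file I/O, e.g. open(f"{MOUNT_POINT}/dataset/shard-000.tar", "rb").
```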
Hammerspace Hyperscale NAS is Certified for NVIDIA GPUDirect Storage
The Hammerspace Hyperscale NAS architecture has completed NVIDIA’s GPUDirect Storage support validation process. This certification allows organizations to leverage Hammerspace software to unify unstructured data and accelerate data pipelines with NVIDIA’s GPUDirect® family of technologies. By deploying Hammerspace in front of existing storage systems, any storage system can be presented through Hammerspace as GPUDirect Storage, delivering the high-throughput, low-latency performance needed to keep NVIDIA GPUs fully utilized.
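As an illustration of what this looks like from an application, the sketch below reads a file from a Hammerspace-backed mount directly into GPU memory using NVIDIA’s cuFile API through the RAPIDS KvikIO Python bindings. The mount path and file name are hypothetical, and whether the transfer actually uses GPUDirect Storage (rather than KvikIO’s POSIX compatibility fallback) depends on how the host’s driver and cuFile stack are configured.

```python
# Minimal sketch: reading training data from a (hypothetical) Hammerspace
# mount straight into GPU memory with cuFile/GPUDirect Storage via the
# RAPIDS KvikIO bindings. If GPUDirect Storage is not configured on the
# host, KvikIO falls back to a POSIX compatibility path.
import os

import cupy as cp
import kvikio

PATH = "/mnt/hammerspace/dataset/shard-000.bin"   # hypothetical file path

nbytes = os.path.getsize(PATH)          # size the GPU buffer to the file
buf = cp.empty(nbytes, dtype=cp.uint8)  # destination buffer in GPU memory

f = kvikio.CuFile(PATH, "r")
bytes_read = f.read(buf)                # DMA (or fallback) into the buffer
f.close()

print(f"read {bytes_read} of {nbytes} bytes into GPU memory")
```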
Supporting Quotes
“As enterprises and government organizations increasingly harness the power of AI, the importance of efficiently managing file data across heterogeneous infrastructure has never been more critical. Our partnership with Hammerspace represents a pivotal step forward, enabling Infinidat’s customers to seamlessly incorporate file-based workloads into their trusted InfiniBox enterprise storage environments,” stated Eric Herzog, Chief Marketing Officer at Infinidat. “The introduction of the Hyperscale NAS solution into the Hammerspace Global Data Environment is a testament to their commitment to meeting our customers’ evolving needs. This solution not only complements Infinidat’s unmatched performance, superior data protection, and cyber storage resilience but also integrates a global namespace, ensuring our customers have access to a comprehensive, high-performance data management platform. Together, Infinidat and Hammerspace are setting a new standard for enterprise storage solutions tailored to the demands of the modern data landscape.”
“We have traditionally separated Scale-out File Systems, commonly known as parallel file systems, from NAS precisely due to the nature of their performance for very large HPC/AI environments. As we enter this next generation of AI, new technologies, particularly in data infrastructure, are needed,” said Camberley Bates, VP and Practice Lead at The Futurum Group. “Hammerspace is not only bringing distinct data management but is now enhancing in-place NAS systems to address this very large-scale environment that will be commonplace for all organizations.”
“Many storage systems involve multiple layers of communication and data transfer. By embedding NFS directly into an Ethernet-attached SSD array, many of these layers are bypassed, resulting in significantly lower latency,” said Thomas Isakovich, CEO at Nimbus Data. “We are excited to work with Hammerspace as we partner to continue to deliver previously unmatched low latency and data path speed to high-performance applications.”