Hammerspace Sets New Records in MLPerf Storage v1.0 Benchmark with New Tier 0 Storage

Hammerspace, the company orchestrating the next data cycle, has unveiled groundbreaking MLPerf® Storage v1.0 benchmark results that highlight the unmatched capabilities of its revolutionary Tier 0 technology, a new tier of ultra-fast shared storage that uses the local NVMe storage in GPU servers. Designed to eliminate storage bottlenecks and maximize GPU performance, Tier 0 transforms GPU computing infrastructure by improving resource utilization and reducing costs for AI, HPC and other data-intensive workloads.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20241121528020/en/

Hammerspace MLPerf Storage v1.0 Benchmark Results on 3D-UNet. Note: Tier 0 tests were run in the open category and were not reviewed by MLCommons. The results will be submitted for review in the next MLCommons review cycle. (Graphic: Business Wire)

MLCommons® released the MLPerf Storage v1.0 benchmark in September 2024. Hammerspace used the benchmark to validate the performance benefits of its newly announced Tier 0 architecture, which is built from the NVMe storage inside GPU servers. The tests ran the bandwidth-intensive 3D-UNet workload on Supermicro servers with ScaleFlux NVMe drives, and the Hammerspace Tier 0 results were compared with benchmarks previously submitted by other vendors. Additional details can be found in the Hammerspace MLPerf Storage v1.0 Benchmark Results technical brief.

To isolate the effect of Tier 0, Hammerspace also ran the benchmark on identical hardware configured as external shared storage, using the same Hammerspace software in both cases. Results show that Tier 0 enabled GPU servers to achieve 32% greater GPU utilization and 28% higher aggregate throughput than external storage accessed over 400GbE networking. By using the NVMe storage already deployed inside GPU servers, Tier 0 makes that local capacity available as shared storage and delivers the performance benefits of a major network upgrade without the cost or disruption of replacing network interfaces or adding infrastructure.

“Our MLPerf Storage v1.0 benchmark results are a testament to Hammerspace Tier 0’s ability to unlock the full potential of GPU infrastructure,” said David Flynn, Founder and CEO of Hammerspace. “By eliminating network constraints, scaling performance linearly and delivering unparalleled financial benefits, Tier 0 sets a new standard for AI and HPC workloads.”

Virtually Zero CPU Overhead

Hammerspace leverages software already built into the Linux kernel, both for protocol services and for communication with the Anvil metadata servers. This consumes only a tiny fraction of the CPU, leaving the GPU server’s resources free for the compute tasks they were designed for.

Eliminating Network Bandwidth Constraints

The benchmark demonstrated that network speed is critical to maintaining GPU efficiency. Traditional setups using 2x100GbE interfaces struggled under load, while Tier 0’s local storage eliminates the network dependency entirely.
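To make the bandwidth constraint concrete, the arithmetic below compares the best-case data rate of a 2x100GbE network path with the aggregate bandwidth of a GPU server’s local NVMe drives. This is an illustrative sketch only: the NVMe drive count, per-drive bandwidth and network efficiency factor are assumed values, not figures from the Hammerspace benchmark.

```python
# Illustrative arithmetic only: converts NIC line rate into a best-case data ceiling
# and compares it with aggregate local NVMe bandwidth. All NVMe figures below are
# hypothetical, not Hammerspace benchmark numbers.

def nic_ceiling_gbytes_per_s(num_ports: int, gbits_per_port: int, efficiency: float = 0.9) -> float:
    """Best-case usable storage bandwidth through the network, in GB/s."""
    return num_ports * gbits_per_port / 8 * efficiency

def local_nvme_gbytes_per_s(num_drives: int, gbytes_per_drive: float) -> float:
    """Aggregate sequential read bandwidth of the GPU server's local NVMe drives."""
    return num_drives * gbytes_per_drive

if __name__ == "__main__":
    # External storage reached over 2x100GbE is capped by the network itself.
    net = nic_ceiling_gbytes_per_s(num_ports=2, gbits_per_port=100)
    # Eight local NVMe drives at ~7 GB/s each (hypothetical) need no network hop.
    local = local_nvme_gbytes_per_s(num_drives=8, gbytes_per_drive=7.0)
    print(f"2x100GbE ceiling : ~{net:.1f} GB/s")
    print(f"Local NVMe (8x)  : ~{local:.1f} GB/s")
```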

Linearly Scalable Performance

Tier 0 achieves linear performance scaling by processing data directly on GPU-local storage, bypassing traditional bottlenecks. Using Hammerspace’s data orchestration, Tier 0 delivers data to local NVMe, protects it and seamlessly offloads checkpointing and computation results.

Extrapolated results from the benchmark confirm that scaling GPU servers with Tier 0 storage multiplies both throughput and GPU utilization linearly, ensuring consistent, predictable performance gains as clusters expand.
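The extrapolation itself is simple proportional arithmetic: because each GPU server brings its own NVMe bandwidth, cluster throughput grows with server count. The sketch below shows that calculation under an assumed per-server throughput figure; the number is a placeholder, not a published Hammerspace result.

```python
# A minimal sketch of the linear-scaling extrapolation described above.
# The per-server throughput figure is a placeholder, not a published result.

PER_SERVER_GIB_S = 20.0  # hypothetical aggregate throughput of one GPU server on Tier 0

def extrapolate(num_servers: int, per_server: float = PER_SERVER_GIB_S) -> float:
    """Linear scaling: cluster throughput grows in direct proportion to server count."""
    return num_servers * per_server

for n in (1, 4, 16, 64):
    print(f"{n:3d} GPU servers -> ~{extrapolate(n):8.1f} GiB/s aggregate")
```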

CapEx and OpEx Benefits

Tier 0’s ability to integrate GPU-local NVMe into a global shared file system delivers measurable financial and operational benefits:

  • Reduced External Storage Costs: By offsetting the need for high-performance external storage, organizations save on hardware, networking, power, and cooling expenses.
  • Faster Deployment: Hammerspace enables instant utilization of existing NVMe storage, avoiding time-consuming installations and configurations.
  • Enhanced GPU Efficiency: With checkpointing durations reduced from minutes to seconds, Tier 0 unlocks significant compute capacity, accelerating job completion without additional hardware investments (see the sketch after this list).
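As a rough illustration of that last point, the sketch below estimates the GPU-hours reclaimed when checkpoint stalls shrink from minutes to seconds. Cluster size, checkpoint frequency and durations are all assumed values chosen only to show the shape of the calculation, not results from the benchmark.

```python
# Back-of-the-envelope sketch of the GPU time reclaimed when checkpoint writes
# drop from minutes to seconds. All durations and frequencies are hypothetical.

def gpu_hours_lost(ckpt_seconds: float, ckpts_per_day: int, num_gpus: int, days: int = 30) -> float:
    """GPU-hours spent stalled on checkpoint I/O over a training run."""
    return ckpt_seconds * ckpts_per_day * days * num_gpus / 3600

external = gpu_hours_lost(ckpt_seconds=180, ckpts_per_day=24, num_gpus=256)  # ~3-minute checkpoints
tier0    = gpu_hours_lost(ckpt_seconds=10,  ckpts_per_day=24, num_gpus=256)  # ~10-second checkpoints

print(f"External storage : {external:,.0f} GPU-hours stalled per month")
print(f"Tier 0           : {tier0:,.0f} GPU-hours stalled per month")
print(f"Reclaimed        : {external - tier0:,.0f} GPU-hours")
```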


About Hammerspace

Hammerspace is radically changing how unstructured data is used and preserved. Our Global Data Platform unifies unstructured data across edge, data centers, and clouds. It provides extreme parallel performance for AI, GPUs, and high-speed data analytics, and orchestrates data to any location and any storage system. This eliminates data silos and makes data an instantly accessible resource for users, applications, and compute clusters, no matter where they are located.

Hammerspace and the Hammerspace logo are trademarks of Hammerspace, Inc. All other trademarks used herein are the property of their respective owners.

©2024 Hammerspace, Inc. All rights reserved.

