DDN AI400X3 Sets New MLPerf Benchmark for AI Infrastructure Performance
DDN, a global provider of AI and data intelligence solutions, has announced that its next-generation AI400X3 storage appliance achieved standout results in the latest MLPerf Storage v2.0 benchmarks. The AI400X3, powered by DDN’s EXAScaler parallel file system, is designed to accelerate demanding AI workloads at scale, offering high performance density in a compact, energy-efficient design.
This advancement aims to provide large enterprises with faster insights, reduced operational costs, and the ability to scale their AI initiatives confidently, without compromising performance or sustainability.
Sven Oehme, CTO at DDN, emphasized the need for precision-engineered infrastructure to support AI at scale. “AI at scale demands more than brute force—it requires precision-engineered infrastructure that can deliver relentless performance, efficiency, and reliability,” he stated. “With the AI400X3, we’ve achieved exactly that. These MLPerf results prove that DDN can keep pace with—and even outpace—the world’s most advanced GPUs, all within a compact, power-efficient footprint. We’re not just enabling AI—we’re removing the bottlenecks that have held it back.”
The MLPerf Storage benchmark is an industry standard for evaluating how effectively a storage system supports intensive AI workloads. The DDN AI400X3 was tested in both single-node and multi-node configurations, reflecting real-world deployment scenarios from initial setups to large-scale, distributed AI training. Notably, the system achieved these results using a single, compact 2U appliance, showcasing its efficiency and power.
In the MLPerf Storage v2.0 (2025) submission, the AI400X3 demonstrated impressive capabilities.

In single-node benchmarking, the DDN AI400X3 achieved:
- The highest performance density for CosmoFlow and ResNet-50 training, effectively supporting the data needs of 52 and 208 simulated NVIDIA H100 GPUs, respectively, from a single 2U appliance.
- I/O performance of 30.6 GB/s for reads and 15.3 GB/s for writes, loading a Llama3-8B checkpoint in 3.4 seconds and saving one in 7.7 seconds.

In multi-node benchmarking, it achieved:
- Over 120 GB/s sustained read throughput for Unet3D training on H100 GPUs.
- Support for up to 640 simulated H100 GPUs on ResNet-50.
- Up to 135 simulated H100 GPUs on CosmoFlow.

These results represent a two-fold improvement over the previous year’s results.
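As a rough sanity check, the checkpoint figures above can be cross-referenced: multiplying the reported throughput by the reported transfer time yields the implied checkpoint size, and dividing the simulated GPU count by the 2U chassis height yields the performance density. The assumption that each transfer moves a single full checkpoint (and any interpretation of what that checkpoint contains) is ours, not DDN's, so treat this as an estimate only:

```python
# Back-of-the-envelope check on the MLPerf Storage v2.0 figures quoted above.
# Assumption (ours): each load/save moves one full Llama3-8B checkpoint.

READ_GBPS = 30.6   # reported sustained read throughput (GB/s)
WRITE_GBPS = 15.3  # reported sustained write throughput (GB/s)
LOAD_S = 3.4       # reported checkpoint load time (s)
SAVE_S = 7.7       # reported checkpoint save time (s)

# Implied checkpoint size = throughput * time
implied_read_gb = READ_GBPS * LOAD_S    # ~104 GB
implied_write_gb = WRITE_GBPS * SAVE_S  # ~118 GB

# Performance density: 208 simulated H100s (ResNet-50) in a 2U appliance
gpus_per_rack_unit = 208 / 2

print(f"implied checkpoint read:  {implied_read_gb:.1f} GB")
print(f"implied checkpoint write: {implied_write_gb:.1f} GB")
print(f"simulated H100s per rack unit: {gpus_per_rack_unit:.0f}")
```

Both implied sizes land in the same ~100 GB range, which is internally consistent with a single large checkpoint being read and written.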
These benchmark results underscore the DDN AI400X3’s capacity for consistent high performance across a wide range of AI workloads, even under demanding, multi-node training conditions. By ensuring that GPUs remain fully utilized through fast and reliable data access, the AI400X3 accelerates model training and facilitates frequent checkpointing without performance degradation. This leads to improved training efficiency, enhanced resilience, and reduced overall infrastructure costs.
With its compact 2U form factor and low power consumption, the AI400X3 is designed to address increasing data center challenges related to space, power, and cooling, making it suitable for organizations seeking to scale AI workloads sustainably.
DDN has a long-standing reputation as a leader in high-performance AI and High-Performance Computing (HPC) infrastructure. Since 2016, NVIDIA has exclusively relied on DDN to power its internal AI clusters, highlighting DDN’s role as a trusted partner in driving scalable AI innovation.
By submitting the AI400X3 to the rigorous MLPerf Storage benchmarks, DDN aims to give enterprises and AI innovators independently validated data, enabling them to build, train, and deploy AI solutions with greater speed and confidence.