7″ x 17.2″ x 29″ (178 x 437 x 737mm)
Supports up to 2x Intel® Xeon® 64-core CPUs (256 threads total) and up to 8TB of DDR5 ECC memory, delivering peak performance for AI/ML, simulation, and high-throughput workloads.
Houses up to 10 double-width NVIDIA GPUs with PCIe 5.0 and optional NVLink® for accelerated parallel processing—ideal for deep learning and inference at scale.
Equipped with 13 PCIe Gen 5.0 slots, hot-swappable fans and drives, and 4x 2700W redundant Titanium PSUs—ensuring uptime, scalability, and ease of service in mission-critical environments.
Harness the performance of 5th and 4th Gen Intel® Xeon® Scalable processors, supporting up to 64 cores and 128 threads per socket with up to 320MB cache. Capable of handling up to 385W TDP per CPU (liquid-cooled), this system is built for the most demanding compute tasks including AI training, simulations, and real-time analytics.
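For a concrete view of what a fully populated dual-socket configuration adds up to, the short Python sketch below walks through the arithmetic. The inputs are the spec figures quoted above; the script is illustrative only and does not query real hardware or represent vendor tooling.

```python
# Dual-socket totals, computed from the spec figures above (illustrative only).

SOCKETS = 2
CORES_PER_SOCKET = 64          # top-bin 64-core SKU
THREADS_PER_CORE = 2           # Intel Hyper-Threading
L3_CACHE_MB_PER_SOCKET = 320
TDP_W_PER_CPU = 385            # liquid-cooled configuration

total_cores = SOCKETS * CORES_PER_SOCKET                 # 128 physical cores
total_threads = total_cores * THREADS_PER_CORE           # 256 hardware threads
total_cache_mb = SOCKETS * L3_CACHE_MB_PER_SOCKET        # 640 MB cache
cpu_power_budget_w = SOCKETS * TDP_W_PER_CPU             # 770 W for CPUs alone

print(f"{total_cores} cores / {total_threads} threads, "
      f"{total_cache_mb} MB cache, ~{cpu_power_budget_w} W CPU TDP")
```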
Outfitted with 32 DDR5 DIMM slots, the server supports up to 8TB of ECC RDIMM memory. With speeds of up to 5600 MT/s (1DPC), it delivers the fast data access and high memory bandwidth that memory-intensive workloads like in-memory databases and large-scale model training depend on.
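The sketch below works out the headline capacity and a theoretical peak bandwidth figure. The per-socket channel count and DIMM size are assumptions typical of this CPU generation rather than numbers quoted above, so treat the results as a rough estimate.

```python
# Illustrative capacity/bandwidth arithmetic. Channel count and DIMM size are
# assumptions (typical for this CPU generation), not values from the spec.

DIMM_SLOTS = 32
DIMM_CAPACITY_GB = 256         # assumption: largest common RDIMM size
SOCKETS = 2
CHANNELS_PER_SOCKET = 8        # assumption: typical DDR5 channel count per socket
SPEED_MTS = 5600               # 1 DIMM per channel, per the spec
BYTES_PER_TRANSFER = 8         # 64-bit DDR5 channel

max_capacity_tb = DIMM_SLOTS * DIMM_CAPACITY_GB / 1024                   # 8.0 TB
peak_bw_gbs = SOCKETS * CHANNELS_PER_SOCKET * SPEED_MTS * BYTES_PER_TRANSFER / 1000

print(f"Max capacity: {max_capacity_tb:.1f} TB")
print(f"Theoretical peak bandwidth (1DPC): ~{peak_bw_gbs:.0f} GB/s")
# Note: reaching the full 8 TB means populating 2 DIMMs per channel, which
# typically runs below the 5600 MT/s 1DPC figure.
```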
Supports up to 10 double-width GPUs via 13 PCIe Gen 5.0 x16 slots, providing extreme compute acceleration for deep learning, 3D rendering, and scientific computing. With dual-root PCIe switching, optional NVIDIA NVLink® or Intel Xe Link, and optimized airflow, it delivers industry-leading GPU density and interconnect performance.
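One common way to sanity-check how that GPU topology presents to software is the small Python sketch below. It assumes a Linux host with NVIDIA drivers and a CUDA-enabled PyTorch install (neither is part of the server or its firmware) and simply enumerates devices and checks which pairs have a direct peer-to-peer path, for example over NVLink or across the dual-root PCIe switches.

```python
# Enumerate visible GPUs and check peer-to-peer (P2P) reachability.
# Assumes PyTorch with CUDA support is installed; illustrative, not vendor tooling.

import torch

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")
for i in range(n):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")

# Which device pairs can access each other's memory directly?
for i in range(n):
    peers = [j for j in range(n)
             if j != i and torch.cuda.can_device_access_peer(i, j)]
    print(f"  GPU {i} has P2P access to: {peers or 'none'}")
```

For a lower-level view of the link types behind each pair (PCIe hops versus NVLink), `nvidia-smi topo -m` prints the interconnect matrix directly from the driver.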
Includes 4x 2700W Titanium-level redundant (2+2) power supplies for ultra-reliable high-efficiency power delivery. Paired with 8 hot-swap heavy-duty fans and support for liquid cooling, the system is engineered to maintain thermal stability even under maximum load.
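As a rough sense of how the 2+2 PSU bank maps to system load, here is a back-of-the-envelope sketch. The CPU figure comes from the spec above; the per-GPU and platform wattages are placeholder assumptions for illustration only.

```python
# Back-of-the-envelope power budget against the 2+2 redundant PSU bank.
# GPU and platform figures are placeholder assumptions, not spec values.

PSU_WATTS = 2700
PSUS_ACTIVE = 2                # 2+2 redundancy: two PSUs carry the load,
                               # two are standby spares
usable_power_w = PSU_WATTS * PSUS_ACTIVE        # 5400 W with full redundancy

cpu_w = 2 * 385                # per the spec (liquid-cooled CPUs)
gpu_w = 10 * 350               # assumption: ~350 W per double-width GPU
platform_w = 600               # assumption: memory, drives, fans, NICs

estimated_load_w = cpu_w + gpu_w + platform_w
headroom_w = usable_power_w - estimated_load_w
print(f"Usable (redundant) power: {usable_power_w} W")
print(f"Estimated load: {estimated_load_w} W, headroom: {headroom_w} W")
```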