ACE POWERWORKS MATRIX GZ2

Front view of a rack-mounted server with multiple drive bays, cooling fans, and indicator lights visible.

H: 3.4″ (87.5 mm) × W: 17.6″ (448 mm) × D: 33.5″ (850 mm)


Model: PM-D2GSG2

The Powerworks Matrix GZ2 is a purpose-built 2U GPU server designed to meet the demands of modern AI development, machine learning, and scientific computing. Powered by a single AMD EPYC 9555P processor and scalable up to eight NVIDIA L40S GPUs, it provides a flexible foundation for AI model training, inference, and high-throughput data processing in a dense, rack-friendly form factor.

Scalable AI Training

Designed to scale with your models—from prototyping to production. Supports multi-GPU parallelism and AI training pipelines with top-tier throughput and power efficiency.
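The multi-GPU data parallelism mentioned above follows a standard pattern: each GPU trains on its own shard of every batch, and the per-GPU gradients are averaged before the shared weights are updated. A framework-free sketch of that reduction loop (the toy model, gradient, and learning rate are illustrative, not from this datasheet):

```python
# Pure-Python stand-in for data-parallel training: shard the batch,
# compute per-worker gradients, then average them (an "all-reduce")
# before the weight update. No GPUs or ML framework required.

def shard_batch(batch, num_workers):
    """Split a batch into roughly equal shards, one per worker (GPU)."""
    k, r = divmod(len(batch), num_workers)
    shards, start = [], 0
    for i in range(num_workers):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def local_gradient(shard, weight):
    """Toy per-worker gradient: d/dw of mean squared error for y = w*x."""
    return sum(2 * x * (weight * x - y) for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average the per-worker gradients, as an NCCL all-reduce would."""
    return sum(grads) / len(grads)

batch = [(x, 3.0 * x) for x in range(1, 17)]    # targets follow y = 3x
weight = 0.0
for _ in range(200):                            # plain SGD loop
    shards = shard_batch(batch, 8)              # 8 workers, like 8x L40S
    grads = [local_gradient(s, weight) for s in shards]
    weight -= 0.01 * all_reduce_mean(grads)

print(round(weight, 3))                         # prints 3.0
```

Real pipelines delegate the averaging step to NCCL all-reduce (e.g., via PyTorch DistributedDataParallel), but the arithmetic is the same.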

Deep Learning & Neural Network Acceleration

Ideal for NLP, computer vision, reinforcement learning, and transformer models. The L40S GPUs offer powerful tensor performance and memory capacity for cutting-edge model development.

Cloud AI & AIaaS Providers

A perfect fit for service providers offering AI model hosting, fine-tuning, or inference-on-demand across GPU-accelerated workloads.

Train Faster. Scale Smarter. L40S Inside.

2U of Scalable AI Power for Training & Inference

The Enterprise AI GPU Server Built for Growth

Featuring a single AMD EPYC 9555P processor and support for up to eight NVIDIA L40S GPUs, the Matrix GZ2 delivers immense compute power for training, inference, and cloud-based AI services—ideal for research institutions, AI startups, and cloud infrastructure providers.

Key Features

  • AMD EPYC 9555P Processor

    64 high-efficiency cores designed for high-thread-count AI pipelines, virtualization, and low-latency data processing.

  • 192GB DDR5-5600 ECC Memory

    High-speed, error-correcting memory to support large model datasets, AI training batches, and memory-bound applications.

  • Up to 8x NVIDIA L40S GPUs

    PCIe Gen5 connectivity enables massive compute parallelism. Perfect for training models in-house or offering AI-as-a-service.

  • Alternative Configuration: 4x NVIDIA B200 SXM5 GPUs

    In place of the L40S configuration, each B200 GPU provides 96GB HBM3e with industry-leading memory bandwidth—ideal for large models and multimodal AI systems.

  • 2x 3000W Titanium PSUs

    Redundant, high-efficiency power supplies ensure maximum uptime and stability, even under full GPU load.
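To make the eight-GPU figure concrete, here is a back-of-envelope sizing sketch. It rests on assumptions not stated in this datasheet: 48GB of GDDR6 per L40S, and the common rule of thumb of roughly 16 bytes per parameter for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights and two fp32 optimizer moments), ignoring activations and overhead:

```python
# Rough sizing for the "up to 8x L40S" configuration.
# Assumptions (not from this datasheet): 48 GB per L40S, and
# ~16 bytes/parameter for mixed-precision Adam training state.
# Activation memory and framework overhead are ignored.

GB = 10**9

def max_trainable_params(num_gpus, mem_per_gpu_gb=48, bytes_per_param=16):
    """Rough upper bound on parameters a state-sharded optimizer could hold."""
    return num_gpus * mem_per_gpu_gb * GB // bytes_per_param

for gpus in (1, 4, 8):
    print(gpus, "GPU(s):", max_trainable_params(gpus) / 1e9, "B params")
```

Under those assumptions, a fully populated system's 384GB of aggregate GPU memory bounds state-sharded training at roughly 24B parameters; activation memory lowers that in practice.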
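The high-thread-count pipelines the CPU bullet refers to typically look like a fan-out/gather over a worker pool. A minimal Python sketch (the function names and toy transform are illustrative; a truly CPU-bound Python pipeline would use a process pool rather than threads to sidestep the GIL):

```python
# Fan records out across a worker pool and gather results in order --
# the shape of a typical preprocessing stage feeding GPU training.
from concurrent.futures import ThreadPoolExecutor

def preprocess(record):
    # stand-in for per-record work: decode, tokenize, featurize
    return sum(i * i for i in range(record))

def run_pipeline(records, workers=8):
    # pool.map preserves input order while workers run concurrently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(preprocess, records))
```

Something like run_pipeline(records, workers=64) would keep a 64-core part busy when the per-record work releases the GIL (e.g., image decoding or tokenization in native code).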


SPECIFICATIONS

Form Factor: Micro-Tower
– Height: 14.5″ (368.3 mm)
– Width: 7.5″ (190.5 mm)
– Depth: 15.5″ (394 mm)

Processor: Intel Core i5-14600K
– 14 Cores, 20 Threads
– 3.5 GHz base, up to 5.3 GHz boost
– 24 MB Cache

Graphics: NVIDIA RTX A2000 6GB
Chipset: Intel B760
 
Front I/O: 
– 1 x USB 2.0 Port
– 2 x USB 3.0 Ports
– 1 x Type-C Port
– 1 x Headphone Jack
– 1 x Microphone Jack
– 1 x Reset Button
– 1 x Power Button
 
Rear I/O:
– 2 x Antenna Mounting Points
– 1 x PS/2 Mouse/Keyboard Port
– 2 x DisplayPort
– 1 x USB 3.2 Gen1 Type-C Port
– 3 x USB 3.2 Gen1 Type-A Ports
– 2 x USB 2.0 Ports
– 1 x RJ-45 LAN Port
– HD Audio Jacks: Line in / Front Speaker / Microphone
 
Connectors:
– 1 x Chassis Intrusion and Speaker Header
– 1 x RGB LED Header
– 3 x Addressable LED Headers
– 1 x CPU Fan Connector (4-pin)
– 1 x CPU/Water Pump Fan Connector (4-pin) (Smart Fan Speed Control)
– 4 x Chassis/Water Pump Fan Connectors (4-pin) (Smart Fan Speed Control)
– 1 x 24 pin ATX Power Connector
– 1 x 8 pin 12V Power Connector (Hi-Density Power Connector)
– 1 x Front Panel Audio Connector
– 1 x Thunderbolt AIC Connector (5-pin) (Supports ASRock Thunderbolt 4 AIC Card)
– 2 x USB 2.0 Headers (Support 4 USB 2.0 ports)
– 1 x USB 3.2 Gen1 Header (Supports 2 USB 3.2 Gen1 ports)
– 1 x Front Panel Type C USB 3.2 Gen1 Header
Storage: 2TB NVMe M.2
Memory: 32GB DDR5
Power Supply: 500W 80+ Gold