May 7, 2024 1 min read

AceleMax™ POD with NVIDIA HGX H200


Our AceleMax™ POD, based on the NVIDIA HGX™ H200, is optimized for enterprise HPC and AI workloads. The rack-scale design uses the H200 Tensor Core GPU, which increases memory capacity to 141GB of HBM3e per GPU, nearly twice that of the NVIDIA H100.

AceleMax™ POD

4x 8-GPU NVIDIA HGX H200 systems per rack

  • Up to 4.5TB of HBM3e GPU Memory per rack
  • 5th Gen Intel® Xeon® Scalable processors, supporting 350W TDP
  • Direct GPU-to-GPU interconnect via NVLink delivers 900GB/s bandwidth
  • A dedicated one-GPU-to-one-NIC topology
  • Modular design with reduced cable usage
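The memory figures above follow directly from the per-GPU capacity: a quick sanity check, assuming the standard 8-GPU HGX H200 node and the commonly quoted 80GB capacity of the H100 SXM (the H100 capacity is an assumption, not stated in this page):

```python
# Sanity-check the quoted memory capacities for the AceleMax POD.
H200_GB_PER_GPU = 141   # HBM3e per H200 GPU, as stated above
GPUS_PER_NODE = 8       # one HGX H200 baseboard
NODES_PER_RACK = 4      # "4x 8-GPU NVIDIA HGX H200 per rack"

node_gb = H200_GB_PER_GPU * GPUS_PER_NODE   # 1128 GB per node
rack_gb = node_gb * NODES_PER_RACK          # 4512 GB, i.e. ~4.5 TB per rack

H100_GB_PER_GPU = 80                        # assumed H100 SXM capacity
ratio = H200_GB_PER_GPU / H100_GB_PER_GPU   # ~1.76x, "nearly twice"

print(node_gb, rack_gb, round(ratio, 2))    # prints: 1128 4512 1.76
```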

NVIDIA HGX H200 Node Specifications

Processor: Dual 5th Gen Intel® Xeon® Scalable or AMD EPYC™ 9004 Series CPUs
Memory: 2TB DDR5
GPU: NVIDIA HGX H200 (1,128GB total HBM3e GPU memory), 900GB/s NVLink GPU-to-GPU interconnect with NVSwitch
Networking: 8x NVIDIA ConnectX®-7 single-port 400Gbps NDR OSFP NICs; 2x NVIDIA ConnectX®-7 dual-port 200Gbps NDR200 QSFP112 NICs; 1:1 NIC-to-GPU networking to enable NVIDIA GPUDirect RDMA and GPUDirect Storage
Storage: Configurable, up to 10x NVMe U.3 SSDs with optional M.2 support
Onboard Networking: Dual 10GBase-T RJ45 LAN ports, 1x dedicated management LAN port
Power Supply: 6x 3000W Titanium-level redundant power supplies