The AceleMax™ POD
Powered by the NVIDIA HGX™ H200 and designed to supercharge AI and HPC workloads in a rack-scale solution.
NVIDIA HGX H200 Server
The NVIDIA HGX™ H200, based on the Hopper architecture, is designed for enterprise HPC and AI workloads. Our rack-scale solutions are engineered around the NVIDIA H200 Tensor Core GPU, which raises memory capacity to 141 gigabytes per GPU, nearly double that of the H100. This added memory, coupled with enhanced GPU-to-GPU interconnectivity through NVIDIA NVLink technology, optimizes parallel processing and boosts overall system performance.
| Category | Description |
|---|---|
| Processor | Dual 5th Gen Intel® Xeon® or AMD EPYC™ 9004 Series CPUs |
| Memory | 2TB DDR5 |
| GPU | NVIDIA HGX H200 (1,128 GB total HBM3e GPU memory), 900GB/s NVLink GPU-to-GPU interconnect with NVSwitch |
| Networking | 8x NVIDIA ConnectX®-7 single-port 400Gbps/NDR OSFP NICs; 2x NVIDIA ConnectX®-7 dual-port 200Gbps/NDR200 QSFP112 NICs; 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage |
| Storage | Configurable, up to 10x NVMe U.3 SSDs and optional M.2 support |
| Onboard Networking | Dual 10GBase-T RJ45 LAN, 1x management LAN |
| Power Supply | 6x 3000W Titanium redundant power supplies |
Engineering Expertise
Our team of thermal, electrical, mechanical, and networking engineers is skilled in designing solutions tailored to your specific requirements.
Solution Architects
AMAX's solution architects optimize IT configurations for performance, scalability, and industry-specific reliability.
Networking
AMAX designs custom networking topologies to enhance connectivity and performance in AI and HPC environments.
Thermal Management
AMAX implements innovative cooling technologies that boost performance and efficiency in dense computing setups.
Compute Optimization
AMAX ensures maximum performance through benchmarking and testing, aligning hardware and software for AI workloads.
From Design to Deployment
AMAX's approach to AI solutions begins with intelligent design, emphasizing the creation of high-performance computing and network infrastructures tailored to AI applications. We guide each project from concept to deployment, ensuring systems are optimized for both efficiency and future scalability.
AMAX is NVIDIA DGX Elite
As an NVIDIA DGX AI Compute Systems Elite partner, AMAX is among a select group of only 22 partners across North America and Latin America. Our partnership with NVIDIA underscores our commitment to delivering cutting-edge AI computing solutions.
AceleMax™ POD with NVIDIA HGX H200
Customized Scalable Compute Unit Built For Large Language Models.
- Up to 4.5TB of HBM3e GPU Memory per rack
- 5th Gen Intel® Xeon® Scalable processors, supporting 350W TDP
- Direct GPU-to-GPU interconnect via NVLink delivers 900GB/s bandwidth
- A dedicated one-GPU-to-one-NIC topology
- Modular design with reduced cable usage
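The headline memory figure follows directly from the per-GPU capacity quoted above. A minimal arithmetic sketch (the four-systems-per-rack count is an assumption used for illustration, not stated in the spec):

```python
# Sanity-check the HBM3e memory figures quoted in this page.
GPUS_PER_SYSTEM = 8      # an NVIDIA HGX H200 baseboard carries 8 GPUs
HBM_PER_GPU_GB = 141     # H200 HBM3e capacity per GPU
SYSTEMS_PER_RACK = 4     # assumed rack density (not from the spec)

per_system_gb = GPUS_PER_SYSTEM * HBM_PER_GPU_GB
per_rack_tb = per_system_gb * SYSTEMS_PER_RACK / 1000

print(per_system_gb)            # 1128 GB, matching the table's total GPU memory
print(round(per_rack_tb, 1))    # 4.5 TB, matching the per-rack figure
```

At an assumed four HGX H200 systems per rack, 8 × 141 GB per system scales to roughly 4.5 TB of HBM3e, consistent with the bullet above.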