AMAX Rack Scale Solution: NVIDIA HGX H200 POD
AMAX Rack Scale Solutions are engineered around the NVIDIA H200 Tensor Core GPU, which raises memory capacity to 141 GB per GPU, nearly doubling the 80 GB of its predecessor, the H100. This larger memory, combined with faster GPU-to-GPU interconnect via NVIDIA NVLink, improves parallel scaling and overall system performance, making the platform ideal for the most demanding AI workloads, including large language models and complex scientific simulations.
In addition, the NVIDIA H200 unlocks new capabilities for generative AI and HPC with industry-leading performance and memory. As the first GPU to market with next-generation HBM3e, the H200's faster, larger memory accelerates deployment of generative AI and LLMs while delivering outstanding performance for HPC workloads. A single HGX H200 8-GPU system provides over 32 petaflops of FP8 deep learning compute and 1.1 TB of aggregate high-bandwidth memory, powering Large Language Models (LLMs), Retrieval Augmented Generation (RAG), and other generative AI use cases.
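As a quick sanity check, the aggregate figures above follow directly from the per-GPU specs. The sketch below is back-of-the-envelope arithmetic using the numbers quoted in this brief, not vendor-measured benchmarks:

```python
# Sanity check of the aggregate figures for an 8-GPU HGX H200 system,
# using the per-GPU numbers quoted in the text above.
GPUS = 8
HBM3E_PER_GPU_GB = 141          # HBM3e capacity per H200 GPU

aggregate_memory_gb = GPUS * HBM3E_PER_GPU_GB
print(f"Aggregate HBM3e: {aggregate_memory_gb} GB (~{aggregate_memory_gb / 1000:.1f} TB)")
# 8 x 141 GB = 1128 GB, i.e. the ~1.1 TB of aggregate memory cited above

system_fp8_pflops = 32          # quoted aggregate FP8 compute for the system
per_gpu_fp8_pflops = system_fp8_pflops / GPUS
print(f"FP8 compute per GPU: {per_gpu_fp8_pflops} PFLOPS")
```

This also makes it easy to scale the estimate to a multi-node POD: multiplying by the node count gives the pod-level memory and compute budget.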