ASUS has announced its participation in the 2024 OCP Global Summit, scheduled for October 15-17 at the San Jose Convention Center, where it will showcase its cutting-edge AI solutions at booth #A31. With nearly three decades of experience in the server industry, ASUS has long-standing collaborations with cloud-service providers dating back to 2008. This expertise has enabled ASUS to develop an advanced lineup of AI servers that will be the highlight of its presentation at the summit, reaffirming its commitment to pushing the boundaries of AI and data center technologies.
Advancing AI with the NVIDIA Blackwell Platform
At the forefront of AI innovation, ASUS will showcase its AI solutions built on the powerful NVIDIA® Blackwell platform. These solutions are designed to address the needs of generative AI and elevate data center performance to unprecedented levels.
One of the standout products, the ASUS ESC AI POD, is a revolutionary rack solution featuring the NVIDIA GB200 NVL72 system, which combines 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs within a rack-scale NVIDIA NVLink domain. This configuration enables the system to function as a unified, massive GPU. The ESC AI POD offers both liquid-to-air and liquid-to-liquid cooling options, ensuring optimal efficiency for individual systems and large-scale data centers alike. Specifically engineered to accelerate large language model (LLM) inference, the ESC AI POD delivers real-time performance for resource-intensive applications, such as trillion-parameter models. ASUS enhances this offering with exclusive software solutions, system-verification methods, and remote deployment strategies, further driving AI development and scaling for data centers of any size.
Additionally, ASUS will unveil the ESC NM2-E1, a 2U server built on the NVIDIA GB200 NVL2 platform. This server is optimized for generative AI and high-performance computing (HPC), providing high-bandwidth communication to support AI developers and researchers as they push the boundaries of innovation. It is finely tuned for the full NVIDIA software stack, making it an ideal choice for AI-focused enterprises.
Optimizing AI Performance with NVIDIA MGX Architecture and H200 Tensor Core GPUs
ASUS is also introducing the 8000A-E13P, a 4U server that supports up to eight NVIDIA H200 Tensor Core GPUs. Fully compliant with the NVIDIA MGX architecture, this server is engineered for quick deployment in large-scale enterprise AI infrastructures. With a 2-8-5 topology (CPU-GPU-DPU/NIC) and high-bandwidth NVIDIA ConnectX-7 NICs or BlueField-3 SuperNICs, it enhances system performance and boosts east-west traffic across data centers. This server solution is ideal for organizations looking to maximize AI processing capabilities while maintaining efficiency in complex environments.
Join the ASUS 2024 OCP Session on Modular Server Architecture
For those attending the 2024 OCP Global Summit, ASUS will host a 15-minute session on October 16 from 10:20 to 10:35. During this session, ASUS will discuss the future of server architecture, focusing on the Data Center Modular Hardware System (DC-MHS) developed in collaboration with OCP. The system leverages the NVIDIA MGX modular design to deliver flexibility, simplified maintenance, and scalable efficiency, transforming modern data center infrastructures.