AMD announced the new AMD Instinct™ MI200 series accelerators, the first exascale-class GPU accelerators. The series is built for high-performance computing (HPC), that is, the ability to process data and perform complex calculations at high speeds, with the supercomputer being one of the best-known types of HPC solutions, as well as for artificial intelligence (AI) workloads. The flagship of the series is the AMD Instinct™ MI250X.

Built on AMD CDNA™ 2 architecture, AMD Instinct MI200 series accelerators deliver application performance for a broad set of HPC workloads. Forrest Norrod, senior vice president and general manager, Data Center and Embedded Solutions Business Group at AMD, commented:

“AMD Instinct MI200 accelerators deliver leadership HPC and AI performance, helping scientists make generational leaps in research that can dramatically shorten the time between initial hypothesis and discovery. With key innovations in architecture, packaging and system design, the AMD Instinct MI200 series accelerators are the most advanced data center GPUs ever, providing exceptional performance for supercomputers and data centers to solve the world’s most complex problems.”

Exascale With AMD

AMD designed the Frontier supercomputer in collaboration with the U.S. Department of Energy, Oak Ridge National Laboratory, and HPE. The supercomputer is expected to deliver more than 1.5 exaflops of peak computing power. Frontier was created to push the boundaries of scientific discovery by enhancing AI, analytics, and simulation performance at scale, helping scientists pack in more calculations, identify new patterns in data, and develop innovative data analysis methods that accelerate the pace of scientific discovery. Thomas Zacharia, director at Oak Ridge National Laboratory, explained:

“The Frontier supercomputer is the culmination of a strong collaboration between AMD, HPE and the U.S. Department of Energy, to provide an exascale-capable system that pushes the boundaries of scientific discovery by dramatically enhancing performance of artificial intelligence, analytics, and simulation at scale.”

Powering The Future of HPC

The AMD Instinct MI200 series accelerators, combined with 3rd Gen AMD EPYC CPUs and the ROCm™ 5.0 open software platform, are designed to propel new discoveries for the exascale era and tackle our most pressing challenges, from climate change to vaccine research.

Key Capabilities and Features of the AMD Instinct MI200 Series Accelerators

AMD CDNA™ 2 architecture introduces 2nd Gen Matrix Cores that accelerate FP64 and FP32 matrix operations, delivering up to 4X the peak theoretical FP64 performance of AMD's previous-gen GPUs (a sketch of the kind of FP64 matrix workload these cores target follows this feature list).

Leadership Packaging Technology is a multi-die GPU design with 2.5D Elevated Fanout Bridge (EFB) technology that delivers 1.8X more cores and 2.7X higher memory bandwidth than AMD's previous-gen GPUs, with an aggregate peak theoretical memory bandwidth of 3.2 terabytes per second.

3rd Gen AMD Infinity Fabric™ technology offers up to 8 Infinity Fabric links that connect the AMD Instinct MI200 with 3rd Gen EPYC CPUs and other GPUs in the node, enabling unified CPU/GPU memory coherency and maximizing system throughput. This gives CPU codes an easier on-ramp to tap the power of the accelerators; the second sketch after this list illustrates that programming-model benefit.
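To make the FP64 Matrix Core claim above concrete, here is a minimal, hypothetical sketch of the kind of double-precision matrix workload those cores are built to accelerate: a DGEMM routed through rocBLAS, the BLAS library shipped with ROCm. The matrix size is arbitrary, error checking is omitted, and the rocBLAS header path differs between ROCm releases, so treat this as illustrative rather than AMD sample code.

#include <hip/hip_runtime.h>
#include <rocblas/rocblas.h>   // older ROCm releases install this as <rocblas.h>
#include <vector>
#include <cstdio>

int main() {
    const rocblas_int n = 1024;                       // arbitrary square-matrix size
    const size_t bytes = size_t(n) * n * sizeof(double);

    // Host matrices: A is all ones, B is all twos, so every element of C should equal 2*n.
    std::vector<double> hA(size_t(n) * n, 1.0), hB(size_t(n) * n, 2.0), hC(size_t(n) * n, 0.0);

    double *dA = nullptr, *dB = nullptr, *dC = nullptr;
    hipMalloc(&dA, bytes);
    hipMalloc(&dB, bytes);
    hipMalloc(&dC, bytes);
    hipMemcpy(dA, hA.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(dB, hB.data(), bytes, hipMemcpyHostToDevice);

    rocblas_handle handle;
    rocblas_create_handle(&handle);

    const double alpha = 1.0, beta = 0.0;
    // C = alpha * A * B + beta * C, computed entirely in FP64 on the accelerator.
    rocblas_dgemm(handle, rocblas_operation_none, rocblas_operation_none,
                  n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    hipMemcpy(hC.data(), dC, bytes, hipMemcpyDeviceToHost);
    std::printf("C[0] = %.1f (expected %.1f)\n", hC[0], 2.0 * n);

    rocblas_destroy_handle(handle);
    hipFree(dA);
    hipFree(dB);
    hipFree(dC);
    return 0;
}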

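As a hedged illustration of the "easier on-ramp" that coherent CPU/GPU memory is meant to provide, the sketch below uses HIP managed (unified) memory so the same pointer is touched first by CPU code and then by a GPU kernel, with no explicit copies. Whether such accesses are hardware-coherent depends on the specific platform and ROCm release; the point here is only the programming model.

#include <hip/hip_runtime.h>
#include <cstdio>

// GPU kernel that scales every element of an array in place.
__global__ void scale(double* x, double factor, size_t n) {
    size_t i = blockIdx.x * size_t(blockDim.x) + threadIdx.x;
    if (i < n) x[i] *= factor;
}

int main() {
    const size_t n = 1 << 20;
    double* data = nullptr;

    // A single allocation visible to both host and device code.
    hipMallocManaged(&data, n * sizeof(double));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0;     // written by the CPU

    const unsigned threads = 256;
    const unsigned blocks = unsigned((n + threads - 1) / threads);
    hipLaunchKernelGGL(scale, dim3(blocks), dim3(threads), 0, 0, data, 2.0, n);  // updated by the GPU
    hipDeviceSynchronize();

    std::printf("data[0] = %.1f\n", data[0]);         // read back by the CPU, no explicit copy
    hipFree(data);
    return 0;
}
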
Software for Enabling Exascale Science

AMD ROCm is an open software platform allowing researchers to tap the power of AMD Instinct accelerators to drive scientific discoveries. The ROCm platform is built on the foundation of open portability, supporting environments across multiple accelerator vendors and architectures. With ROCm 5.0, AMD extends its open platform powering top HPC and AI applications with AMD Instinct MI200 series accelerators, increasing accessibility of ROCm for developers and delivering leadership performance across key workloads.
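As a small, hypothetical illustration of working against the ROCm stack, the sketch below uses the HIP runtime (a component of ROCm) to enumerate the accelerators visible to the platform and print their architecture names and memory sizes; the exact fields reported by hipDeviceProp_t can vary between ROCm releases.

#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipGetDeviceCount(&count);
    std::printf("ROCm-visible GPUs: %d\n", count);

    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t props;
        hipGetDeviceProperties(&props, i);
        // name: marketing name; gcnArchName: compiler target (e.g. gfx90a for MI200).
        std::printf("GPU %d: %s (%s), %.1f GiB memory\n",
                    i, props.name, props.gcnArchName,
                    props.totalGlobalMem / double(1ull << 30));
    }
    return 0;
}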

Through the AMD Infinity Hub, researchers, data scientists, and end-users can find and install containerized HPC apps and ML frameworks that are optimized and supported on AMD Instinct accelerators and ROCm. The hub currently offers a range of containers supporting Radeon Instinct MI50, AMD Instinct™ MI100, and AMD Instinct MI200 accelerators. These include applications such as Chroma, CP2K, LAMMPS, NAMD, and OpenMM, along with ML frameworks such as TensorFlow and PyTorch. New containers are continually being added to the hub.
