Santa Clara, California-based Advanced Micro Devices (AMD) says its “Milan-X” EPYC processors with 3D V-Cache, launching in early 2022, will deliver a “50% average uplift” for technical computing workloads.
It also said its Instinct MI200 GPUs, also launching in early 2022, will accelerate high-performance computing (HPC) and AI workloads. AMD made the announcements today at its Accelerated Data Center Premier virtual event.
HPC is one area where AMD has bragging rights, given that its designs were chosen for Oak Ridge National Laboratory’s Frontier supercomputer, one of the first exascale systems capable of exceeding a quintillion (10^18) calculations per second. Frontier pairs Cray’s new Shasta architecture and Slingshot interconnect with AMD EPYC and Instinct processors, assembled with four GPUs to one CPU in each node, according to the project website. Currently under construction, Frontier is scheduled to be available to scientists early next year.
“We’re bringing the CPUs, GPUs, and software together into a unified system architecture to power exascale computing,” Ram Peddibhotla, AMD corporate vice president, product management, said in a preview briefing for journalists.
While few businesses today aspire to exascale performance, those with technical computing workloads like electronics design, structural analysis, computational fluid dynamics, and the finite element analysis techniques used in engineering simulations will benefit from improvements to EPYC, according to AMD. For example, EPYC shows a 66% performance improvement for RTL verification, a critical process in electronic design automation.
“Verification proves that each structure in the design does what it’s supposed to do,” Peddibhotla explained. “It helps catch defects early in the process before a chip is baked into silicon.” Designers taking advantage of this improvement will get the choice of finishing verification faster and getting to market sooner, or packing more tests into the same amount of time to improve quality, he said.
AMD says EPYC benefits from continued improvements in its 3D chiplet manufacturing process, which boosts the amount of L3 cache per core complex die (CCD) from 32 to 96 megabytes. In an 8-CCD package, counting the other levels of cache as well, the total is “804 megabytes of cache per socket at the top of the stack — an incredible amount of cache,” Peddibhotla said. That means the processor can manage more information internally, without relying on other server memory or storage.
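The 804-megabyte figure can be reproduced as a back-of-the-envelope calculation. The breakdown below is illustrative and assumes a top-of-stack 64-core part with Zen 3’s published per-core cache sizes (512 KB of L2 and 64 KB of L1 per core), which are not stated in the article itself:

```python
# Back-of-the-envelope cache total for a top-of-stack Milan-X socket.
# Assumptions (not from the article): 8 CCDs, 64 cores, Zen 3 per-core
# L2/L1 sizes of 512 KB and 64 KB (32 KB instruction + 32 KB data).
CCDS = 8
CORES = 64
L3_PER_CCD_MB = 96        # 32 MB on plain Milan, tripled by 3D V-Cache
L2_PER_CORE_MB = 0.5      # 512 KB private L2 per core
L1_PER_CORE_MB = 0.0625   # 64 KB of L1 per core

l3_total = CCDS * L3_PER_CCD_MB      # 768 MB
l2_total = CORES * L2_PER_CORE_MB    # 32 MB
l1_total = CORES * L1_PER_CORE_MB    # 4 MB
total_mb = l3_total + l2_total + l1_total

print(f"{total_mb:.0f} MB of cache per socket")  # 804 MB
```

The tripled L3 accounts for nearly all of the total; the L1 and L2 contributions are what lift 768 MB to the quoted 804 MB.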
AMD says its latest GPU for datacenters will perform 4.9 times faster for HPC and 1.2 times faster for AI workloads than competing GPUs, like those from Nvidia. The Instinct MI200 is the latest in a line of GPUs specifically designed for datacenters, as opposed to gaming and desktop graphics. For this update, AMD particularly focused on improving performance for double-precision floating-point operations, which is why the performance improvements claimed are bigger for HPC than for AI processing. “We targeted this device to do really, really well on the toughest scientific problems requiring double-precision math, and that’s where we made the biggest step forward,” said Brad McCreadie, corporate VP of datacenter GPU accelerators at AMD.
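The HPC/AI split comes down to numeric precision: scientific solvers typically need 64-bit (double-precision) floats, while neural-network training tolerates 32-bit or lower. A minimal Python sketch of what double precision preserves and single precision discards (using the standard library’s `struct` module to round-trip a value through 32-bit storage):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (an IEEE 754 double) through 32-bit storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A tiny increment representable in 64-bit precision but not in 32-bit:
x = 1.0 + 2**-30
print(x - 1.0)               # ~9.3e-10, preserved in float64
print(to_float32(x) - 1.0)   # 0.0, the increment is lost in float32
```

A float32 significand carries 24 bits, so an increment 2^-30 below 1.0 rounds away entirely; float64’s 53-bit significand keeps it. Iterative scientific codes accumulate exactly this kind of rounding error, which is why FP64 throughput matters so much for HPC.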
The performance improvement varies between types of HPC workloads. For example, McCreadie said the Instinct MI200 performs 2.5 times faster on the types of vector operations used in vaccine simulations.
More targeted toward AI developers is the release of the ROCm 5.0 open source software for GPU computing, which integrates with popular frameworks such as PyTorch and TensorFlow, and the launch of the Infinity Hub collection of code and templated containers to help developers get started.
AMD also announced the third generation of its Infinity architecture for interconnecting CPUs and GPUs, which it says can deliver up to 800 GB/s of total aggregate bandwidth to reduce data movement and simplify memory management.
Despite fierce competition with Nvidia, Intel, and others, AMD reported 55% revenue growth in the most recent quarter.