The world’s fastest and most powerful supercomputers are capable of doing many things, but increasingly the world of high performance computing (HPC) is leaning into artificial intelligence (AI).
At the International Supercomputing Conference (ISC) 2022, which ran from May 29 to June 2 in Hamburg, Germany, vendors announced new hardware and software systems for the world’s fastest supercomputers.
Among the big announcements, AMD revealed that its silicon powers the most powerful supercomputer ever built: the Frontier system, built by Hewlett Packard Enterprise for Oak Ridge National Laboratory in Tennessee. Not to be outdone, Intel announced silicon efforts that will enable future HPC systems, including the Sapphire Rapids CPU and the upcoming Rialto Bridge GPU.
Nvidia used ISC 2022 as the venue to announce that its Grace Hopper superchip will be powering the Venado supercomputer at Los Alamos National Laboratory. Nvidia also detailed multiple case studies of how its HPC innovations are being used to help enable AI for nuclear fusion and brain health research. HPC is not just about the world’s fastest supercomputers either. Linux vendor Red Hat announced that it is working with the U.S. Department of Energy to help bridge the gap between cloud environments and HPC.
The intersection of HPC and artificial intelligence/machine learning (AI/ML) is an area the ISC conference is likely to continue highlighting for years to come.
“Obviously AI/ML will continue to play an expanded role in HPC, but not all AI/ML is HPC or even HPC relevant,” John Shalf, program chair for ISC, told VentureBeat. “We really want to drill down on the AI/ML applications and implementations that directly impact science and engineering applications in both industry and academia.”
Intel seeing increasing role for HPC and AI workloads
For Intel, the intersection of HPC and AI is relatively clear.
Anil Nanduri, vice president for strategy and market initiatives at Intel’s Super Compute Group, explained to VentureBeat that HPC workloads are uniquely demanding, requiring powerful computing clusters and typically serving scientific computing. He added that most of the Top500 supercomputers are prime examples of high-performance systems, used by the scientific community for drug discovery, materials science, climate change models, manufacturing simulations, complex fluid dynamics models and more.
“Just like these traditional HPC workloads, AI/ML workloads are becoming increasingly complex with greater computing requirements,” Nanduri said. “There are large-scale AI models that are running on data center infrastructures which need similar computing performance as some of the leading HPC clusters.”
Nanduri sees continued demand and potential for HPC-powered AI as it can help improve performance and increase productivity.
“As AI workloads scale with gigantic datasets that require HPC-level analysis, we’ll see more AI in HPC, and more HPC computing requirements in AI,” Nanduri added.
How AI makes HPC more powerful
One of the big announcements at last week’s ISC was the unveiling of the Frontier system, which has been crowned as the world’s fastest supercomputer.
According to Yan Fisher, global evangelist for emerging technologies at Red Hat, applying AI/ML will take the computational power of supercomputers to a whole new level. As an example, the primary benchmark metric used for the Top500 supercomputer list is FLOPS (floating point operations per second). Fisher explained that FLOPS expresses a supercomputer’s capacity to perform floating point calculations at very high precision. These complex calculations take time and a great deal of processing power to complete.
“In contrast, the use of AI helps to achieve results much faster by performing calculations using lower precision and then evaluating the outcome to narrow down the answer with a high degree of accuracy,” Fisher told VentureBeat. “The Frontier system, using the HPL-AI benchmark, has demonstrated capabilities to perform over six times more AI-focused calculations per second than traditional floating point calculations, significantly expanding computational capabilities of that system.”
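The low-precision-then-refine approach Fisher describes can be sketched in a few lines of NumPy. This is an illustrative example of mixed-precision iterative refinement, not the actual HPL-AI benchmark (which runs a distributed LU factorization with GMRES-based refinement); the function name and matrix setup here are invented for the demonstration.

```python
import numpy as np

def solve_mixed_precision(A, b, iters=3):
    """Solve Ax = b by factoring in float32, then refining in float64.

    Sketch of the idea behind mixed-precision benchmarks like HPL-AI:
    do the expensive solve in fast, low-precision arithmetic, then use
    cheap high-precision residual corrections to recover accuracy.
    """
    A32 = A.astype(np.float32)
    # Initial low-precision solve (the "fast" part).
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full double precision
        # Correct the solution using another cheap low-precision solve.
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
    return x

rng = np.random.default_rng(0)
# A well-conditioned (diagonally dominant) test matrix, so refinement converges.
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
b = rng.standard_normal(200)
x = solve_mixed_precision(A, b)
print(np.allclose(A @ x, b))
```

A few refinement steps typically recover near double-precision accuracy on well-conditioned problems, which is why the AI-oriented benchmark can report far higher throughput than the traditional full-precision run.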
From HPC supercomputers to enterprise-level improvements in AI
HPC powers big systems, but what is the impact of AI innovations for supercomputers on enterprise users? Fisher noted that enterprises are adopting AI/ML as they are undergoing digital transformation.
What’s more interesting in his view is that once enterprises have figured out how to deploy and benefit from AI/ML, the demand for AI/ML infrastructure begins to rise. That demand drives the next phase of adoption — the ability to scale.
“This is where HPC has historically been ahead of the pack, splitting large problems into smaller chunks and running them in parallel or simply in a more optimal way,” Fisher said.
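The divide-and-conquer pattern Fisher refers to can be illustrated with a minimal sketch in standard-library Python. Real HPC codes use MPI and job schedulers across many nodes; the function names and chunking scheme below are made up purely to show the principle of splitting a large problem into independent chunks and combining partial results.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker handles one chunk independently -- no shared state.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_chunks=4):
    # Split the large problem into roughly equal chunks...
    size = -(-len(data) // n_chunks)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run the chunks in parallel, then combine the partial results.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_sum_of_squares(data))
```

The key property, and the reason the pattern scales, is that the chunks share no state: adding workers adds throughput without coordination overhead beyond the final combine step.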
On the other hand, Fisher commented that containers are less common in the HPC space, and where they are used, they are not the traditional application containers seen in enterprise and cloud deployments. That is one reason Red Hat is collaborating with the Department of Energy’s national laboratories, whose IT infrastructure teams are looking to better support their scientists with modern infrastructure tools.
At Intel, Nanduri said he’s seeing growing demand for compute acceleration across general purpose computing, HPC and AI workloads. Nanduri noted that Intel is planning to deliver a diverse portfolio of heterogeneous architecture paired with software and systems.
“These architectures, software and systems will allow us to improve performance by orders of magnitude, while reducing power demands across HPC and general-purpose AI/ML workloads,” Nanduri said. “The beauty of the Cambrian explosion in AI is that all the innovations driven by the need for scalable compute will enable enterprises to leap forward without needing to invest in long research cycles.”