Nvidia details plans to transform data centers into AI factories

The road to more powerful AI-enabled data centers and supercomputers is going to be paved with more powerful silicon. And if Nvidia has its way, much of the silicon innovation will be technology it has developed.

At the Computex computer hardware show in Taipei today, Nvidia announced a series of hardware milestones and new technologies in support of that ambition. A key theme is higher performance to bring data intelligence and artificial intelligence to more use cases.

“AI is transforming every industry by infusing intelligence into every customer engagement,” Paresh Kharya, senior director of product management at Nvidia, said during a media briefing. “Data centers are transforming into AI factories.”

Grace superchip is building block of AI factory

One of the key technologies that will help enable Nvidia’s vision is the company’s Grace superchips. At Computex, Nvidia announced that multiple hardware vendors, including ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro and Wiwynn, will build Grace-based systems that will begin to ship in the first half of 2023. Nvidia first announced the Grace central processing unit (CPU) in 2021 as an Arm-based architecture for AI and high-performance computing workloads.

Kharya said the Grace superchip will be available in a number of configurations. One option is a two-CPU architecture connected by Nvidia’s NVLink-C2C interconnect, which enables up to 144 Arm v9 compute cores. The second is the Grace Hopper Superchip, which pairs the Grace CPU with an Nvidia Hopper GPU.

“Grace Hopper is built to accelerate the largest AI, HPC, cloud and hyperscale workloads,” Kharya said.
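
The appeal of pairing CPU and GPU this tightly is a single, coherent view of memory. As a rough illustration, here is a minimal CUDA sketch using managed (unified) memory, where one pointer is touched by both the CPU and the GPU. The kernel and sizes are illustrative, and the same code runs on any CUDA-capable system; the difference on a Grace Hopper system is that the NVLink-C2C link keeps such shared access cache-coherent in hardware rather than relying on page migration.

```cuda
// unified_memory.cu - build with: nvcc unified_memory.cu -o unified_memory
#include <cstdio>
#include <cuda_runtime.h>

// Doubles each element of the array in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;   // element count is arbitrary, for illustration
    float *data = nullptr;

    // One allocation visible to both CPU and GPU. On a Grace Hopper
    // system the NVLink-C2C interconnect keeps this access coherent in
    // hardware; elsewhere the CUDA runtime migrates pages on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n);    // GPU reads and writes
    cudaDeviceSynchronize();
    printf("data[0] = %.1f\n", data[0]);         // CPU reads the result

    cudaFree(data);
    return 0;
}
```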


New 2U reference architecture designs

As part of its Computex announcements, Nvidia also detailed a set of 2U (two rack unit) server reference designs intended to ease adoption in data centers.

One of the reference designs is CGX, which is intended to accelerate cloud graphics and gaming use cases. CGX pairs the Grace superchip with Nvidia A16 GPUs and BlueField-3 data processing units (DPUs). Another reference design is the new OVX system, built for AI digital twin and Nvidia Omniverse workloads. OVX also uses the Grace superchip and BlueField-3, while giving vendors the option of a range of Nvidia GPUs. Finally, the HGX Grace and HGX Grace Hopper 2U reference designs provide systems optimized for AI training and inference.

Nvidia also announced new liquid-cooled GPUs, beginning with the A100. Kharya described the liquid-cooled A100 as the first data center PCIe GPU to use direct-to-chip liquid-cooling technology. The new PCIe GPUs will ship starting in the third quarter of this year.

“Using this technology results in up to 30% lower power consumption,” he said.
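
Claims like this can be checked against a GPU's own telemetry. As a minimal sketch, the following reads a board's instantaneous power draw through Nvidia's NVML library, the same management interface that backs the nvidia-smi tool; the device index 0 is an assumption and would change on multi-GPU systems.

```cuda
// power_probe.cu - build with: nvcc power_probe.cu -lnvidia-ml -o power_probe
#include <cstdio>
#include <nvml.h>

int main() {
    // Initialize NVML (the management library behind nvidia-smi).
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }

    nvmlDevice_t dev;
    // Device index 0 is an assumption; adjust for multi-GPU systems.
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        unsigned int milliwatts = 0;
        // Instantaneous board power draw, reported in milliwatts.
        if (nvmlDeviceGetPowerUsage(dev, &milliwatts) == NVML_SUCCESS) {
            printf("GPU 0 power draw: %.1f W\n", milliwatts / 1000.0);
        }
    }

    nvmlShutdown();
    return 0;
}
```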

More partners for Nvidia AI Enterprise 

Nvidia is also using its time at Computex to bring in more industry go-to-market partners in APAC for its Nvidia AI Enterprise software suite, which helps organizations build and support end-to-end data science workflows. The software first became generally available in August 2021. Among the new APAC partners are ADG, BayNex, Leadtek and ZeroOne.

“Solving challenges with AI requires a full-stack solution. At the base of our platform are the infrastructure components that are needed to build the AI factories, including our CPU, GPU and DPU,” Kharya said. “On top of that is our software stack that operates these AI factories and runs them optimally.”


