Nvidia is updating its AI Enterprise software suite today to version 2.1, providing users with new commercially supported tools to help run artificial intelligence (AI) and machine learning (ML) workloads for enterprise use cases.
Nvidia AI Enterprise first became generally available in August 2021 as a collection of supported AI and ML tools that run well on Nvidia’s hardware. A core component of the new release is an updated set of supported versions of popular open-source tools, including PyTorch and TensorFlow. The new Nvidia TAO 22.05 low-code and no-code toolkit for computer vision and speech applications is also included, as is the 22.04 update for Nvidia’s Rapids open-source libraries for running data science pipelines on GPUs.
“Over the last couple of years, what we’ve seen is the growth of AI being used to solve a bunch of problems and it is really driving automation to improve operational efficiency,” said Justin Boitano, VP of enterprise and edge computing at Nvidia. “Ultimately, as more organizations get AI into a production state, a lot of companies will need commercial support on the software stack that has traditionally just been open source.”
Bringing enterprise support to open-source AI
A common approach with open-source software is to have what is known as an “upstream” community, where the leading edge of development occurs in an open approach. Vendors like Nvidia can and do contribute code upstream, and then provide commercially supported offerings like Nvidia AI Enterprise, in what is referred to as the “downstream.”
“When we talk about popular AI projects like TensorFlow, our goal is absolutely to commit as much as possible back into the upstream,” Boitano said.
With Nvidia AI Enterprise, the open-source components also benefit from integration testing across different frameworks and on multiple types of hardware configurations to help ensure that the software works as expected.
“It’s very similar to the early Linux days, where there are those companies that are totally happy running with the open-source frameworks and then there’s another part of the community that really feels more comfortable having that direct engagement,” Boitano said.
Enterprise support and cloud-native deployment options for AI
Another key element of enterprise support is making it easier to actually deploy different AI tools in the cloud. Installing and configuring AI tools is often a complicated challenge for the uninitiated.
Among the most popular approaches to cloud deployment today is the use of containers and Kubernetes in a cloud-native model. Boitano explained that Nvidia AI Enterprise is available as a collection of containers. There is also a Helm chart, a packaged set of Kubernetes deployment manifests, to help automate the installation and configuration of the AI tools in the cloud.
An even easier approach is provided by Nvidia LaunchPad labs, a hosted service on Nvidia infrastructure for trying out the tools and frameworks that are supported by the AI Enterprise software suite.
The TAO of Nvidia
Making it easier to build models for computer vision and speech recognition use cases is a key goal of Nvidia’s TAO toolkit, which is part of the Nvidia AI Enterprise 2.1 update.
Boitano explained that TAO provides a low-code model for organizations to take an existing pretrained model and tune it to a user’s own specific environment and data. One particular example of where TAO can help is with computer vision applications in factories.
Lighting conditions can vary from one factory to another, creating glare on cameras that can impair recognition. The ability to relabel a modest amount of data collected in a specific environment, where the lighting differs from the conditions the pretrained model was trained under, can help improve accuracy.
“TAO provides a lightweight way to retrain models for new deployments,” Boitano said.
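The transfer-learning pattern Boitano describes can be sketched directly in PyTorch, one of the frameworks the suite supports. This is a generic illustration of the idea, not TAO’s actual API: a frozen stand-in “pretrained” backbone gets a new classification head, and only the head is retrained on the site’s own relabeled data.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a pretrained vision backbone (illustrative only;
# TAO wraps real pretrained models behind a low-code interface).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the pretrained weights so retraining leaves them untouched.
for p in backbone.parameters():
    p.requires_grad = False

# Attach a fresh head to tune on the factory's own relabeled images
# (e.g. captured under the site's specific lighting conditions).
head = nn.Linear(8, 4)  # 4 site-specific classes, chosen for illustration
model = nn.Sequential(backbone, head)

# Only the head's weight and bias tensors remain trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # 2
```

Because the optimizer would only receive `trainable`, retraining is lightweight: far less data and compute than training the full network from scratch, which is what makes per-deployment tuning practical.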
Looking forward to future Nvidia AI Enterprise releases, Boitano said that the plan is to continue making it easier for organizations to use different toolkits for deploying AI and ML workflows in production.