How Unity Simulation Pro and Unity SystemGraph use AI to train systems

Last week, at its AI Summit, Unity, a platform that aims to enable users to produce real-time 3D content, announced the launch of two new products designed to simplify training complex systems with AI: Unity Simulation Pro, a headless multi-GPU distributed rendering solution, and Unity SystemGraph, a node-based editor extension.

The two products are designed to make it easier for engineers to test and analyze the capabilities of AI systems virtually — that is, without having access to physical hardware.

For instance, Unity Simulation Pro is built to let developers use distributed rendering to model and test systems faster than real time, independent of physical hardware, so they can iterate at a much higher rate.

“Unity Simulation Pro is purpose-built for building cutting-edge simulation applications. With this product we are enabling a future where we’ll see more developers create and evolve autonomous systems, across different industries, at a quicker, safer, and more cost-effective rate,” said Danny Lange, senior vice president of artificial intelligence at Unity.

Training intelligent robots with AI

The fast, non-hardware-dependent capabilities of Unity Simulation Pro are a key reason why the Allen Institute for AI and Carnegie Mellon University use the solution to help train robots to perform navigation and manipulation tasks.

By using the solution, the researchers managed to accelerate the training process from 200 frames per second (FPS) with one GPU to 5000 FPS with 32 GPUs, according to the university.
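For context, a quick back-of-the-envelope check of what that scaling implies (the 200 FPS and 5000 FPS figures are the ones reported above; the efficiency calculation is our own):

```python
# Implied scaling from the reported AI2-THOR throughput numbers.
baseline_fps = 200      # reported single-GPU throughput
distributed_fps = 5000  # reported throughput across 32 GPUs
gpus = 32

speedup = distributed_fps / baseline_fps  # 25x overall
efficiency = speedup / gpus               # ~0.78, i.e. roughly 78% per-GPU scaling efficiency

print(f"speedup: {speedup:.0f}x, per-GPU efficiency: {efficiency:.0%}")
```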

“AI2-THOR is a pioneering simulation environment with the most diverse repository of indoor scenes, and while it provides highly realistic simulations, this high fidelity is computationally intensive,” said Abhinav Gupta, associate professor at Carnegie Mellon University. “The headless version of AI2-THOR, which is built on Unity Simulation Pro, enables us to train our models in large clusters. Experiments that used to take weeks to finish can now finish in just a few days.”
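The open-source AI2-THOR Python package exposes the kind of headless, per-GPU rendering the researchers describe. The sketch below is ours, not CMU’s training code: it launches one headless controller per GPU with Python’s multiprocessing module and reports aggregate frame throughput. The CloudRendering platform and the gpu_device argument reflect the public ai2thor package, but exact support may vary by version.

```python
import multiprocessing as mp
import time

from ai2thor.controller import Controller
from ai2thor.platform import CloudRendering  # headless GPU rendering backend


def rollout(gpu_id, steps, fps_queue):
    """Run one headless AI2-THOR controller pinned to a single GPU."""
    controller = Controller(
        platform=CloudRendering,  # no display server required
        gpu_device=gpu_id,        # assumed per-GPU pinning argument
        scene="FloorPlan1",
        width=224,
        height=224,
    )
    start = time.time()
    for _ in range(steps):
        controller.step(action="MoveAhead")
    controller.stop()
    fps_queue.put(steps / (time.time() - start))


if __name__ == "__main__":
    num_gpus, steps = 4, 500
    queue = mp.Queue()
    workers = [
        mp.Process(target=rollout, args=(gpu, steps, queue))
        for gpu in range(num_gpus)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    total_fps = sum(queue.get() for _ in range(num_gpus))
    print(f"aggregate throughput across {num_gpus} GPUs: {total_fps:.0f} FPS")
```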

Traditionally, engineers who wanted to test autonomous systems had to develop complex simulations with large environments. Unity Simulation Pro aims to shorten that process, enabling developers to create and test autonomous systems at a faster pace.

Node-based editing

Similarly, Unity SystemGraph is a node-based editor designed to emulate mechatronics, robotics, and photo-sensor systems. Unity designed the tool to cut testing costs by letting engineers mimic sensors, cameras, and physical robots virtually.

“With Unity SystemGraph, engineers can mimic sensors, cameras, and even physical robots in a complex system. Then they can test and train these systems at faster than real-time rates with Unity Simulation Pro, achieving optimized performance levels at tremendous cost and time savings,” said Dave Rhodes, general manager, Digital Twins, Unity.
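To illustrate what “node-based” means in this context, here is a toy sensor pipeline sketched in Python: nodes with outputs wired into a graph and evaluated in dependency order. The class and node names are hypothetical and are not SystemGraph’s API (SystemGraph is a Unity editor extension); the sketch only conveys the general composition model.

```python
# Illustrative only: a toy node graph for a camera-sensor pipeline.
from dataclasses import dataclass, field
from typing import Callable, Dict, List
import random


@dataclass
class Node:
    name: str
    compute: Callable[[Dict[str, float]], float]  # upstream outputs -> this node's output
    inputs: List["Node"] = field(default_factory=list)

    def evaluate(self) -> float:
        upstream = {n.name: n.evaluate() for n in self.inputs}
        return self.compute(upstream)


# A three-node pipeline: ideal camera exposure -> sensor noise -> clamped output.
camera = Node("camera", lambda _: 0.8)  # pretend 0.8 is the ideal pixel intensity
noise = Node(
    "noise",
    lambda up: up["camera"] + random.gauss(0.0, 0.05),  # add Gaussian sensor noise
    inputs=[camera],
)
output = Node(
    "output",
    lambda up: min(max(up["noise"], 0.0), 1.0),  # clamp to the valid sensor range
    inputs=[noise],
)

print(f"simulated sensor reading: {output.evaluate():.3f}")
```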

According to Unity, Volvo Cars has already used Unity SystemGraph for high-fidelity sensor modeling and autonomous-driving perception testing of its AI systems.

Ultra-fast rendering

Unity isn’t the only development platform on the market; it competes with providers like Roblox, Chartboost, and Mobvista. What sets Unity Simulation Pro apart, the company says, is its use of distributed rendering, which spreads work across the GPU resources available to users so they can render simulations faster than on less-optimized platforms.

