Nvidia enables high-res Omniverse Cloud imagery on Apple Vision Pro


Nvidia showed off its Omniverse digital twin technology, viewed through the new prism of the Apple Vision Pro headset.

Nvidia engineers have enabled the Omniverse Cloud application programming interface (API) to stream interactive, industrial digital twins into the Apple Vision Pro. I did the demo, and I could see the cloud-based imagery streaming at a resolution that was far better than an app running on the Vision Pro can support. I’ll explain that bit of magic in a bit.

This is the kind of application where the Apple Vision Pro — which is pricey for consumers at $3,500 — could shine. Industrial and enterprise companies can afford to equip their teams with such headsets, considering digital twins can save a lot of money.

“Omniverse Cloud is available on the Apple Vision Pro,” said Jensen Huang, CEO of Nvidia, during the keynote at Nvidia GTC 2024. He got a round of applause.

The idea of a digital twin is to build a factory in a digital form first, simulating everything in a realistic way so that the design can be iterated and perfected before anyone has to break ground on the physical factory. Once the factory is built, the digital twin can be used to reconfigure the factory quickly. And with sensors capturing data on the factory’s operation, the designers can modify the digital twin so that the simulation is more accurate. This feedback loop can save the enterprise a lot of money.
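
To make that feedback loop concrete, here is a minimal toy sketch in Swift of a digital twin being corrected by sensor data. Every type and number in it is hypothetical and purely illustrative; none of this comes from Nvidia's Omniverse APIs.

```swift
import Foundation

// Hypothetical toy model of the digital-twin feedback loop described above.
struct SensorReading {
    let stationID: String
    let observedCycleSeconds: Double  // measured on the physical line
}

struct DigitalTwin {
    // Simulated cycle time per station, keyed by station ID.
    var simulatedCycleSeconds: [String: Double]

    // Nudge the simulation toward reality whenever sensors disagree with it.
    mutating func assimilate(_ reading: SensorReading, gain: Double = 0.5) {
        let predicted = simulatedCycleSeconds[reading.stationID] ?? reading.observedCycleSeconds
        let corrected = predicted + gain * (reading.observedCycleSeconds - predicted)
        simulatedCycleSeconds[reading.stationID] = corrected
    }
}

var twin = DigitalTwin(simulatedCycleSeconds: ["weld-01": 42.0])
twin.assimilate(SensorReading(stationID: "weld-01", observedCycleSeconds: 45.5))
print(twin.simulatedCycleSeconds["weld-01"] ?? 0)  // 43.75: the simulation drifts toward the measurement
```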

Announced today at Nvidia GTC, a new software framework built on Omniverse Cloud APIs lets developers easily send their Universal Scene Description (OpenUSD) industrial scenes from their content creation applications to the Nvidia Graphics Delivery Network (GDN), a global network of graphics-ready data centers that can stream advanced 3D experiences to Apple Vision Pro. It’s the same network Nvidia created for its GeForce Now cloud gaming service.
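
Nvidia hasn’t published the exact shape of those calls here, but conceptually the workflow is an upload-then-stream handshake: push a USD scene to the service, get back a session the headset can attach to. Below is a hedged Swift sketch under that assumption; the endpoint URL, headers and JSON response are illustrative placeholders, not Nvidia’s documented API.

```swift
import Foundation

// Hypothetical sketch: push an OpenUSD scene to a cloud streaming service.
// The endpoint, token and JSON shape are assumptions for illustration only.
func submitSceneForStreaming(usdFile: URL, apiToken: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://gdn.example.com/v1/scenes")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")

    // Upload the scene file; the service would return a session ID that the
    // headset client uses to attach to the rendered stream.
    let (data, response) = try await URLSession.shared.upload(for: request, fromFile: usdFile)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    let body = try JSONDecoder().decode([String: String].self, from: data)
    guard let sessionID = body["sessionID"] else { throw URLError(.cannotParseResponse) }
    return sessionID
}
```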

Live demo

Omniverse works with the Apple Vision Pro.

Nvidia showed me a demo of the tech, which will debut at GTC. In the demo, I was able to see an interactive, physically accurate digital twin of a car streamed in full fidelity to Apple Vision Pro’s high-resolution displays.

What I was seeing was about 100 billion triangles in the scene, rendered with ray tracing, global illumination and dynamic lighting, with no pre-computation, and with all of the interaction rendered in the cloud. It was like seeing an animation on a big-screen TV, only inside the headset.

The demo showed a car configurator application developed by CGI studio Katana on the Omniverse platform. Nvidia showed me how to toggle through paint and trim options and even enter the vehicle — leveraging the power of spatial computing by blending 3D photorealistic environments with the physical world. I could reach my fingers out at the car model and pinch to shrink it, or spread my fingers to expand it to a size taller than me. I could also shrink the car down to the size of a Matchbox car. Either way, the quality of the image still looked pretty amazing.
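
That pinch-to-scale interaction is the kind of thing a visionOS client handles on-device with standard SwiftUI gestures, even while the heavy rendering happens in the cloud. Here is a minimal sketch using real RealityKit and SwiftUI APIs, with a local placeholder box standing in for the streamed car model:

```swift
import SwiftUI
import RealityKit

// Minimal visionOS sketch of pinch-to-scale on a 3D model.
// The box is a local stand-in; the GTC demo scales a cloud-streamed scene.
struct CarScaleView: View {
    @State private var baseScale: Float = 1.0

    var body: some View {
        RealityView { content in
            let car = ModelEntity(mesh: .generateBox(size: 0.3),
                                  materials: [SimpleMaterial(color: .red, isMetallic: true)])
            car.components.set(InputTargetComponent())
            car.generateCollisionShapes(recursive: true)
            content.add(car)
        }
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Scale relative to where the pinch started.
                    value.entity.scale = .init(repeating: baseScale * Float(value.magnification))
                }
                .onEnded { value in
                    baseScale *= Float(value.magnification)
                }
        )
    }
}
```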

Normally, the quality of visuals is limited on the Apple Vision Pro because of the limited amount of memory (8GB or 16GB) available to applications. But cloud streaming enables Nvidia to send imagery from a cloud data center to the headset, streaming it in as needed, said Rev Lebaredian, vice president of simulation at Nvidia, in an interview with GamesBeat.

“We’ve been working for a while now to essentially bring all of our technologies with RTX and rendering to the Vision Pro,” Lebaredian said. “There are limitations on what you can do on device in terms of the graphics quality. The experience you see is essentially a rendering in the cloud, done with Omniverse and the Omniverse RTX renderer, with USD content that is larger and more complex than anything the device itself can handle, or actually most normal computers can handle. It’s running on the same infrastructure as our GeForce Now cloud gaming service. It’s on GDN and streaming directly to the device.”

For engineers or anyone else who needs an accurate picture of a 3D scene such as a digital twin, this lets them design from inside the virtual world and iterate faster, with a realistic sense of what the simulated world is really like.

I could go to the environment tab and slide a control to change the time of day, and the lighting changed once I let go of the slider. There were occasional artifacts, like seeing purple when I looked through a side window. But for the most part it was amazing. I sat on a chair and moved into the driver’s seat, so I got to see what it would look like to be inside the vehicle. I also got to see what it looks like to be a toddler on the floor of the car.
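
Because the pixels come from the cloud, a control like that time-of-day slider only needs to forward a parameter upstream rather than touch the local GPU. Here is a hedged sketch of what such a control could look like; sendEnvironmentSetting is a hypothetical stand-in, since the actual control channel isn’t public:

```swift
import SwiftUI

// Hypothetical sketch: a time-of-day control whose only job is to forward
// a parameter to the cloud renderer; the updated pixels come back in the stream.
struct EnvironmentPanel: View {
    @State private var hourOfDay: Double = 12

    var body: some View {
        VStack {
            Text("Time of day: \(hourOfDay, specifier: "%.1f")h")
            Slider(value: $hourOfDay, in: 0...24, step: 0.5)
        }
        .onChange(of: hourOfDay) { _, newValue in
            // Placeholder for the real client call; not a documented Nvidia API.
            sendEnvironmentSetting(key: "sun.hour", value: newValue)
        }
    }

    func sendEnvironmentSetting(key: String, value: Double) {
        // A real client would serialize this onto the streaming session's
        // control channel; here we just log it.
        print("set \(key) = \(value)")
    }
}
```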

I could configure the car for different interior designs and different models. And it recomputed the scene on the fly.

Could I drive it? No. It’s not actually a video game.

Inside a Wistron factory

In my demo, Nvidia also showed me a scene from a portion of a Wistron server factory in Taiwan. The factory showed a set of rails that could transport an Nvidia DGX supercomputer from one part of the factory to another. The reflections, shadows and light all looked pretty good. And I could use the Vision Pro’s pinch-to-zoom feature to zoom in on the details or zoom out as I wished in real time. The factory model used the actual computer-aided design (CAD) models with hundreds of millions of polygons in the image.

“The idea here is that folks that are building factories can plan where their equipment will go ahead of time and make sure that they don’t have any line inefficiencies before they actually put down their major pieces of equipment,” Lebaredian said.

Of course, I don’t think I’ll be making a trip to Taiwan to check out the factory, so it was very interesting to see it in such high-quality imagery.

The cloud essentially lets the viewer see a scene built from around 20 gigabytes of data, while the Apple Vision Pro on its own could handle just a few gigabytes, as the headset has to run on low power with a limited processor. The cloud demo brings roughly 96 gigabytes of RAM to bear per eye, instead of the headset’s 16, along with around 50 teraflops of processing per eye; the Vision Pro can’t do that kind of processing for rendering imagery. The data for the demo came from an Omniverse partner called Katana, which Nissan used to create promotional materials for its cars.

“We’re really excited to show that we can take the exact same data and stream it to the Vision Pro with no loss and actually an increase in clarity,” Lebaredian said.

Bringing the power of RTX enterprise cloud rendering to spatial computing

The Omniverse Cloud enables better visuals of digital twins on the Apple Vision Pro.

Spatial computing has emerged as a powerful technology for delivering immersive experiences and seamless interactions between people, products, processes and physical spaces. Industrial enterprise use cases require incredibly high-resolution displays and powerful sensors operating at high frame rates to make manufacturing experiences true to reality.

This new Omniverse-based workflow combines Apple Vision Pro’s groundbreaking high-resolution displays with Nvidia’s powerful RTX cloud rendering to deliver spatial computing experiences with just the device and an internet connection.

This cloud-based approach allows real-time, physically based renderings to be streamed seamlessly to Apple Vision Pro, delivering high-fidelity visuals without compromising the details of massive, engineering-fidelity datasets.

“The breakthrough ultra-high-resolution displays of Apple Vision Pro, combined with photorealistic rendering of OpenUSD content streamed from Nvidia accelerated computing, unlocks an incredible opportunity for the advancement of immersive experiences,” said Mike Rockwell, vice president of the Vision Products Group at Apple, in a statement. “Spatial computing will redefine how designers and developers build captivating digital content, driving a new era of creativity and engagement.”

“Apple Vision Pro is the first untethered device which allows for enterprise customers to realize their work without compromise,” said Lebaredian. “We look forward to our customers having access to these amazing tools.”

The workflow also introduces hybrid rendering, a groundbreaking technique that combines local and remote rendering on the device. Users can render fully interactive experiences in a single application, combining Apple’s native SwiftUI and RealityKit with the Omniverse RTX Renderer streaming from GDN.
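
Nvidia hasn’t published the client-side API for this hybrid mode, but conceptually one RealityView can host both layers: locally rendered RealityKit entities alongside a surface driven by the incoming stream. Here is a hedged sketch under that assumption, with the stream wiring left as a labeled placeholder:

```swift
import SwiftUI
import RealityKit

// Hypothetical sketch of hybrid rendering: local RealityKit content and a
// cloud-rendered layer living in the same RealityView. The stream hookup
// below is a placeholder, not a published Nvidia SDK call.
struct HybridSceneView: View {
    var body: some View {
        RealityView { content in
            // Local layer: lightweight UI geometry rendered on-device.
            let label = ModelEntity(mesh: .generateText("Configurator",
                                                        extrusionDepth: 0.005))
            label.position = [0, 1.2, -1]
            content.add(label)

            // Remote layer: a quad that would display frames decoded from the
            // Omniverse RTX stream. Attaching the decoded video texture to it
            // is the hypothetical part.
            let streamSurface = ModelEntity(mesh: .generatePlane(width: 1.6, depth: 0.9))
            streamSurface.position = [0, 1.0, -2]
            content.add(streamSurface)
            // e.g. attachDecodedStreamTexture(to: streamSurface)  // placeholder
        }
    }
}
```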

Nvidia GDN, available in over 130 countries, taps Nvidia’s global cloud-to-edge streaming infrastructure to deliver smooth, high-fidelity, interactive experiences. By moving heavy compute tasks to GDN, users can tackle the most demanding rendering use cases, no matter the size or complexity of the dataset.

Enhancing spatial computing workloads across applications

The Omniverse-based workflow showed potential for a wide range of use cases. For example, designers could use the technology to see their 3D data in full fidelity, with no loss in quality or model decimation.

This means designers can interact with trustworthy simulations that look and behave like the real physical product. This also opens new channels and opportunities for e-commerce experiences. In industrial settings, factory planners can view and interact with their full engineering factory datasets, letting them optimize their workflows and identify potential bottlenecks.

For developers and independent software vendors, Nvidia is building the capabilities that would allow them to use the native tools on Apple Vision Pro to seamlessly interact with existing data in their applications.
