Could AI be taught multiple skills at the same time? Are immersive displays using holography closer to reality than ever? No one can say with any certainty what precisely the future of artificial intelligence (AI) will hold. But one way to get a glimpse is by looking at the research that Nvidia will present at Siggraph 2022, to be held August 8-11.
Nvidia is collaborating with researchers to present 16 papers at Siggraph 2022, spanning multiple research topics that impact the intersection of graphics and AI technologies.
One paper, from researchers at the University of Toronto and UC Berkeley, details a reinforcement learning model that could help teach AI multiple skills at the same time.
Another delves into new techniques to help build large-scale virtual worlds with instant neural graphics primitives. Stepping closer to technologies only seen in science fiction, there is also research on holography that could one day pave the way for new display technology that will enable immersive telepresence.
“Our goal is to do work that’s going to impact the company,” David Luebke, vice president of graphics research at Nvidia, told VentureBeat. “It’s about solving problems where people don’t already know the answer and there is no easy engineering solution, so you have to do research.”
The intersection of research and enterprise AI
The 16 papers that Nvidia is helping to present focus on innovations that impact graphics, which is what the Siggraph show is all about. Luebke noted, however, that nearly all the research is also relevant for AI use outside the graphics field.
“I think of graphics as one of the hardest and most interesting applications of computation,” Luebke said. “So it’s no surprise that AI is revolutionizing graphics and graphics is providing a real showcase for AI.”
Luebke said that the researchers who worked on the reinforcement learning model paper actually view themselves as more in the robotics field than graphics. The model has potential applicability to robots as well as any other AI that needs to learn how to perform multiple actions.
“The thing about graphics is that it’s really, really hard and it’s really, really compelling,” he said. “Siggraph is a place where we showcase our graphics accomplishments, but almost everything we do there is applicable in a broader context as well.”
Computational holography and the future of telepresence
Throughout the COVID-19 pandemic, individuals and organizations around the world suddenly became a lot more familiar with video conferencing technologies like Zoom. Use of virtual reality headsets has also grown, connecting to the emerging concept of the metaverse. The metaverse and telepresence could well one day become significantly more immersive.
One of the papers being presented by Nvidia at Siggraph concerns computational holography. Luebke explained that, at a basic level, computational holography is a technique for constructing a three-dimensional scene in which the human eye can focus anywhere and see the correct view, as if the scene were really there. The research being presented at Siggraph details new approaches to computational holography that could one day lead to VR headsets dramatically thinner than current options, providing a more immersive and lifelike experience.
“That has been kind of a holy grail for computer graphics for years and years,” Luebke said about the work on computational holography. “This research is showing that you can use computation, including neural networks and AI, to improve the quality of holographic displays that work and look good.”
Looking beyond the papers being presented at Siggraph, Luebke said that Nvidia's research group is keenly interested in telepresence innovations.