Runway launches new ‘Watch’ feature as CEO says Hollywood AI discourse ‘needs to be more nuanced’ 



As Hollywood strikes by actors and writers continue, with the impact of generative AI on their industry and jobs a central concern, Runway CEO Cristóbal Valenzuela knows his generative AI video startup — most recently valued at $1.5 billion — is under fire from those on the picket line. 

But when I visited the company’s surprisingly spartan Manhattan headquarters last week, Valenzuela told me that while he doesn’t want to dismiss the concerns of writers and actors around their likenesses being generated by AI, or their film-industry jobs being replaced by AI, he believes the conversation around Hollywood and AI “needs to be more nuanced.”

“I empathize with the artistic community who might feel threatened or who might have questions,” he said. “At the same time, when you speak with the creators or filmmakers, you start understanding that it’s different from a singular point of view that this is going to replace everything, because it’s not — it’s going to augment a lot of other things as well.”

Hollywood’s pushback on AI hasn’t kept the New York City-based company from its efforts to build a community of artists and filmmakers and to support and promote their AI-generated output. In March, Runway held its first annual AI Film Festival, and today it launched a new feature on its website and iOS app called Watch — which allows users to share and consume longer-form videos created with Runway tools. 


Runway “Watch” feature

“A lot of what we’re working towards is both democratizing and making these tools more convenient, but also showcasing the stories being made with those tools,” Valenzuela said. “We really need to highlight the great and positive outcomes with technology. One of those efforts is by showcasing them in the Watch section.” 

Runway founders bonded over digital art

Runway’s offices are located in an unpretentious Tribeca building just a block below noisy Canal Street, abutting a graffiti-filled alleyway. Upon entering, there are no immediate physical clues that the office is in fact the home base of one of the industry’s hottest gen AI startups, which drew a fresh infusion of $141 million last month from Google, Nvidia and Salesforce, among other investors. 

Other than a few art posters and a shelf filled with books about design, the Runway offices show off surprisingly little evidence of the company’s artistic bona fides.

Originally from Chile, Valenzuela earned a bachelor’s degree in economics and business management, and then a master’s degree in arts and design in 2012. In 2018, he became a researcher at New York University’s Tisch School of the Arts’ Interactive Telecommunications Program (ITP), which is sometimes described as an art school for engineers — or an engineering school for artists. 

That year, Valenzuela also founded Runway with Tisch colleagues Anastasis Germanidis and Alejandro Matamala Ortiz after the trio bonded over a mutual interest in using digital tools for design. Today, in addition to its initial text-to-video generative AI offering, Runway provides image-to-video, video-to-video, 3D texture, video editing and AI training options.  

Early text-to-video typewriter foreshadowed generative AI

While Valenzuela said he has always experimented with artistic mediums and techniques, the work he has exhibited has been digital art. One early interactive art project called “Regression,” exhibited at a museum in Chile in 2012, makes it crystal clear that the concept of text-to-video has been on his mind for over a decade.

“It was an old typewriter from my grandpa,” he said. “I connected and built a network of the keystrokes of the typewriter. Imagine a pedestal with a typewriter and a set of white walls. Every keystroke was connected to one another and went to computer software I wrote so that every time you wrote, videos were projected — you were typing words in a physical device and everything you were typing was being recorded in this infinite piece of paper.”

The videos were not generated back then, of course, but rather pre-existing videos Valenzuela assembled. “But that was the type of thing that was interesting,” he explained. These days, he says he doesn’t practice making much traditional art: “My art right now is building Runway.”

“Regression” by Cristóbal Valenzuela (2012)

‘The type of creative outputs we’re trying to provoke’

In June, “Genesis,” a cinematic, 45-second-long sci-fi movie trailer posted by Nicolas Neubert, quickly went viral, with millions of views and coverage on CNN and in Forbes. It was made with Gen-2, Runway’s new generative AI video creation tool.

“Genesis was so great,” Valenzuela said. “I think that’s exactly the type of creative outputs that we’re trying to provoke. It’s great to see those kinds of things being put out there.” He added that it’s “incredible” how fast the process was for the creator, while noting that the amount of work behind it was still significant.

“I think the biggest takeaway is that this trailer, and the many more that we’ve seen coming out, are not just generated with a word, which is what most people think,” he said, pointing to the language models that “have overtaken the public discourse, where everything is reduced to chatbots where you prompt something and you get something out.”

Instead, he explained, “you’re making videos, you’re making art — you’re making something that’s visual. It’s all about iteration and doing it multiple times until you pick the one that you like, and then double down on that.” Then, he said, you get to a point where you have a story that you piece together and create something “as beautiful and as weird as he did.”

But that whole process, Valenzuela said, “might be misunderstood — as if AI is some sort of automated system that creates everything for you.” Unlike his 2012 interactive art project, it is not possible to simply type a few words and get a fully fleshed-out trailer or movie.

“That’s a very reductionist view of how filmmaking works, but secondly, how art works,” he pointed out. “Just because you have a canvas and paint, you’re not going to become an artist. You need to paint a lot.”

At the intersection of art and technology

When I asked Valenzuela if it feels strange being in the middle of the conversation around the intersection of art and technology, he said that it does — particularly since the three founders come from exactly that background. What feels different these days, he said, is the mainstream conversation.

“It’s great to see that this has piqued the interest of more people, that more people are questioning what the role of technology like AI is, and the role of art,” he said. “We’ve been working on this for so much time, and we have so many insights on how to best drive both the technology and the conversations forward. I think we need to do that more broadly now that it’s become more mainstream.”

What he wants, Valenzuela emphasized, is for people to experiment with Runway’s tools before passing judgment.

“There’s a lot of human agency behind it, perhaps way more than if you used any other tool,” he said. “We need to get more people to use it, because the misconceptions might come from a place of never actually having used something like this because the technology didn’t exist six months ago.” These days, he added, he spends most of his time “just getting people to experiment with it,” as though it were a new camera.

“If you want to understand how it works, use it,” he said. “This thing is not magical on its own. It’s not going to create a movie; you need to have control over it.”

That experimentation and nuance, he added, applies to the entire way AI as a technology is perceived. “It’s a very nuanced world and I want to make sure we don’t trap ourselves and industries that we care a lot about, like filmmaking, into one story about how we collectively think about technology,” he said. “We’re in a moment right now where [AI] is going to change a lot of things. We need more diversity of thought, we need more people with different backgrounds, we need more people from different disciplines speaking about it, and not just one set of people.”

That sounded similar to Valenzuela’s own story of bringing art and technology together. “I’ve never been a fan of siloing disciplines — like ‘you’re a painter’ or ‘you’re in sculpture,’” he said. “You’re whatever you want to be. Anyone can be an artist if you’re using something to express a view of the world.”
