OpenAI unveils video AI model Sora capable of generating 60-second clips

OpenAI is not content with just being known as the ChatGPT or even LLM company: today it unveiled a demo of Sora, its new AI text-to-video generation model, with co-founder and CEO Sam Altman posting on X (formerly Twitter) that it was a “remarkable moment.”

While the product is not yet officially available to the public — Altman said in his post that OpenAI was “starting red-teaming,” or adversarial testing of its security defenses, flaws and potential misuses — he did note that it was being made available to a “limited number of creators,” with broader public access to come at a later date.

Intensely competitive space for video AI models

Sora is entering an intensely competitive space, with rival startups Runway, Pika and Stability AI offering dedicated AI video generation models, and stalwarts such as Google showing off the capabilities of its Lumiere model. The sample videos OpenAI shared today stand out for the sharpness of their resolution, smoothness of motion, accuracy in rendering human anatomy and the physical world, and, most of all, run time.

Unlike Runway and Pika, which offer just 4 seconds of generation at a time with options to expand, OpenAI’s Sora offers 60-second video generations right off the bat.

Altman and other members of OpenAI’s leadership and Sora team, including researcher Will Depue, are collecting prompts from users on X/Twitter and running them through Sora as a kind of live, crowdsourced demo of the model’s capabilities — so head over and submit some if you are interested (I did).
