The original startup behind Stable Diffusion has launched generative AI for video
Founded in 2018, Runway has been developing AI-powered video editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream TV and movie studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the program's graphics, and the visual effects team behind the hit movie Everything Everywhere All at Once used the company's technology to help create certain scenes.
In 2021, Runway teamed up with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs needed to train the model on more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon.
But the two companies no longer collaborate. Getty is currently taking legal action against Stability AI, claiming that the company used Getty's images, which appear in Stable Diffusion's training data, without permission; Runway wants to keep its distance.
Gen-1 represents a fresh start for Runway. It follows several text-to-video models revealed late last year, including Meta's Make-A-Video and Google's Phenaki, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI model from Google revealed last week that can create new videos from existing ones by applying specified styles. But at least judging from Runway's demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway built its model with customers in mind. "This is one of the first models that was developed really closely with a community of video makers," says Valenzuela. "It comes with years of insight into how VFX filmmakers and editors actually work in post-production."
Gen-1, which runs in the cloud through Runway’s website, is currently available to a handful of invited users and will be rolled out to everyone on the waitlist in a few weeks.
Last year's boom in generative AI was fueled by millions of people getting their hands on powerful creative tools for the first time and sharing what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.
"We are really close to having full feature films being generated," he said. "We're close to a point where most of the content you see online will be generated."