Runway, one of several AI startups developing video generation technology, today announced an API that enables developers and organizations to embed the company's generative AI models into third-party platforms, apps, and services.
Access is currently limited (there is a waitlist), and the Runway API offers just one model, Gen-3 Alpha Turbo, a faster but less feature-rich version of Runway's flagship Gen-3 Alpha, along with two plans: Build (for individuals and teams) and Enterprise. The base price is 1 cent per credit, with one second of generated video costing five credits, and Runway says that “trusted strategic partners” such as marketing group Omnicom are already using the API.
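The published pricing translates directly into a per-second cost. As a back-of-the-envelope sketch (the constants below simply restate the figures above; they are not an official Runway calculator):

```python
# Rough cost estimate from Runway's published API pricing:
# 1 cent per credit, 5 credits per second of generated video.
CREDIT_PRICE_USD = 0.01
CREDITS_PER_SECOND = 5

def video_cost_usd(seconds: float) -> float:
    """Estimated API cost in USD for a clip of the given length."""
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

# A 10-second clip is 50 credits, i.e. about 5 cents per second.
print(f"${video_cost_usd(10):.2f}")
```

In other words, generation works out to roughly $0.05 per second of footage at the base rate, before any plan-specific discounts.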
The Runway API also comes with an unusual disclosure requirement: the company wrote in a blog post that any interface using the API must “prominently display” a “Powered by Runway” banner that links to Runway's website, which it says helps users understand the technology behind the application and encourages adherence to its terms of use.
Runway, which is backed by investors including Salesforce, Google, and Nvidia and was last valued at $1.5 billion, faces stiff competition in the video generation space from OpenAI, Google, Adobe, and others. OpenAI plans to release its video generation model, Sora, in some form this fall, while startups like Luma Labs continue to refine the technology.
Image credit: Runway
The preliminary release of the Runway API makes Runway one of the first AI vendors to offer generative video models through an API. But while the API may advance the company's path to monetization (or at least help recoup the high costs of training and running the models), it doesn't resolve the lingering legal questions surrounding those models, or generative AI technology in general.
Runway's video generation models, like all video generation models, are trained on vast numbers of video examples and then “learn” patterns in these videos to generate new footage. Where does the training data come from? Runway, like many vendors these days, refuses to reveal the answer for fear of sacrificing competitive advantage.
But the training details could also give rise to IP-related litigation if Runway was training on copyrighted data without permission, and there's evidence that it did: A report published by 404 Media in July exposed an internal spreadsheet of training data with links to YouTube channels for creators like Netflix, Rockstar Games, Disney, Linus Tech Tips, and MKBHD.
It's unclear whether Runway ultimately used any of the videos in the spreadsheet to train its models. In an interview with TechCrunch in June, Runway co-founder Anastasis Germanidis would only say that the company uses a “curated in-house dataset” to train its models. But if Runway did train on that data, it wouldn't be the only AI vendor skirting copyright rules.
Earlier this year, OpenAI CTO Mira Murati didn't outright deny that Sora was trained on YouTube content, and Nvidia has reportedly used YouTube videos to train an in-house video generation model called Cosmos.
Many generative AI vendors believe that a doctrine known as fair use gives them legal cover, but others aren't taking the risk: Adobe is said to be offering artists compensation in exchange for clips to train its video generation models. With any luck, the question will be settled soon by the cases now working their way through the courts.
Whatever the outcome, one thing is clear: generative AI video tools threaten to upend the film and TV industry as we know it. A 2024 study commissioned by the Animation Guild, the union representing Hollywood animators and cartoonists, found that 75% of film productions that adopted AI reduced, consolidated, or eliminated jobs after implementing the technology. The study also estimated that more than 100,000 jobs in the U.S. entertainment industry will be destroyed by generative AI by 2026.