Why Runway is Focusing on the Robotics Industry for Future Revenue Expansion
For the past seven years, Runway has been at the forefront of visual-generation technologies tailored for the creative industry. The company is now venturing into robotics as a fresh path for innovation.
Based in New York, Runway is known for its AI-powered video and image generation models, which recreate reality with striking accuracy. Notable recent releases include Gen-4, its video generation model launched in March, and Runway Aleph, a video editing model introduced in July.
As Runway’s models have progressed to resemble real life more closely, interest from robotics and autonomous vehicle manufacturers has surged. Anastasis Germanidis, the co-founder and CTO of Runway, shared his perspectives on this transition with TechCrunch.
“We recognize that our capacity to simulate the world holds vast potential beyond entertainment, even though entertainment remains a key and expanding focus for us,” Germanidis stated. “This capability enhances the scalability and cost-effectiveness of training [robotic] algorithms that interact with real-world environments, whether in robotics or self-driving technology.”
Germanidis noted that the exploration of robotics and autonomous vehicles was not part of Runway’s initial vision at its inception in 2018. It was through outreach from these sectors that the company understood its models have a broader range of applications than originally expected.
Robotics companies are utilizing Runway’s technology for diverse training scenarios. Germanidis highlighted that training robots and autonomous vehicles in real-world environments is costly, labor-intensive, and challenging to scale.
While Runway acknowledges that simulations cannot fully replace traditional real-world training, Germanidis asserted that companies can still gain significant advantages from simulations produced with Runway's models because of their accuracy.
Unlike real-world training, these models let companies test specific variables and scenarios precisely while holding all other factors constant, he added.
“You can step back and simulate the outcomes of various actions,” he explained. “If a car takes this turn instead of that one, or executes this maneuver, what happens? Creating such scenarios in a consistent context is difficult in real life, as it’s hard to maintain all other environmental variables while assessing the impact of a specific action.”
Runway is not alone in this arena; earlier this month, Nvidia introduced the latest version of its Cosmos world models along with new robot training infrastructure.
According to Germanidis, the company does not plan to launch a distinct line of models specifically for robotics and autonomous driving clients. Instead, Runway aims to enhance its existing models for these fields and is also focused on building a dedicated robotics team.
Germanidis mentioned that while these sectors weren’t part of the company’s initial pitch to investors, they have responded positively to this new direction. Runway has raised over $500 million from investors including Nvidia, Google, and General Atlantic, achieving a valuation of $3 billion.
“We view our company as grounded in a principle rather than merely reacting to market demands,” Germanidis remarked. “This principle revolves around simulation—enhancing the representation of the world. Once we develop powerful models, we can apply them across diverse markets and industries. We believe the sectors we have identified are already well-positioned and will continue to evolve, driven by the capabilities of generative models.”