
Reflection AI Raises $2B to Establish Itself as America’s Leading Open Frontier AI Lab, Challenging DeepSeek

Reflection AI, a startup founded last year by two former Google DeepMind researchers, has raised $2 billion in funding, lifting its valuation to $8 billion, a significant jump from $545 million just seven months prior. Originally focused on autonomous coding agents, the company is now positioning itself as an open-source alternative to major players like OpenAI and Anthropic, while also acting as a Western counterpart to Chinese AI firms like DeepSeek.

Reflection AI was established in March 2024 by Misha Laskin, who led reward modeling for DeepMind's Gemini project, and Ioannis Antonoglou, a co-creator of AlphaGo, the AI that famously defeated the world Go champion in 2016. That pedigree is central to the startup's pitch: proof that top AI researchers can build leading-edge models outside the major tech firms.

Alongside the funding, Reflection AI announced the formation of a team of esteemed experts from DeepMind and OpenAI, which has built a sophisticated AI training infrastructure available to the public. Importantly, Reflection AI has unveiled a scalable business model that aligns with its open intelligence vision.

As of now, Reflection AI employs approximately 60 individuals, primarily researchers and engineers with expertise in infrastructure, data training, and algorithm development, as noted by CEO Misha Laskin. The company has developed a compute cluster and plans to launch a cutting-edge language model next year, trained on “tens of trillions of tokens,” as Laskin shared with TechCrunch.

“We have accomplished something once believed to be possible only within the world’s leading laboratories: a large-scale LLM and reinforcement learning platform capable of training large Mixture-of-Experts (MoE) models at the frontier level,” Reflection AI stated in a post on X. “Our direct experience highlighted the success of our method in the essential field of autonomous coding. With this milestone achieved, we are now extending these techniques to general agentic reasoning.”

MoE is an architecture behind many frontier LLMs: a routing network sends each token to a small subset of specialized “expert” subnetworks, so only a fraction of the model’s parameters are active at once. Training such systems at scale was previously feasible only for large, closed AI labs. DeepSeek marked a crucial advance by showing how to train these models openly at scale, a path that Qwen, Kimi, and other Chinese models have since followed.
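The routing idea described above can be illustrated with a minimal toy sketch. All dimensions, weights, and function names here are illustrative assumptions, not Reflection AI's (or DeepSeek's) actual architecture; a real frontier MoE is vastly larger and trained end to end:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration; frontier MoEs use hundreds of experts
# and thousands of dimensions.
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is modeled here as a single weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating network

def moe_layer(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                      # one score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the chosen experts only
    # Only top_k of n_experts run, so compute scales with k, not n.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

The key property is the last line of the function: per-token compute grows with `top_k`, not with the total expert count, which is what lets MoE models reach very large parameter counts at manageable training cost.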

“DeepSeek, Qwen, and similar models challenge us because inactivity risks allowing others to define the global standard of intelligence,” Laskin declared. “It won’t be shaped by America.”


Laskin emphasized that this scenario places the U.S. and its allies at a disadvantage, as businesses and governments often hesitate to utilize Chinese models due to potential legal risks.

“You can either accept a competitive disadvantage or rise to the challenge,” Laskin observed.

The American tech community has largely welcomed Reflection AI’s new goals. David Sacks, the White House AI and Crypto Czar, remarked on X: “It’s encouraging to witness a surge in American open-source AI models. A significant portion of the global market will appreciate the cost, customizability, and control that open-source solutions provide. We want the U.S. to succeed in this arena as well.”

Clem Delangue, co-founder and CEO of Hugging Face, an open platform for AI developers, told TechCrunch, “This is indeed fantastic news for American open-source AI.” He added, “The upcoming challenge will be to facilitate the rapid sharing of open AI models and datasets in alignment with leading open-source AI labs.”

Reflection AI’s take on “open” centers on access rather than open development, mirroring the approach Meta takes with Llama, or Mistral’s. Laskin said Reflection AI will release its model weights—the learned parameters that determine a model’s behavior—for anyone to use, but will keep its datasets and full training pipeline largely proprietary.

“Essentially, the most critical aspect is the model weights, as they allow anyone to utilize and experiment with them,” Laskin elaborated. “The infrastructure stack remains practical solely for a select few companies.”

This equilibrium also underpins Reflection AI’s business model. Researchers will have complimentary access to the models, while the company will generate revenue from large enterprises developing products based on Reflection AI’s models and governments creating “sovereign AI” systems—AI models developed and governed by individual nations.

“When engaging with large businesses, they naturally seek an open model,” Laskin pointed out. “They desire something they can control, operate on their infrastructure, manage costs effectively, and customize for varied workloads. Given the considerable investment in AI, they seek to optimize it to the fullest, and that’s the market we plan to target.”

Reflection AI has yet to release its first model, which will focus on text, with multimodal capabilities planned for later, Laskin said. The company will use the new funding to secure the computational resources needed to train these models, with the first release expected early next year.

Investors participating in Reflection AI’s latest funding round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV, and others.