
The Fixer’s Dilemma: Chris Lehane and the Major Challenges of OpenAI

Chris Lehane is adept at handling bad publicity. As Al Gore’s press secretary during the Clinton administration and later as Airbnb’s chief strategist through its biggest regulatory fights, he has spent decades mastering crisis communications. Now, as OpenAI’s VP of global policy, he faces perhaps his toughest assignment yet: convincing the public that OpenAI genuinely wants to democratize artificial intelligence, even as its behavior looks more and more like that of the tech giants that once claimed to be different.

I spoke with him for 20 minutes on stage at the Elevate conference in Toronto earlier this week, hoping to get past the talking points and at the real contradictions threatening OpenAI’s carefully curated image. It was harder than it sounds. Lehane is very good at his job. He comes across as approachable and reasonable, admits to uncertainty, and even concedes that he worries about whether the company’s work will actually benefit society.

However, good intentions count for little when your company is subpoenaing critics, straining the water and power supplies of economically vulnerable towns, and resurrecting dead celebrities to cement its market dominance.

The controversy centers on OpenAI’s newly launched Sora tool, which appears to have utilized copyrighted materials without consent. This move was especially risky for a company already involved in legal battles with The New York Times, The Toronto Star, and a considerable portion of the publishing sector. From a business perspective, the strategy succeeded; the exclusive app rapidly rose to the top of the App Store as users created digital reproductions of themselves, OpenAI CEO Sam Altman, iconic figures like Pikachu and Mario, and even deceased celebrities like Tupac Shakur.

When I asked why Sora was introduced with these specific characters, Lehane returned to the conventional narrative: Sora is a “general-purpose technology,” akin to electricity or the printing press, democratizing creativity for those lacking talent or resources. He asserted on stage that even he—who claims to lack creative prowess—can now produce videos.

What he didn’t mention is that OpenAI initially required rights holders to opt out of having their work incorporated into Sora, a reversal of how copyright normally works. Then, after seeing how much users enjoyed generating copyrighted characters, the company pivoted to an opt-in model. That isn’t progress; it’s a test of how much OpenAI can get away with. (And despite some recent pushback from the Motion Picture Association, it has largely escaped serious consequences.)

Understandably, this situation frustrates publishers who argue that OpenAI profits from their work without compensation. When I pressed Lehane on this economic exclusion, he referenced fair use, the U.S. legal doctrine intended to balance creator rights with public access to information, claiming it as a hidden advantage for U.S. tech dominance.


Perhaps. But I noted that after my recent conversation with Al Gore, Lehane’s former boss, anyone could simply ask ChatGPT about it instead of reading my article on TechCrunch. “It’s ‘iterative,’” I remarked, “but it’s also a replacement.”

For the first time, Lehane set aside the rehearsed answer. “We’re all going to need to figure this out,” he acknowledged. “It’s easy to say from up here that we need new economic models. But I believe we’ll figure it out.” (In other words, they’re improvising.)

Then there’s the largely unexamined issue of infrastructure. OpenAI already operates a data center in Abilene, Texas, and recently broke ground on another massive facility in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane likes to compare access to AI to the advent of electricity, noting that communities electrified late are still catching up, yet OpenAI’s Stargate initiative appears to be siting its enormous, water- and power-hungry facilities in exactly those kinds of economically distressed areas.

When I asked whether these communities would benefit or merely foot the bill, Lehane pivoted to gigawatts and geopolitics. OpenAI, he noted, needs roughly a gigawatt of new power capacity every week, while China added 450 gigawatts last year and brought 33 nuclear plants online. If democracies want democratic AI, the argument goes, they have to keep pace. “The optimist in me believes this will modernize our energy systems,” he said, envisioning a rejuvenated America with a modernized power grid.

It’s a stirring vision, but it doesn’t answer the question of whether residents of Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is notably energy-intensive.)

This led me to bring up a particularly challenging example. Zelda Williams had spent the day before our discussion urging Instagram users to refrain from sharing AI-generated clips of her late father, Robin Williams. “You’re not creating art,” she stated. “You’re producing grotesque, overly processed replicas of real human lives.”

When I asked how the company reconciles that kind of personal harm with its mission, Lehane talked about various strategies, including responsible design and government partnerships. “There’s no handbook for navigating these issues, right?” he said.

At one point, Lehane opened up, admitting that he sometimes wakes at 3 a.m. worrying about democratization, geopolitics, and infrastructure. “With this comes immense responsibility,” he said.

Whether those moments were genuine or part of the act, I believed him. In fact, I left Toronto feeling I had watched a masterclass in political communication, with Lehane deftly navigating impossible terrain while sidestepping questions about company decisions he may not personally agree with. Then came Friday.

Nathan Calvin, an attorney specializing in AI policy at the nonprofit group Encode AI, revealed that while I was engaging with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his home in Washington, D.C., during dinner to serve a subpoena. They sought his private conversations with California lawmakers, college students, and former OpenAI employees.

Calvin accuses OpenAI of intimidation tactics around California’s SB 53, a new AI safety regulation. He says the company weaponized its legal dispute with Elon Musk as a pretext to target critics, insinuating that Encode was secretly funded by Musk. In fact, Calvin says, he had criticized OpenAI’s opposition to SB 53 and was incredulous at the company’s claim that it “worked to enhance the bill.” In a social media thread, he called Lehane the “master of the political dark arts.”

In Washington, that might be seen as a compliment. But for a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it feels more like an indictment.

What’s more revealing is that OpenAI’s own employees are conflicted about what the company is becoming.

As my colleague Max reported last week, several current and former employees aired their concerns on social media after the launch of Sora 2. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically impressive” but that it is “premature to congratulate ourselves for avoiding the pitfalls of other social media platforms and deepfakes.”

On Friday, Josh Achiam—OpenAI’s head of mission alignment—made an even bolder statement regarding Calvin’s claims. Leading with the candid recognition that his comments posed “potential risks to my entire career,” Achiam stated of OpenAI: “We can’t afford to act in ways that portray us as a frightening power rather than a virtuous one. We have a duty and a mission for all of humanity, and the expectations for that duty are exceptionally high.”

That’s a significant revelation. An OpenAI executive openly questioning whether his organization is transforming into “a frightening power rather than a virtuous one” is distinct from critics making allegations or journalists seeking answers. This insight comes from someone who has chosen to align with OpenAI, believes in its vision, yet now grapples with a moral dilemma, even at the risk of his career.

This is the crucial tension. One can be tech’s most skilled political strategist, deftly navigating intricate situations, and still end up inside a company whose actions increasingly contradict its stated values, contradictions that may only intensify as OpenAI pursues artificial general intelligence.

It makes me wonder whether the essential question is really Chris Lehane’s ability to sell OpenAI’s mission. The more pressing one is whether others, especially the people who work there, still believe in it.