The Fixer’s Dilemma: Chris Lehane and the Daunting Challenge of OpenAI
Chris Lehane is a master at handling negative publicity. Previously Al Gore’s press secretary during the Clinton era and Airbnb’s chief crisis strategist amid various regulatory hurdles, he possesses a strong grasp of public relations. Now, in his current role as OpenAI’s VP of global policy, he faces perhaps his toughest challenge yet: convincing the public that OpenAI genuinely aims to democratize artificial intelligence, even as its actions increasingly resemble those of other tech giants that once claimed to be different.
I had a 20-minute exchange with him on stage at the Elevate conference in Toronto earlier this week, an effort to push past the surface-level rhetoric to the real contradictions that threaten OpenAI's carefully crafted image. It proved a tricky, slippery endeavor. Lehane is undeniably good at his job. He comes across as personable and reasonable, acknowledges uncertainty, and even admits to his own doubts about whether these initiatives will genuinely benefit society.
However, good intentions mean little when your company is subpoenaing critics, exploiting economically vulnerable communities for resources, and resurrecting deceased celebrities to maintain its market dominance.
The controversy centers around OpenAI’s newly launched Sora tool, which appeared to use copyrighted material without consent. This move was particularly risky for a company already embroiled in legal disputes with the New York Times, the Toronto Star, and a considerable portion of the publishing sector. From a business perspective, the strategy paid off; the exclusive app soared to the top of the App Store as users crafted digital representations of themselves, OpenAI CEO Sam Altman, iconic characters like Pikachu and Mario, and even late celebrities like Tupac Shakur.
When I asked why Sora was released with these particular characters, Lehane fell back on the usual narrative: Sora is a “general purpose technology,” akin to electricity or the printing press, democratizing creativity for those lacking talent or resources. Even he—who describes himself as creatively challenged—can now produce videos, he stated on stage.
What he glossed over is that OpenAI initially required rights holders to opt out of having their works used in Sora, inverting how copyright consent normally works. Then, after seeing how much users loved generating copyrighted imagery, the company shifted toward an opt-in model. This isn't progress; it's testing the limits of what it can get away with. (Aside from a few recent threats from the Motion Picture Association, OpenAI has so far avoided serious consequences.)
Such a situation understandably frustrates publishers who argue that OpenAI profits from their work without offering compensation. When I pressed Lehane on this economic exclusion, he referenced fair use, the American legal doctrine intended to balance creator rights with public access to information, claiming it as a secret weapon for U.S. tech leadership.
Perhaps. But having recently interviewed Al Gore, Lehane's former boss, I realized that anyone could simply ask ChatGPT about that conversation instead of reading my article on TechCrunch. "It's 'iterative,'" I noted, "but it's also a replacement."
For the first time, Lehane dropped his rehearsed response. "We all need to figure this out," he acknowledged. "It's easy to assert from up here that new economic revenue models are needed, but I believe we'll develop them." (In other words: they're improvising.)
Then there's the infrastructure question, which rarely gets discussed honestly. OpenAI already operates a data center in Abilene, Texas, and recently broke ground on another massive facility in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane likens access to AI to the advent of electricity, noting that those who came to it late are still catching up, yet OpenAI's Stargate initiative appears to be siting its water- and power-hungry facilities in economically distressed regions.
When I asked whether these communities would benefit or simply foot the bill, Lehane pivoted to gigawatts and geopolitics. OpenAI, he said, needs about a gigawatt of new capacity every week. China, meanwhile, added 450 gigawatts last year and opened 33 nuclear plants. If democracies want democratic AI, they have to keep pace. "The optimist in me believes this will modernize our energy systems," he said, picturing a reindustrialized America with upgraded power grids.
It was a stirring pitch. But it didn't answer the question of whether residents of Lordstown and Abilene will end up paying higher utility bills so that OpenAI can generate videos of John F. Kennedy and The Notorious B.I.G. (Video generation is notably energy-intensive.)
This prompted me to bring up a particularly challenging example. Zelda Williams had spent the day before our discussion urging Instagram users to stop sending her AI-generated clips of her late father, Robin Williams. “You’re not creating art,” she stated. “You’re producing grotesque, overly processed copies of real human lives.”
When I asked how the company reconciles such personal harm with its mission, Lehane mentioned various strategies, including responsible design and government partnerships. “There’s no manual for dealing with these issues, right?”
Lehane revealed some vulnerability, admitting that he occasionally wakes up at 3 a.m. worried about democratization, geopolitics, and infrastructure. “With this comes tremendous responsibility.”
Whether those moments were genuine or part of an act, I found him credible. Indeed, I left Toronto thinking I had witnessed a masterclass in political communication—Lehane skillfully navigating a complex terrain while deftly avoiding inquiries about company decisions that he may not actually support. Then, on Friday, something significant transpired.
Nathan Calvin, a lawyer focused on AI policy at the nonprofit group Encode AI, discovered that while I was conversing with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his residence in Washington, D.C., during dinner to serve a subpoena. They sought his private conversations with California lawmakers, college students, and former OpenAI employees.
Calvin accuses OpenAI of intimidation tactics over California's SB 53, a new AI safety law. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Far from doing Musk's bidding, Calvin says, he actually opposed OpenAI's resistance to SB 53, and he laughed out loud at the company's claim that it "worked to improve the bill." In a social media thread, he called Lehane the "master of the political dark arts."
In Washington, that may be viewed as a compliment. But for a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it feels more like an indictment.
What’s even more telling is that even OpenAI’s own employees are conflicted about the company’s shifting identity.
As my colleague Max reported last week, several current and former employees expressed their concerns on social media after the launch of Sora 2. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who pointed out that while Sora 2 is “technically impressive, it’s too soon to congratulate ourselves for avoiding the pitfalls of other social media platforms and deepfakes.”
On Friday, Josh Achiam—OpenAI’s head of mission alignment—made an even bolder statement regarding Calvin’s claims. Leading with a candid admission that his comments posed “potential risks to my entire career,” Achiam said of OpenAI: “We can’t afford to act in ways that portray us as a frightening power rather than a virtuous one. We have a duty and a mission for all of humanity, and the expectations for that duty are exceedingly high.”
That's a remarkable admission. An OpenAI executive openly wondering whether his company is becoming "a frightening power rather than a virtuous one" is not the same as critics lobbing accusations or journalists pressing for answers. It comes from someone who chose to work at OpenAI, believes in its mission, and now faces a moral reckoning, even at the risk of his career.
This is a pivotal moment. You can be a top political strategist in tech, artfully navigating intricate circumstances, yet still find yourself part of a company whose actions increasingly contradict its stated values—contradictions that may intensify as OpenAI strives for artificial general intelligence.
It leaves me wondering whether the real question is even whether Chris Lehane can sell OpenAI's mission. The more essential question is whether others, especially the people who work there, still believe in it.