The Fixer’s Dilemma: Chris Lehane and the Crucial Challenges Confronting OpenAI
Chris Lehane is adept at handling negative publicity. With a background as press secretary for Al Gore during the Clinton administration and as Airbnb’s chief strategist during tough regulatory challenges, he brings substantial public relations expertise to the table. Currently serving as OpenAI’s VP of global policy, he faces a significant challenge: persuading the public that OpenAI is committed to democratizing artificial intelligence, even as its actions increasingly resemble those of major tech players that once positioned themselves as alternatives.
I had a brief 20-minute conversation with him on stage at the Elevate conference in Toronto earlier this week, aiming to delve deeper than surface-level responses and examine the genuine contradictions that jeopardize OpenAI’s carefully curated image. This proved to be a complicated and somewhat elusive endeavor. Lehane exhibits a clear command of his role, appearing approachable and rational, acknowledging uncertainties, and expressing concerns about whether their initiatives will genuinely benefit society.
However, goodwill can become insignificant when a company is subpoenaing its critics, leveraging economically marginalized communities for resources, and reanimating deceased public figures to stay competitive.
The discussion centers around OpenAI’s newly introduced Sora tool, which has allegedly utilized copyrighted materials without appropriate permissions. This is particularly perilous for a company already entangled in legal disputes with The New York Times, The Toronto Star, and a large segment of the publishing industry. From a business perspective, the strategy succeeded; the exclusive app quickly ascended to the top of the App Store as users generated digital avatars of themselves, OpenAI CEO Sam Altman, iconic figures like Pikachu and Mario, and even late personalities such as Tupac Shakur.
When I asked why Sora featured these specific characters, Lehane retreated to the typical narrative: Sora is a “general-purpose technology,” similar to electricity or the printing press, intended to democratize creativity for those lacking talent or resources. He claimed on stage that even he—who professes to lack creative ability—can now produce videos.
What he failed to mention is that Sora launched with an opt-out scheme: rights holders had to proactively flag their works to keep them out of the tool, inverting how copyright normally works, where use requires permission up front. Only after launch, amid an outcry from rights holders, did OpenAI switch to an opt-in model. This isn't advancement; it's testing how far the legal limits will bend. (Despite recent threats from the Motion Picture Association, OpenAI has mostly evaded serious consequences.)
Understandably, this infuriates publishers, who argue that OpenAI profits from their work without compensating them. When I pressed Lehane on this economic exclusion, he pointed to fair use, the U.S. legal doctrine designed to balance creators' rights with public access to information, framing it as a secret weapon of American technological dominance.
Perhaps. But I noted that I had recently interviewed Al Gore, Lehane's former boss, and that anyone could simply ask ChatGPT about that conversation instead of reading my article on TechCrunch. "It's 'iterative,'" I said, "but it can also function as a substitute."
For the first time, Lehane deviated from his memorized response. “We all need to figure this out,” he conceded. “It’s easy to claim from up here that new economic models are necessary. But I believe we will.” (Essentially, they seem to be improvising.)
Additionally, there’s the largely overlooked issue of infrastructure. OpenAI is currently developing a data center in Abilene, Texas, and has recently commenced building another major facility in Lordstown, Ohio, in collaboration with Oracle and SoftBank. Lehane has compared AI access to the arrival of electricity—suggesting that late adopters will still face challenges—yet OpenAI’s Stargate initiative appears to be targeting economically distressed regions for their large facilities, which require significant water and electricity.
When I asked whether these communities would benefit or simply bear the costs, Lehane pivoted to gigawatts and geopolitics. OpenAI, he said, needs roughly a gigawatt of new power capacity every week; China, meanwhile, added 450 gigawatts last year and opened 33 nuclear facilities. If democracies want democratic AI, they must stay competitive. "The optimist in me believes this will modernize our energy systems," he said, envisioning a reindustrialized America with a modernized power grid.
It's a stirring vision, but it doesn't answer whether residents of Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is notably energy-intensive.)
This prompted me to mention a particularly poignant example. Zelda Williams spent the day before our discussion urging Instagram users to refrain from sharing AI-generated videos of her late father, Robin Williams. “You’re not creating art,” she stated. “You’re producing grotesque, overly processed replicas of real human lives.”
When I asked how the company reconciles such personal distress with its mission, Lehane answered with process, citing responsible design and partnerships with government, before conceding, "There's no manual for addressing these issues, right?"
Lehane opened up slightly, admitting that he occasionally wakes up at 3 a.m. pondering democratization, geopolitics, and infrastructure. “With this comes immense responsibility.”
Whether those moments are authentic or performed, I found him credible. Indeed, I left Toronto feeling I had watched a masterclass in political communication, with Lehane threading an impossible needle while sidestepping questions about decisions he may not personally support. Then, on Friday, news broke that complicated that impression.
Nathan Calvin, an attorney who works on AI policy at the nonprofit Encode AI, revealed that while I was with Lehane in Toronto, OpenAI had sent a sheriff's deputy to his home in Washington, D.C., during dinner to serve him a subpoena demanding his private communications with California lawmakers, college students, and former OpenAI employees.
Calvin accuses OpenAI of intimidation tactics around SB 53, California's new AI safety law, alleging that the company weaponized its legal fight with Elon Musk as a pretext to target critics by insinuating that Encode was secretly Musk-funded. Far from it, Calvin says: he fought OpenAI's resistance to SB 53, and he expressed disbelief at the company's claim that it "worked to enhance the bill." On social media, he called Lehane the "master of the political dark arts."
In Washington, that might count as a compliment. But for a company whose stated mission is "to ensure that artificial general intelligence benefits all of humanity," it reads more like an indictment.
What’s even more revealing is that even OpenAI’s own employees hold conflicting views about the company’s changing identity.
As my colleague Max reported last week, several current and former employees voiced their concerns on social media following the launch of Sora 2. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who remarked that while Sora 2 is “technically impressive, it’s too early to congratulate ourselves for avoiding the pitfalls of other social media platforms and deepfakes.”
On Friday, Josh Achiam, OpenAI's head of mission alignment, went further still in response to Calvin's claims. Acknowledging that speaking out carried "potential risks to my entire career," Achiam said that OpenAI must not act in ways that make it "a frightening power rather than a virtuous one." "We have a duty and a mission for all of humanity, and the expectations for that duty are extraordinarily high."
That’s a significant revelation. An OpenAI executive openly questioning whether his organization is evolving into “a frightening power rather than a virtuous one” is distinct from external critics making allegations or journalists seeking answers. This insight comes from someone who has chosen to align with OpenAI, believes in its vision, yet is grappling with a moral dilemma, even at the potential cost of his career.
This is a critical moment. One can be the best political strategist in tech, skilled at navigating intricate situations, and still end up inside a company whose actions increasingly contradict its stated values, contradictions that will likely intensify as OpenAI pursues artificial general intelligence.
It leaves me wondering whether the fundamental question is really whether Chris Lehane can sell OpenAI's mission. The more urgent question is whether others, especially the people inside the company, still believe in it.