The Chainsaw and the Cult: Inside Sam Altman’s Explosive Testimony on How Elon Musk Tried to Gut OpenAI
The courtroom in San Francisco has become a stage for one of the most bitter corporate dramas in artificial intelligence history. This week, Sam Altman delivered a performance that reframed the entire narrative. During testimony in Elon Musk’s ongoing lawsuit against OpenAI, the company’s CEO painted a devastating portrait of his former co-founder—not as a visionary philanthropist who was betrayed, but as a control-obsessed billionaire who demanded his lieutenants take a “chainsaw” to the research culture that made OpenAI a world-changing institution [1]. The image is visceral: Greg Brockman, OpenAI’s president, and Ilya Sutskever, the company’s former chief scientist, were instructed to rank their researchers by accomplishment and then methodically cut through the ranks, eliminating those who didn’t measure up to Musk’s standards [1].
This is not the story of a noble founder betrayed by greedy profiteers. It is the story of a man who, according to Altman, wanted to turn the world’s most ambitious AI research lab into a personal fiefdom—and when he couldn’t, tried to burn it down.
The Chainsaw Management Doctrine
Altman’s testimony, as reported by The Verge, reveals a management philosophy starkly at odds with the collaborative, open-research ethos that defined OpenAI’s early years [1]. The directive was simple and brutal: Brockman and Sutskever were to draw up a ranked list of every researcher and then “take a chainsaw through a bunch” of them [1]. The language is unmistakably Muskian, a blend of engineering efficiency and ruthless optimization applied to human capital. But Altman described this not as a one-time purge but as a pattern of behavior that did “huge damage” to the company’s culture, eroding the trust and intellectual freedom that had attracted some of the brightest minds in AI to OpenAI [1].
The implications are profound. OpenAI, founded in 2015, was explicitly structured as a nonprofit dedicated to developing artificial general intelligence (AGI) for humanity’s benefit [4]. The organization’s early pitch to researchers was that they could work on the most important technology in human history without the profit-driven pressures of Google or Facebook. Musk’s “chainsaw” approach would have fundamentally altered that equation. Researchers who had joined to pursue long-term, high-risk ideas would have been evaluated on metrics prioritizing short-term deliverables—a recipe for driving away the talent that made OpenAI’s breakthroughs possible.
What makes this revelation particularly damning is the source. Altman is not a neutral observer; he is the defendant in a lawsuit where Musk alleges that Altman and Brockman deceived him into donating $38 million to the company under false pretenses [3]. Yet Altman’s testimony about Musk’s management style is corroborated by Musk’s broader behavior at Tesla, SpaceX, and, most notoriously, Twitter (now X). The “chainsaw” metaphor evokes the mass layoffs Musk executed at Twitter in 2022, where he cut roughly 80% of the workforce with little regard for institutional knowledge or morale.
The Wired coverage of the trial adds another layer of absurdity. Altman testified that Musk had a “hair-raising” idea of passing OpenAI on to his own children, effectively treating the company as a dynastic inheritance rather than a public-benefit research organization [2]. This revelation suggests that Musk’s vision for OpenAI was never about democratic access to AGI or the safe distribution of transformative technology. It was about control: personal, familial, and absolute.
The Financial Stakes and the Forgotten $38 Million
To understand why this trial matters beyond the personal drama, one must grasp the staggering financial figures at play. Musk’s lawsuit alleges that he was deceived into donating $38 million to OpenAI [3]. For context, that sum is roughly equivalent to what OpenAI now spends on compute in a single week. The company’s valuation has soared to an estimated $134 billion, with some analysts projecting a potential $1 trillion market capitalization if it successfully commercializes AGI [3]. The gap between Musk’s initial investment and the current value of the company he helped found is vast—$38 million versus a potential $1.75 trillion enterprise [3].
Musk’s legal team has attempted to frame this as a case of fraud and deception, arguing that Altman and Brockman promised to keep OpenAI a nonprofit dedicated to humanity’s benefit, then pivoted to a for-profit structure that enriched themselves and their investors [3]. But Altman’s testimony flips this narrative. If Musk was so concerned about OpenAI’s mission, why did he demand a management style that would have decimated the research team? Why did he propose passing the company to his children like a hereditary title? The cognitive dissonance is staggering.
The TechCrunch analysis of the trial notes that Musk’s legal strategy may ultimately hinge on whether OpenAI’s for-profit subsidiary enhances or detracts from the company’s founding mission of ensuring that humanity benefits from AGI [4]. This is a genuinely difficult question, and one that the AI industry has grappled with since OpenAI announced its transition to a “capped-profit” structure in 2019. But Musk’s lawsuit, by focusing on his personal grievances and alleged deception, risks obscuring the more important structural questions about how AI companies should be governed.
What the mainstream media has largely missed is that Musk’s lawsuit attempts to retroactively rewrite the history of OpenAI’s founding. The narrative that Musk was a benevolent benefactor betrayed by greedy co-founders is appealing in its simplicity, but Altman’s testimony suggests a far messier reality. Musk was not a passive investor; he was an active, demanding, and often destructive presence who wanted OpenAI to operate according to his personal playbook.
The Poaching Attempt That Changes Everything
Perhaps the most explosive revelation came not from Altman’s testimony but from Shivon Zilis, a former OpenAI board member who now works closely with Musk. According to MIT Technology Review’s coverage, Zilis revealed that Musk attempted to poach Sam Altman—to hire him away from OpenAI and bring him under Musk’s direct control [3]. This detail, buried in the second week of testimony, fundamentally alters the calculus of the lawsuit.
Think about what this means. Musk is suing Altman for allegedly deceiving him and betraying OpenAI’s mission. Yet, according to Zilis’s testimony, Musk simultaneously tried to recruit Altman to work for him. If Musk genuinely believed Altman was a fraud who had corrupted OpenAI’s mission, why would he want Altman on his team? The only logical explanation is that Musk’s lawsuit is not about mission integrity or the public good. It is about control, ego, and the inability to accept that OpenAI succeeded without him.
This revelation also casts Musk’s subsequent founding of xAI in a different light. Musk launched xAI in 2023, positioning it as a direct competitor to OpenAI and a champion of “truth-seeking” AI. But if Musk tried to poach Altman before founding xAI, it suggests that his initial instinct was not to build a competitor but to reassert control over the company he had lost. Only when that failed did he pivot to the “rival” narrative that now forms the basis of his public persona.
The trial has also put OpenAI’s safety record under intense scrutiny [4]. Musk’s legal team has argued that the for-profit structure incentivizes OpenAI to cut corners on safety in pursuit of revenue. This is a legitimate concern, and one that the AI safety community has raised repeatedly. But the irony is thick: the man who wanted to “take a chainsaw” to OpenAI’s research team now positions himself as the guardian of AI safety.
The Open-Source Paradox and the Developer Ecosystem
While the courtroom drama unfolds, OpenAI’s actual products continue to dominate the AI landscape. The company’s open-source models have seen remarkable adoption: gpt-oss-20b has been downloaded over 7.18 million times on Hugging Face, while the larger gpt-oss-120b variant has accumulated more than 4.37 million downloads. The whisper-large-v3-turbo speech recognition model has been downloaded over 7 million times. These numbers tell a story that the lawsuit cannot touch: developers are voting with their downloads, and OpenAI’s technology is deeply embedded in the infrastructure of the AI ecosystem.
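To make those download figures concrete, the sketch below shows one common way developers pull an openly released checkpoint like whisper-large-v3-turbo from Hugging Face. It is a minimal example, not anything tied to the trial or to how any particular team uses the model: it assumes the transformers and torch packages are installed, that the checkpoint remains published under the identifier openai/whisper-large-v3-turbo, and that a local audio file exists (the name meeting.wav is a placeholder).

```python
# Minimal sketch: transcribing audio with an openly released Whisper checkpoint.
# Assumes `pip install transformers torch` and ffmpeg available for audio decoding;
# "meeting.wav" is a hypothetical placeholder for any local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",  # one of the models cited in the download figures
)

result = asr("meeting.wav")
print(result["text"])  # the pipeline returns a dict with the transcription under "text"
```

Every such call that pulls the weights is another tick in the download counters quoted above, which is why those numbers are a better proxy for developer reliance than anything argued in court.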
The company’s API remains a critical tool for developers, providing access to GPT-3, GPT-4, and Codex—the latter of which translates natural language into code. The OpenAI Downtime Monitor, a free tool that tracks API uptime and latencies for various OpenAI models and other LLM providers, has become an essential resource for the developer community. This is the reality that Musk’s lawsuit cannot change: OpenAI has built a platform that millions of developers rely on, and no amount of legal maneuvering can undo that.
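The Downtime Monitor’s internals are not described here, but the idea it embodies, periodically timing a small API request and recording whether it succeeds, is easy to illustrate. The following is a minimal sketch under stated assumptions: the official openai Python client is installed, an OPENAI_API_KEY is set in the environment, and the model name gpt-4o-mini is used purely as an illustrative, inexpensive probe target, not as a claim about what any real monitor queries.

```python
# Minimal sketch of an uptime/latency probe in the spirit of a downtime monitor.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment; the model
# name below is an illustrative choice, not necessarily what a real monitor uses.
import time
from openai import OpenAI

client = OpenAI()

def probe(model: str = "gpt-4o-mini") -> dict:
    """Time one tiny chat completion request and report success, latency, and any error."""
    start = time.perf_counter()
    try:
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
            max_tokens=1,
        )
        ok, error = True, None
    except Exception as exc:  # network failures, rate limits, provider outages, etc.
        ok, error = False, str(exc)
    return {
        "model": model,
        "ok": ok,
        "latency_s": round(time.perf_counter() - start, 3),
        "error": error,
    }

if __name__ == "__main__":
    print(probe())
```

A real monitor would run probes like this on a schedule across many models and providers and chart the results over time; the sketch is only meant to make concrete what “tracking API uptime and latencies” involves.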
But the trial does raise uncomfortable questions about the concentration of power in the AI industry. OpenAI, despite its open-source contributions, remains a highly centralized organization with enormous influence over the direction of AI research. The company’s transition from nonprofit to capped-profit to full public benefit corporation has been controversial. Musk’s lawsuit, for all its personal grievances, touches on a genuine tension: how do we ensure that the most powerful AI systems ever built are developed in the public interest?
The answer is not simple. Musk’s vision of AI governance—control by a single billionaire, passed down to his children—is dystopian. But the alternative, as embodied by OpenAI’s current structure, is also imperfect. The company’s board structure, which gives a nonprofit foundation partial control over a for-profit public benefit corporation, is a Rube Goldberg machine of corporate governance that few people fully understand.
The Macro Trend: Billionaire Feuds and the Future of AI Governance
What this trial ultimately reveals is the fundamental instability of the current AI governance model. The development of artificial general intelligence is arguably the most important project in human history, yet a small group of billionaires and their personal feuds direct it. Musk, Altman, and the other figures at the center of this drama are not neutral stewards of humanity’s future; they are flawed, ambitious, and often petty individuals whose personal animosities shape the trajectory of the technology.
The trial has exposed the myth of the benevolent tech founder. Musk’s public persona as a champion of open science and AI safety crumbles under the weight of Altman’s testimony about the “chainsaw” management style. Altman’s image as a careful, mission-driven leader is complicated by the for-profit pivot and the network of financial investments that Musk’s lawyers have scrutinized [2]. Neither man emerges from this trial as a hero. Both are revealed as deeply human—driven by ego, ambition, and the desire for control.
For the broader AI industry, the implications are sobering. If the two most powerful figures in AI cannot resolve their differences without resorting to multi-billion-dollar lawsuits, what hope is there for the kind of international cooperation that AI safety experts say is necessary? The trial is a distraction from the real work of building safe, beneficial AI systems, and it consumes resources and attention that could be better spent elsewhere.
Yet there is also a perverse value in this public airing of grievances. The trial has forced OpenAI to open its books and its history to scrutiny in ways that would never have happened otherwise. We now know more about the internal dynamics of the company’s founding. We have a clearer picture of Musk’s management philosophy and its destructive impact. And we have a deeper understanding of the tensions between mission and profit that define the modern AI industry.
The Verdict That Matters
As the trial enters its third week, the legal outcome remains uncertain. Musk’s $38 million claim is a rounding error in the context of OpenAI’s current valuation, and the legal standard for proving fraud is high. But the trial has already delivered a verdict in the court of public opinion: the myth of Elon Musk as the benevolent godfather of AI has been shattered.
Altman’s testimony about the “chainsaw” directive, the “hair-raising” idea of passing OpenAI to Musk’s children, and the attempted poaching of Altman himself paints a picture of a man who wanted not to save humanity but to control it. The lawsuit, for all its legal complexity, is ultimately about Musk’s inability to accept that OpenAI succeeded on its own terms, without his direct oversight.
The real question that emerges from this trial is not whether Musk was deceived, but whether the current governance structure of AI companies is adequate to the task of developing AGI safely. The answer, based on the evidence presented in court, is a resounding no. We have billionaires suing each other over control of the most important technology in human history, while the rest of the world watches and hopes that the outcome doesn’t lead to catastrophe.
OpenAI’s models continue to be downloaded millions of times. The API continues to serve developers. The company continues to push the frontier of what AI can do. But the trial has revealed a rot at the core of the enterprise—a rot that is not unique to OpenAI but endemic to the entire AI industry. We have built a system where the fate of humanity’s most important technology is decided by the personal feuds of a handful of billionaires. That is not a sustainable model for the future.
The chainsaw has been put away, but the damage it did to OpenAI’s culture—and to the broader trust in AI governance—will take years to repair. If there is a lesson from this trial, it is that the future of AI is too important to be left to the whims of billionaires and their lawyers. The rest of us need to start paying attention, because the outcome of this trial will shape the future of intelligence itself—whether we like it or not.
References
[1] The Verge — Sam Altman says Elon Musk’s mind games were damaging OpenAI — https://www.theverge.com/ai-artificial-intelligence/928861/openai-sam-altman-elon-musk-damage
[2] Wired — Elon Musk Had ‘Hair-Raising’ Idea of Passing OpenAI Onto His Kids, Sam Altman Says — https://www.wired.com/story/sam-altman-testifies-musk-v-altman-trial/
[3] MIT Tech Review — Musk v. Altman week 2: OpenAI fires back, and Shivon Zilis reveals that Musk tried to poach Sam Altman — https://www.technologyreview.com/2026/05/08/1137008/musk-v-altman-week-2-openai-fires-back-and-shivon-zilis-reveals-that-musk-tried-to-poach-sam-altman/
[4] TechCrunch — Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope — https://techcrunch.com/2026/05/07/elon-musks-lawsuit-is-putting-openais-safety-record-under-the-microscope/