Mira Murati tells the court that she couldn’t trust Sam Altman’s words

OpenAI's former CTO Mira Murati testified under oath that Sam Altman misrepresented safety compliance for a new model, claiming the legal department had approved standards when it had not. The deposition, surfaced in the Musk v. Altman litigation, exposes internal governance fractures at the AI industry's most visible organization and raises questions about how safety claims are validated before deployment. For anyone tracking AI governance maturity and corporate accountability, the testimony points to a potential gap between OpenAI's public safety narrative and its internal decision-making.
Modelwire context
Analyst take
The more consequential detail here is not that Altman may have misrepresented legal approval, but that Murati, as CTO, felt she could not rely on his word as a matter of routine. That describes a governance culture problem, not an isolated incident.
This testimony lands inside the Musk v. Altman trial that Modelwire has tracked closely since early May. Our coverage of week one (MIT Technology Review, May 1) framed the case as a test of whether OpenAI's leadership could be held accountable for how it communicated its mission and structure to stakeholders. Murati's deposition extends that accountability question inward: if the company's own former CTO distrusted Altman's representations about safety compliance, the governance fractures Musk's legal team is trying to surface from the outside may already be documented from the inside. The Shivon Zilis coverage (WIRED, May 1) showed how informal channels shaped information flow at OpenAI's leadership level, and Murati's account adds another data point suggesting that formal processes, including legal sign-off, were not always the actual mechanism for decisions.
Watch whether OpenAI's legal team moves to limit or challenge the admissibility of Murati's deposition in the coming weeks. If the testimony stands unchallenged in the trial record, it becomes a durable reference point for any future regulatory inquiry into OpenAI's safety validation processes.
Coverage we drew on
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting.
Mentions
OpenAI · Mira Murati · Sam Altman · Elon Musk
Modelwire Editorial
This synthesis and analysis was prepared by the Modelwire editorial team. We use advanced language models to read, ground, and connect the day’s most significant AI developments, providing original strategic context that helps practitioners and leaders stay ahead of the frontier.
Modelwire summarizes; we don’t republish. The full content lives on theverge.com. If you’re a publisher and want a different summarization policy for your work, see our takedown page.