OpenAI kills its dedicated coding model Codex again, folding it into GPT-5.5

OpenAI has discontinued Codex as a standalone product, merging its capabilities into GPT-5.5. The company claims the consolidated model improves agentic coding performance and reduces token consumption, signaling a shift toward unified rather than specialized model architectures.
Modelwire context
Analyst take
This is the second time OpenAI has killed a product named Codex (the original API was deprecated in 2023), which raises a real question about whether the 'improved agentic coding performance' claim is a genuine capability advance or a reframing of a product that never found durable standalone traction.
We have no prior Modelwire coverage to anchor this to directly, so context has to come from the broader market. The consolidation fits a pattern visible across the frontier lab space: as base models grow more capable, the business case for maintaining separate fine-tuned variants weakens, because the maintenance overhead and the user confusion both compound. For enterprise customers who integrated Codex-specific endpoints, this is a forced migration with real switching costs, and the 'reduced token consumption' framing is doing a lot of work to soften that. The competitive read is that OpenAI is signaling confidence that GPT-5.5 can hold its own against Anthropic's Claude 3.7 Sonnet and Google's Gemini on coding benchmarks without a dedicated product line.
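For a concrete sense of those switching costs, here is a minimal Python sketch of the migration path, assuming the legacy integration used OpenAI's deprecated completions endpoint (as the original Codex models did) and that the consolidated model is exposed under a model string like "gpt-5.5"; that identifier is our assumption, not something the reporting confirms.

from openai import OpenAI

client = OpenAI()

# Before: a Codex-specific call against the legacy completions endpoint.
# code-davinci-002 went away when the original Codex API was deprecated
# in 2023, so this pattern has already broken once.
# response = client.completions.create(
#     model="code-davinci-002",
#     prompt="# Reverse a string in Python\n",
#     max_tokens=128,
# )

# After: the same request rerouted through the unified chat endpoint.
# "gpt-5.5" is an assumed model identifier for the consolidated model.
response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "Reverse a string in Python."}],
    max_tokens=128,
)
print(response.choices[0].message.content)

The shape change is the real cost here: prompt-completion pairs become chat message arrays, and anything tuned around the old endpoint's stop sequences or completion behavior has to be re-validated against the new model.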
Watch whether independent coding benchmark results for GPT-5.5 (SWE-bench Verified, in particular) match OpenAI's consolidation claims within the next 60 days. If third-party scores show regression versus the last published Codex numbers, the 'improved performance' framing collapses.
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting. How we write it.
Mentions: OpenAI · Codex · GPT-5.5
Modelwire summarizes — we don’t republish. The full article lives on the-decoder.com. If you’re a publisher and want a different summarization policy for your work, see our takedown page.