Modelwire

OpenAI built a networking protocol with AMD, Broadcom, Intel, Microsoft, and NVIDIA to fix AI supercomputer bottlenecks


OpenAI, AMD, Broadcom, Intel, Microsoft, and NVIDIA have jointly developed MRC, an open-source networking protocol designed to eliminate interconnect bottlenecks in massive GPU clusters. By enabling simultaneous data transmission across hundreds of parallel paths, MRC reduces the required switch hierarchy from three or four layers to just two, allowing over 100,000 GPUs to communicate efficiently while cutting power consumption and infrastructure costs. The protocol is already deployed on OpenAI's Stargate supercomputer, signaling a shift toward collaborative infrastructure standards as AI compute scales beyond traditional networking architectures.
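To see why dropping a switch tier matters at this scale, here is a minimal back-of-envelope sketch. This is textbook Clos/fat-tree arithmetic, not anything specific to MRC, and the switch radix values are illustrative assumptions; the point is that a flat two-tier leaf-spine fabric only clears 100,000 endpoints once switch radix gets very high, which is precisely the regime where spraying traffic across many parallel paths becomes necessary to keep links utilized.

```python
# Back-of-envelope: endpoint capacity of two-tier vs. three-tier Clos fabrics.
# Illustrative only; radix values are assumptions, not MRC or Stargate specifics.

def two_tier_capacity(radix: int) -> int:
    """Non-blocking leaf-spine: each leaf splits its ports half down
    (to GPUs) and half up (to spines); each spine port reaches one
    leaf, so the leaf count is bounded by the spine radix."""
    leaves = radix
    downlinks_per_leaf = radix // 2
    return leaves * downlinks_per_leaf  # = radix**2 // 2

def three_tier_capacity(radix: int) -> int:
    """Classic three-level fat tree built from radix-k switches
    supports k**3 / 4 endpoints."""
    return radix ** 3 // 4

for radix in (64, 256, 512):
    print(f"radix {radix:>3}: two-tier {two_tier_capacity(radix):>9,} GPUs, "
          f"three-tier {three_tier_capacity(radix):>12,} GPUs")
```

At radix 512 the two-tier fabric reaches 131,072 endpoints, clearing the 100,000-GPU mark the summary cites with one less switch layer, and every layer removed is a set of switch hops and optics each packet no longer traverses, which is where the claimed power and cost savings come from.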

Modelwire context

Analyst take

The real signal here is not the protocol itself but the coalition. AMD, Broadcom, Intel, and NVIDIA agreeing on shared infrastructure standards is unusual enough to warrant scrutiny about what each party is actually protecting: open standards at the networking layer tend to benefit whoever already dominates the compute layer above it.

The Decoder reported in early May that big tech is collectively committing $725 billion to AI infrastructure this year, and MRC reads as a direct response to the scaling wall that spending creates. When you are wiring together 100,000-plus GPUs, proprietary networking becomes a single-vendor chokepoint, and the consortium format is a way to prevent that chokepoint from forming around any one member. This also connects to the AI Business piece from May 1 on demand outpacing deployment scaffolding: MRC is precisely the kind of unglamorous plumbing fix that story was describing. The Pentagon partnerships with NVIDIA and Microsoft covered the same week are relevant context too, since Stargate-class infrastructure is exactly what classified AI deployments would eventually need to scale onto.

Watch whether non-consortium hyperscalers, specifically Google and AWS, adopt MRC or publish competing specs within the next two quarters. Adoption by either would signal that MRC is becoming a genuine open standard; silence or a counter-proposal would suggest it is primarily an OpenAI-Microsoft supply chain move dressed as neutrality.

This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting. How we write it.

Mentions: OpenAI · AMD · Broadcom · Intel · Microsoft · NVIDIA


Modelwire Editorial

This synthesis and analysis was prepared by the Modelwire editorial team. We use advanced language models to read, ground, and connect the day’s most significant AI developments, providing original strategic context that helps practitioners and leaders stay ahead of the frontier.

Modelwire summarizes; we don't republish. The full content lives on the-decoder.com. If you're a publisher and want a different summarization policy for your work, see our takedown page.
