The Sequence Opinion #738: Breaking CUDA’s Spell: Can AMD Build a Second Ecosystem for AI?
The OpenAI–AMD partnership has highlighted several potential avenues for competing with NVIDIA.
OpenAI’s multi‑year partnership with AMD to deploy large fleets of Instinct accelerators is more than a procurement decision—it’s a strategic signal. For the first time in the current AI cycle, a top‑tier model lab is committing to build at hyperscale on non‑NVIDIA silicon. The motivation is straightforward: insatiable demand for compute, concentration risk from relying on one supplier, and the desire to shape future accelerator design in a tighter co‑development loop. For AMD, the endorsement validates its GPU roadmap and software stack, and it opens the door to tens of billions in potential revenue. For OpenAI, it diversifies supply, improves negotiating leverage, and ensures headroom to scale.
This moment doesn’t imply an immediate dethroning of NVIDIA. Rather, it acknowledges a two‑track reality: NVIDIA remains the default incumbent with overwhelming market share and an unmatched software moat, while AMD steps onto the field as a credible second source at meaningful scale. In a world where demand outstrips supply, the fastest way to build more AI is to add another vendor whose hardware and software are “good enough” for state‑of‑the‑art training and inference. The AMD deal marks a turning point because it lowers the perceived switching cost for others: if OpenAI can do this at scale, so can hyperscalers, national labs, and large enterprises.
The remainder of this essay evaluates AMD’s chances of competing with NVIDIA across the dimensions that matter in practice: hardware capabilities, software ecosystem maturity, adoption lock‑in, and where AMD has a realistic path to win. The lens is purely AI—data‑center training and inference—leaving gaming and consumer GPUs aside.