Key Takeaway
OpenAI and Broadcom plan to launch custom AI accelerators in late 2026, with full deployment by the end of 2029. The systems will total 10 gigawatts (GW) of computing capacity, roughly the electricity needed to power 7 million US homes. Designed for the mathematical operations at the core of AI models, the chips will be deployed across OpenAI’s facilities and partner data centers worldwide. The initiative reflects a broader industry trend of companies designing their own chips to boost performance and reduce reliance on traditional suppliers. With ChatGPT and its enterprise products serving over 800 million users weekly, OpenAI faces immense demand for computing resources, and custom silicon is central to its strategy for meeting it.
The Scope of Custom AI Accelerators
The initial systems from the OpenAI and Broadcom partnership are anticipated in the latter half of 2026, with a complete rollout expected by the end of 2029.
The 10GW figure represents an enormous amount of power: 1GW is enough electricity to supply roughly 700,000 US homes, so the full deployment is equivalent to the demand of about 7 million homes.
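The homes-equivalent comparison is simple arithmetic; a minimal sketch, assuming the article's figure of 700,000 US homes per gigawatt:

```python
HOMES_PER_GW = 700_000  # US homes supplied per gigawatt (figure cited in the article)
DEPLOYMENT_GW = 10      # total planned capacity of the OpenAI-Broadcom rollout

homes_equivalent = DEPLOYMENT_GW * HOMES_PER_GW
print(f"{DEPLOYMENT_GW} GW ≈ electricity for {homes_equivalent:,} US homes")
# 10 GW ≈ electricity for 7,000,000 US homes
```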
These accelerators, which are specialized processors tailored to perform the mathematical operations essential for AI models, will be deployed across OpenAI’s facilities and the data centers of its global partners.
This initiative comes as companies across the AI sector increasingly design their own chips to boost performance and lessen reliance on traditional suppliers.
For OpenAI, whose ChatGPT and enterprise products currently serve over 800 million users weekly, the demand for computing resources is immense.
Creating custom chips is seen as a crucial part of the strategy to fulfill this demand.