Oracle (ORCL.US) has responded to reports about the progress of the data centers it is building for key client OpenAI, denying any delays in project delivery. Earlier reports suggested that, due to labor and material shortages, the data center would be completed in 2028 instead of the originally planned 2027.
Reports on Friday, citing sources, claimed that Oracle's data center project for OpenAI faced shortages of workers and materials, pushing its timeline back. However, an Oracle spokesperson said that site selection and delivery schedules were closely coordinated and jointly confirmed with OpenAI after the agreement was signed. "There have been no delays at any of the sites required to fulfill contractual obligations, and all milestones remain on track," the spokesperson added. Oracle did not specify when the cloud computing infrastructure for OpenAI would officially launch.
In September, OpenAI disclosed a five-year partnership with Oracle worth over $300 billion. Clay Magouyrk, one of Oracle's newly appointed co-CEOs, also affirmed during an October analyst meeting that the company maintains a "strong working relationship" with OpenAI.
For Oracle, a 48-year-old company historically reliant on database software and enterprise applications, the OpenAI collaboration represents a relatively new business direction. While its cloud infrastructure segment has expanded rapidly in recent years—now contributing over a quarter of revenue—it still trails major cloud providers like Amazon, Microsoft, and Google in scale.
OpenAI is pursuing multiple partnerships to meet future computing demands. In September, Nvidia (NVDA.US) announced a letter of intent to deploy at least 10 gigawatts of its equipment for OpenAI, with the first phase expected to launch in late 2026. Both parties expressed hopes to finalize strategic cooperation details within weeks, though no official updates have followed. Nvidia cautioned in a November filing that "there can be no assurance" of reaching a definitive agreement with OpenAI.
OpenAI heavily relies on Nvidia GPUs for products like ChatGPT and is also exploring custom chips through a collaboration with Broadcom (AVGO.US). Broadcom CEO Hock Tan outlined a tentative 2027–2029 timeframe for the project during Thursday's earnings call, estimating a total scale of 10 gigawatts. "This aligns with OpenAI’s direction as a highly respected and valuable partner, but we don’t expect significant contributions until 2026," Tan noted.