CoreWeave will provide Meta with large-scale AI cloud computing capacity from multiple data centers, some of which are supported by NVIDIA’s newly launched Rubin architecture AI computing clusters.
According to Zhitong Finance APP, CoreWeave (CRWV.US), the cloud-based AI computing leasing giant known as “NVIDIA’s favored child,” has expanded its latest agreement with Meta Platforms (META.US), the parent company of Facebook, to supply up to $21 billion of AI computing infrastructure. The move builds on the $14.2 billion cloud computing agreement the two companies reached in September. CoreWeave will provide computing capacity through 2032, further deepening its relationship with the social media giant, which is racing to catch up with leading AI developers such as Anthropic and OpenAI in the large-scale AI model competition.
For Vera Rubin, NVIDIA’s newly launched flagship AI computing infrastructure platform, Meta’s additional investment in computing power signifies its advance from a technical roadmap to a system-level AI computing infrastructure with real customers, long-term orders, and commercial deployment. For the “AI-driven bull market” narrative that has, in recent years, been credited with propping up global stock markets almost single-handedly, the move significantly reinforces the underlying logic: as the parameter scale of large AI models, inference pathways, and multimodal Agentic AI (i.e., AI agent) workloads drive exponential increases in computing power consumption, tech giants’ capital expenditure remains heavily tilted toward AI computing infrastructure.
According to a statement released by CoreWeave on Thursday, under the new terms Meta has committed an additional $21 billion to acquire AI cloud computing capacity from CoreWeave. CoreWeave will provide substantial AI cloud computing capacity through multiple large AI data centers by December 2032, some of which will be supported by NVIDIA’s newly launched Vera Rubin AI computing infrastructure platform.
Additionally, the previous agreement between the two companies was originally set to last until December 2031, with an option to extend to 2032 if additional capacity was added. Therefore, the latest approximately $21 billion initial commitment for AI computing infrastructure consists of two parts: one part corresponds to new orders for new AI cloud computing capacity, while the other involves executing the capacity expansion options from the earlier agreement.
Following the announcement of the latest developments, CoreWeave’s share price surged over 8% during pre-market trading on Thursday. The stock has risen sharply by 24% year-to-date, outperforming both the S&P 500 Index and the Nasdaq 100 Index. Meanwhile, Meta’s shares gained approximately 2% in pre-market trading.
“NVIDIA’s favored child” CoreWeave secures another mega-scale computing order
CoreWeave, known as “NVIDIA’s favored child,” is part of the emerging group of “neocloud” service providers — companies that operate by leasing access to cloud-based AI computing infrastructure powered by NVIDIA AI GPUs. Its core competitors in this category include Nebius Group and Nscale. CoreWeave has consistently been one of the main beneficiaries of the AI computing supply chain amid the race among major technology companies to build the most advanced large AI models, a competition that has driven a surge in computing demand.
Undoubtedly, Meta has become one of the highest spenders in the field of AI computing infrastructure. Mark Zuckerberg, the CEO of this tech giant, plans to invest hundreds of billions of dollars in the coming years to construct, train, and run large AI models, requiring massive energy resources, computing infrastructure, and top-tier global talent.
CoreWeave also separately announced that it plans to issue $3 billion in convertible senior notes due in 2032 and $1.25 billion in senior notes due in 2031 for general corporate purposes, including repayment of outstanding debt. According to media reports citing insider sources, in February this year, the company was seeking to raise approximately $8.5 billion from several large investment banks, including Morgan Stanley and Mitsubishi UFJ Financial Group, to help finance its cloud computing capacity expansion for Meta.
As one of the earliest data center operators to lease out NVIDIA graphics processing units (GPUs) via the cloud, CoreWeave gained a first-mover advantage in meeting the surging demand for AI computing resources. This earned it the backing of NVIDIA’s venture capital arm and allowed it to secure highly sought-after NVIDIA H100/H200 and Blackwell series AI GPUs ahead of others on multiple occasions — even leading cloud service giants such as Microsoft to lease cloud-based AI computing resources from CoreWeave, hence the nickname “NVIDIA’s favored child.”
Global demand for AI computing power continues to exhibit explosive growth, which is why valuations of cloud-based AI computing leasing leaders such as Fluidstack, Nebius Group, Nscale, and CoreWeave have kept expanding this year. Demand tied to AI training and inference has pushed underlying computing infrastructure clusters to their capacity limits, and even recently expanded large-scale AI data centers cannot satisfy the remarkably robust global appetite for computing power.
Looking at the timeline over the past year, nearly all of CoreWeave’s publicly disclosed large new cloud orders have come from top-tier generative AI buyers. In March 2025, it secured a training and inference computing contract with OpenAI worth up to $11.9 billion over five years; in May, it received an additional order expansion from OpenAI worth up to $4 billion; in September, it obtained another supplementary agreement from OpenAI worth up to $6.5 billion, bringing the total contract value between the two parties to approximately $22.4 billion; in the same month, it signed a long-term computing agreement worth $14.2 billion with Meta; and this time, it secured another new or expanded agreement with Meta worth approximately $21 billion.
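As a quick back-of-the-envelope check on the timeline above (a reader’s tally of the publicly disclosed figures, not data from either company), the OpenAI contract values do sum to roughly the cited $22.4 billion total:

```python
# Publicly disclosed CoreWeave order values (USD billions), per the timeline above.
openai_orders = {
    "Mar 2025 training/inference contract": 11.9,
    "May 2025 expansion": 4.0,
    "Sep 2025 supplementary agreement": 6.5,
}
meta_orders = {
    "Sep 2025 long-term agreement": 14.2,
    "Latest new/expanded agreement": 21.0,
}

# Sum each customer's disclosed orders.
openai_total = round(sum(openai_orders.values()), 1)
meta_total = round(sum(meta_orders.values()), 1)

print(f"OpenAI total: ${openai_total}B")  # matches the ~$22.4B cited above
print(f"Meta total:   ${meta_total}B")
```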
The “AI bull market narrative” is bound to make waves once again
For the “AI bull market narrative,” which has underpinned the global stock market’s bullish trend in recent years, the most significant aspect of this transaction is that it reinforces the thesis of long-term contracted cloud AI computing expansion driven by massive AI inference workloads. It suggests that demand for AI computing power has not peaked, as skeptics of the “AI bubble narrative” have argued, but is instead transitioning fully from training peaks to long-term capacity expansion for contextual inference, Agentic AI, and production-grade deployment. Against the backdrop of significantly eased geopolitical tensions, the “AI bull market narrative” is bound to make waves once again.
Additionally, Meta Platforms securing early deployment of the Rubin architecture through CoreWeave essentially confirms a major trend for the market: the next wave of AI capital expenditure will not only involve continued purchases of the Blackwell architecture but also begin allocating substantial budgets for NVIDIA’s newly launched Rubin generation of ‘rack-level/factory-level’ AI computing power infrastructure systems. This represents a very strong demand anchor for NVIDIA itself, as well as for the entire AI computing power supply chain, including switching chips, high-performance networking, liquid cooling, OCS switches and optical interconnects, optical modules/silicon photonics circuits, HBM/storage, 2.5D/3D advanced packaging, and data center power chains.
At the GTC conference in March, NVIDIA CEO Jensen Huang unveiled an unprecedented revenue blueprint for AI computing infrastructure. He told global investors that, driven by robust demand for Blackwell architecture GPU computing power and even stronger demand for the soon-to-be-mass-produced Vera Rubin architecture AI computing systems, NVIDIA’s cumulative revenue in the AI chip sector from 2025 to 2027 could reach at least $1 trillion — far exceeding the $500 billion AI computing infrastructure blueprint for 2026 presented at the previous GTC conference.
As model sizes, inference pathways, and multimodal Agentic AI workloads drive exponential expansion in computing resource consumption, technology giants are increasingly concentrating their capital expenditures on AI computing infrastructure. Global investors will continue to anchor the “AI bull market narrative” around NVIDIA, Google TPU clusters, and AMD’s product iterations and AI computing cluster delivery expectations as one of the most certain growth investment themes in the global stock market. This also means that investment themes closely tied to AI training and inference — such as electricity, liquid cooling systems, and optical interconnect supply chains — will remain among the hottest sectors, alongside NVIDIA, AMD, Broadcom, Taiwan Semiconductor, Micron, and other AI computing leaders, even as geopolitical uncertainties in the Middle East persist.
Based on the latest analyst forecasts compiled by institutions, Amazon, Alphabet (the parent company of Google), Meta Platforms, Oracle, and Microsoft are expected to cumulatively spend approximately $650 billion on AI-related capital expenditures in 2026, and some analysts believe total spending could exceed $700 billion — a year-on-year increase in AI capital expenditure of more than 70%. Notably, these five U.S. super-tech giants are projected to invest around $1.5 trillion in building out AI computing infrastructure from 2023 to 2026, compared with roughly $600 billion cumulatively invested over the entire period prior to 2022.