Sep 17, 2025
Oracle’s Blowout AI Results: The Birth of a Cloud Player That Could Overtake Amazon? The GPU Investment Cycle Enters Bubble Territory
Ryunsu Sung

- Investors Cheer a 359% Surge in RPO
- RPO: What Exactly Are Remaining Performance Obligations?
- Who Are Oracle’s Customers?
- OpenAI Accounts for $300 Billion
- Tracing AI Capex Through Alphabet’s Financial Statements
- Oracle Goes All-In on AI — With Nvidia’s Support
- The Hidden Meaning Behind Microsoft’s CEO Saying, “I’m Good for Our $80 Billion”
- GPU Rental Economics Under Long-Term Contract Pricing, and the Eventual Bursting of the Bubble
Investors Cheer a 359% Surge in RPO
Oracle, the U.S. database and cloud computing provider, reported in its fiscal Q1 2026 earnings release on September 9 (U.S. time) that its Remaining Performance Obligations (RPO) had surged 359% year over year to $455 billion. On the disclosure, ORCL shares spiked as much as 43% across after-hours and next-day intraday trading.
RPO: What Exactly Are Remaining Performance Obligations?
Under U.S. GAAP, RPO is a forward-looking revenue metric that public companies are required to disclose. Put simply, you can think of it as the company’s “order backlog.” It represents the total amount of future, contractually committed revenue that Oracle’s customers have agreed to pay. The company guided that the vast majority of its disclosed RPO is expected to be realized within the next five years.
Considering that Oracle’s RPO in the same quarter a year earlier (CY Q3 2024, which corresponds to fiscal Q1 2025 for Oracle) was already a massive $99.1 billion, the fact that this figure has more than quadrupled in just one year underscores the unprecedented growth rate. Unsurprisingly, the market reacted positively, and AI hardware names such as Nvidia and SK Hynix also benefited.
Who Are Oracle’s Customers?
When I first saw in Oracle’s presentation that its RPO exceeded $455 billion, my initial suspicion was that the existing hyperscalers (AWS, Microsoft Azure, Google Cloud Platform) might be engaging in financial engineering: re-leasing a portion of their AI data center capacity from Oracle’s facilities to flatter their own cash flow statements, given the heavy cash burden of AI data center investments.
In practice, both Microsoft and Google Cloud have been signing pre-purchase agreements with CoreWeave (CRWV), a GPU-focused data center operator, to expand the AI compute capacity they can offer customers while reducing the massive cash outlays required for data center capex. That said, Google and Microsoft are still primarily meeting the growing AI compute demand through their own data centers, and, at least for now, there is no visible, disclosed partnership with Oracle.
OpenAI Accounts for $300 Billion
A closer look shows that OpenAI is the customer responsible for almost all of the increase in RPO. OpenAI CEO Sam Altman has repeatedly complained in recent years that supply is woefully inadequate relative to the company’s exponentially growing AI compute needs.
OpenAI’s annual revenue this year is projected at $13 billion, more than triple last year’s $4 billion. Against that backdrop, Oracle has signed a deal to provide OpenAI with an average of $60 billion per year in computing capacity for five years starting in 2027. Because the bulk of OpenAI’s costs today are tied to compute resources, the company is effectively betting that its revenue will grow at least fivefold by the year after next, and has signed this contract on that assumption. If its current explosive growth pace continues, that target is achievable, but given the sheer scale involved, Oracle is clearly taking a substantial risk.
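A quick back-of-the-envelope check makes the scale of that bet concrete. The figures come from the text above; this is an illustration, not a model of OpenAI’s actual finances:

```python
# Back-of-the-envelope check on the OpenAI/Oracle deal (figures from the text;
# an illustration, not a model of OpenAI's actual cost structure).
revenue_2025 = 13e9         # OpenAI's projected revenue this year ($)
annual_commitment = 60e9    # average annual compute commitment to Oracle ($)

# Multiple by which revenue must grow just to cover the Oracle bill alone
required_multiple = annual_commitment / revenue_2025
print(f"Revenue must grow roughly {required_multiple:.1f}x "
      f"just to cover the Oracle bill")
# -> Revenue must grow roughly 4.6x just to cover the Oracle bill
```

Covering the commitment alone requires about a 4.6x jump; covering it while also paying for everything else is where the “at least fivefold” framing comes from.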
Tracing AI Capex Through Alphabet’s Financial Statements
Alphabet (GOOG), the parent company of Google Cloud, provides a clear window into surging AI data center costs through its 10-Q quarterly filings. Just between Q1 and Q2 of this year, based on changes in purchase commitments, data center lease obligations, and CAPEX, AI-related investment appears to have increased by roughly 23%, which annualizes to about 128%.
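The annualization here is plain compounding of the quarter-over-quarter rate. A minimal sketch, where the ~23% input is the estimate inferred in the text rather than a disclosed figure:

```python
# Annualize a quarterly growth rate by compounding four quarters: (1+q)^4 - 1.
# The ~23% input is the Q1 -> Q2 estimate from the text, not a disclosed figure.
quarterly_growth = 0.23

annualized = (1 + quarterly_growth) ** 4 - 1
print(f"Annualized growth: {annualized:.0%}")
# -> Annualized growth: 129%  (in line with the ~128% cited)
```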
Looking at the fifth column of the table, total cash flow, you can see that net cash flow of $20 billion in Q1 shrank sharply to $3.2 billion in Q2. Depreciation explains the gap between cash and reported earnings: GPU servers are typically depreciated over six years, so if an AI data center operator spends $60 billion of cash on CAPEX this year, only $10 billion a year will show up as an expense on the income statement.
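The depreciation arithmetic works out as follows under a straight-line schedule (the $60 billion figure is the hypothetical from the text):

```python
# Straight-line depreciation: cash leaves up front, but the income statement
# recognizes only capex / useful_life per year. $60B is the text's hypothetical.
capex = 60e9       # cash spent on GPU servers this year ($)
useful_life = 6    # years, the current hyperscaler assumption

annual_expense = capex / useful_life
print(f"Annual depreciation expense: ${annual_expense / 1e9:.0f}B")
# -> Annual depreciation expense: $10B
# Under the earlier three-year assumption the same capex would expense at
# $20B a year, i.e. doubling the useful life halves the reported expense.
```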
Oracle Goes All-In on AI — With Nvidia’s Support
Up to now, the three leading hyperscale cloud providers (Alphabet, Microsoft, and Amazon) have all funded their AI data center CAPEX primarily out of internal cash flow. In other words, they have used the robust cash generation of their core advertising and software businesses to subsidize the highly capital-intensive expansion into cloud computing. Given their strong cash flow and credit profiles, they could have tapped cheap debt to accelerate expansion even further, but instead chose the more conservative approach of investing only what they earn.

This has been feasible in part because of the rise of so-called “neo-cloud” players like CoreWeave, which offer GPU-only cloud computing services. These firms have taken on higher interest costs to rapidly expand data center capacity in response to explosive demand for compute, while partially hedging their risk through long-term purchase agreements.

Nvidia, for its part, tends to prioritize CoreWeave and Oracle over Alphabet, Microsoft, and Amazon in GPU allocation: the incumbent hyperscalers are all developing their own chips to reduce dependence on Nvidia GPUs, and steering scarce supply toward committed buyers is Nvidia’s way of pushing back. Oracle founder Larry Ellison and Nvidia CEO Jensen Huang have also been close for many years. Nvidia can afford this kind of favoritism because its GPUs still enjoy a commanding performance lead over rival products and supply remains constrained relative to demand.
Oracle’s approach to AI capex is markedly different from that of the big three: it is taking a highly aggressive stance. Historically a software company focused on databases and ERP systems, Oracle saw its CAPEX line item jump from $7.8 billion in fiscal Q1 2025 to $27.4 billion just one year later, roughly a 3.5-fold increase. As a result, its quarterly free cash flow swung from a positive $11.2 billion to a negative $5.8 billion, and to honor its contracted commitments, its future CAPEX will have to grow several times over from current levels. Oracle ended the quarter with just over $10 billion in cash and cash equivalents, which strongly suggests it will soon tap the corporate bond market more aggressively. Because Oracle’s credit quality is significantly higher than that of most neo-cloud players, it can layer its own credit on top of GPU and data center collateral to raise capital at a lower cost, thereby boosting returns on invested capital. In addition, Oracle has a technical edge in networking, which gives it a cost advantage over rivals in scenarios where multiple data centers must be interconnected to train AI models, a point that chairman and CTO Larry Ellison has been keen to emphasize to investors.
The Hidden Meaning Behind Microsoft’s CEO Saying, “I’m Good for Our $80 Billion”
When the $500 billion Stargate project—an initiative to invest in AI data centers—was announced early this year under the leadership of President Trump and Sam Altman, Elon Musk, who has a poor relationship with Altman, publicly pushed back, saying, “They don’t have the cash to invest $100 billion a year.” When Microsoft CEO Satya Nadella, whose company is a partner in Stargate, was asked about this, he sidestepped a direct rebuttal of Musk and instead reaffirmed that Microsoft plans to spend a total of $80 billion this year on data center CAPEX, replying, “Hey, I’m good for our $80 billion.”
In its earnings release at the end of July, Microsoft said it planned to spend $30 billion on capex in the following quarter, an annualized pace of roughly $120 billion. The CNBC reporter who had interviewed Satya Nadella in January poked at this: “Now he seems to think $120 billion is enough.” Microsoft’s investment plan, which had already exceeded $80 billion, thus grew another 50% in just six months, and it did so even as the company partially relaxed the exclusivity condition attached to its massive investment in OpenAI, namely that OpenAI use only Microsoft’s cloud services. Ultimately, this means that even at $120 billion a year, Microsoft still cannot fully keep up with the explosive growth in demand for GPU computing.
Amazon, Microsoft, and Google now stand at a crossroads: they must decide whether to finally remove the self‑imposed constraint of “investing only within operating cash flow” and go all‑in on AI data centers, or risk ceding a substantial share of the future cloud market to Oracle. If even one of these companies chooses the latter, the odds rise that the others will follow—and the market will definitively enter the “bubble” cycle everyone worries about.
GPU Rental Economics Under Long-Term Contract Pricing, and the Eventual Bursting of the Bubble
Until 2020, the cloud businesses of Amazon, Microsoft, and Google assumed a useful life of three years for the equipment installed in their data centers. In other words, when they bought server CPUs or networking gear, they assumed that after three years those assets would have zero remaining economic value. Citing several factors—such as slower performance gains due to the rising complexity of semiconductor process technology and improved hardware durability—they extended the useful life first to four years, then to six. Extending an asset’s useful life from three to six years doesn’t change cash flows, but it cuts annual depreciation expense in half. That boosts reported margins and makes the business look more profitable. But useful life is, at its core, an assumption based on a “reasonable forecast” of the future. No one can accurately predict what the world will look like six years from now.
Assuming a six‑year depreciation schedule for GPUs, Oracle’s cloud contract with OpenAI is expected to generate an EBIT margin of around 40%. That’s roughly in line with Oracle’s overall margin today, and it can be seen as an opportunity to grow revenue exponentially without diluting profitability. The problem is that the long‑term contract with OpenAI is unlikely to actually run for six years. The average GPU contract term for “neo‑cloud” providers is about four years, and there is no guarantee that today’s pricing will hold once those contracts expire. Even if we assume that OpenAI—Oracle’s largest customer—grows its revenue to the point where it can spend $60 billion a year on computing, new, more efficient Nvidia GPUs will almost certainly be on the market four years from now, and the current generation will likely have to be repriced lower. Nvidia’s H100, launched three years ago, is still heavily used, but it is a near certainty that its rental cost will decline further over the next three years.
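To make the repricing risk concrete, here is a toy unit-economics sketch. All inputs are illustrative assumptions (Oracle does not disclose its contract cost structure); the hardware cost and the 27% operating-cost ratio are chosen so the six-year case lands near the ~40% EBIT margin mentioned above:

```python
# Toy GPU-rental unit economics (illustrative assumptions only; not Oracle's
# disclosed figures). Shows how the margin depends on the depreciation horizon.
def ebit_margin(annual_revenue, hardware_cost, useful_life, opex_ratio):
    """EBIT margin with straight-line depreciation of the GPU fleet."""
    depreciation = hardware_cost / useful_life
    operating_costs = annual_revenue * opex_ratio  # power, staff, facilities
    return (annual_revenue - depreciation - operating_costs) / annual_revenue

# Same fleet, same revenue: amortized over 6 years vs. effectively 4.
print(f"6-year life: {ebit_margin(60e9, 120e9, 6, 0.27):.0%}")  # -> 40%
print(f"4-year life: {ebit_margin(60e9, 120e9, 4, 0.27):.0%}")  # -> 23%
```

Under these assumptions, shortening the economic life of the same fleet from six years to four roughly halves the contract’s margin, which is exactly the risk that expiring four-year contracts and cheaper next-generation GPUs pose.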
According to Silicon Data’s H100 Rental Index in the chart, the cost of renting a single H100 in June this year was about 23% lower than in September 2024. Rather than a crash, this is better understood as a “normalization” after an extreme shortage, where GPU supply was far short of demand. Considering that TSMC’s output will also increase, the GPU shortage will eventually ease. That’s when the real war begins. Having already poured astronomical sums into building data centers, companies will try to squeeze out every last dollar by cutting hourly rental prices in a race to the bottom—and that will inevitably put downward pressure on the long‑term contract prices that have so far justified massive AI CAPEX.
As investors, what we must decide now is whether to participate in this AI capex bubble, how much of it to enjoy if we do, and who will be left holding the bag when the music stops.