Note By Ryunsu

Jul 08, 2024

AI’s 700 Trillion-Won Problem

Ryunsu Sung
David Cahn, a partner at Sequoia Capital, the world’s largest VC, raised this concern in his piece titled “AI’s $600 Billion Question.” Although the article was published on June 20, it has only started to gain wider attention on social media over the past few days.

In today’s column, based on David Cahn’s original essay, I will interpret his framework and point out its flaws. My view is that his concerns are valid, but the way he presents them is highly misleading.

AI-related revenue required to recoup investment – Sequoia Capital

“Note: this metric is easy to compute directly. Just take Nvidia’s annual revenue forecast and multiply by 2 to account for the total cost of ownership (TCO) of AI data centers (GPUs are half of the TCO, and the other half includes energy, buildings, backup generators, etc.). Then multiply by 2 again to reflect a 50% gross margin for the end users of the GPUs (e.g., startups or businesses buying AI compute from Azure, AWS, or GCP that need to generate revenue).”


For example, if you buy $1 billion worth of Nvidia GPUs, the cloud provider’s TCO would be $2 billion. Applying a 50% software margin on top of that, you would need to generate $4 billion in revenue to justify the $1 billion investment.
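Cahn’s back-of-the-envelope multiplier can be sketched in a few lines. The 2× TCO factor and the 50% end-user gross margin are his stated assumptions, not measured figures:

```python
def required_revenue(gpu_spend: float) -> float:
    """Estimate the end-user revenue needed to justify a GPU purchase,
    using Cahn's rule of thumb."""
    tco = gpu_spend * 2          # GPUs are ~half of data-center TCO
    gross_margin = 0.5           # assumed 50% gross margin for the end user
    return tco / gross_margin    # revenue needed to cover TCO at that margin

# Example from the text: $1B of GPUs -> $4B of required revenue
print(required_revenue(1e9))  # 4000000000.0
```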

This metric was likely designed to explain GPU economics to as many people as possible in the simplest way, but it does not accurately reflect reality.

First, there is no discussion of the time value of money. Like most IT hardware, GPUs typically have a useful life of about four years, and TCO likewise refers to the total cost of ownership over that four-year period. Taken literally, the author’s logic implies that revenue over the next four years must reach four times the GPU purchase amount in order to fully recoup the investment. As of Q4 2024, spreading that required four-year total evenly, annual recurring revenue from AI software of just $150 billion would suffice.
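The point about spreading the target over the hardware’s life can be made concrete. A minimal sketch, assuming a four-year GPU life and, optionally, a discount rate to actually account for the time value of money (the annuity formula and the 10% rate in the example are my additions, not figures from the text):

```python
def annual_revenue_needed(gpu_spend: float, life_years: int = 4,
                          discount_rate: float = 0.0) -> float:
    """Annual revenue needed over the hardware's life to cover
    Cahn's 4x-of-GPU-spend target (2x TCO, 50% gross margin)."""
    total_needed = gpu_spend * 4
    if discount_rate == 0.0:
        return total_needed / life_years
    # Level annual payment whose present value equals total_needed
    r = discount_rate
    annuity_factor = (1 - (1 + r) ** -life_years) / r
    return total_needed / annuity_factor

# $150B of GPUs -> $600B over the life; spread evenly: $150B per year
print(annual_revenue_needed(150e9))  # 150000000000.0
# With a 10% discount rate the annual bar rises, but stays far below 4x
print(round(annual_revenue_needed(150e9, discount_rate=0.10) / 1e9, 1))  # 189.3
```

Even with a fairly aggressive discount rate, the annual revenue bar is closer to the purchase amount itself than to four times it.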

I do not believe Q4 annualized AI software revenue needs to be as high as $150 billion, because AI software revenue is following a J-curve.

Back to the original text:


What has changed since September 2023?

The supply shortage has eased: Late 2023 was the peak of the GPU supply crunch. Startups were calling their VCs and anyone they could talk to, asking for help securing GPUs. Today, those concerns have almost completely dissipated. Most people I speak with now say they can obtain GPUs relatively easily with reasonable lead times.

GPU stockpiles are building up: Nvidia reported that roughly half of its Q4 data center revenue came from the major cloud providers. Microsoft alone is estimated to account for about 22% of Nvidia’s Q4 revenue. Capital expenditures are reaching historic levels. These investments were a central theme in Big Tech’s Q1 2024 earnings, and CEOs have communicated this clearly to the market: “We are going to invest in GPUs whether you like it or not.” Hardware stockpiling is nothing new, and once inventories become large enough and demand slows, it will trigger a pullback in orders.

OpenAI still dominates AI revenue: The Information recently reported that OpenAI’s revenue has grown from $1.6 billion at the end of 2023 to $3.4 billion today. While we have seen a handful of startups scale their revenue to the $100 million range, the gap between OpenAI and other startups remains vast. Setting ChatGPT aside, how many AI products are consumers actually using today? Think about how much value you get from a $15.49 monthly Netflix subscription or $11.99 for Spotify. Over the long term, AI companies will have to deliver substantial value if they expect consumers to keep paying.

The $125 billion hole has now become a $500 billion hole: In my previous analysis, I generously assumed that Google, Microsoft, Apple, and Meta could each generate $10 billion a year in new AI-related revenue. I also assumed $5 billion each in new AI revenue for Oracle, ByteDance, Alibaba, Tencent, X, and Tesla. Even if those assumptions hold and we add a few more companies to the list, the $125 billion hole has now become a $500 billion hole.

The B100 is coming: Earlier this year, Nvidia announced its B100 chip, which delivers 2.5x better performance at 25% lower cost. This is expected to ultimately drive another surge in demand for Nvidia chips. Because the B100 offers such a dramatic improvement in price-performance over the H100, there is a high likelihood of yet another supply crunch later this year as everyone rushes to get their hands on B100s.


David Cahn argues that the original $125 billion gap (roughly 175 trillion won) has now widened to a $500 billion gap (around 700 trillion won). He adds that this is based on the generous assumption that Google, Microsoft, Apple, and Meta can each generate $10 billion in annual AI revenue. Even if the major Big Tech firms and AI startups together produce $100 billion a year in AI-related revenue, there would still be a $500 billion shortfall.

However, as mentioned earlier, the author did not factor in the time value of money. If companies around the world are already generating $100 billion in annual recurring revenue from AI, then over the next four years they will generate at least $400 billion in revenue, which already exceeds the $300 billion TCO the author calculated. At the very least, it means they won’t be losing money.
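That break-even check can be written out directly. The $100 billion ARR and the $300 billion TCO are the figures from the text; assuming, as the author does, that revenue stays at least flat over the GPUs’ four-year life:

```python
arr = 100e9          # assumed annual recurring AI revenue, worldwide (from the text)
life_years = 4       # typical GPU useful life
tco = 300e9          # Cahn's implied four-year data-center TCO (2x Nvidia revenue)

four_year_revenue = arr * life_years
print(four_year_revenue >= tco)  # True: lifetime revenue covers the TCO
```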

Is revenue really what matters?

Improving ad targeting for YouTube videos, optimizing delivery routes for Amazon trucks, automating customer service, and detecting fraud are all ways AI investments can generate meaningful returns without ever showing up as “AI revenue.” The idea that AI hardware investment must, by definition, be matched by corresponding revenue is therefore an extremely conservative framing.

The easiest way to challenge the original argument is to look at the assumption around productivity gains. If the adoption of AI could make 10% of jobs in developed countries 10% more efficient, global GDP would rise by $600 billion. That figure lines up exactly with what Sequoia Capital’s David Cahn is asking for.
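As a rough check on that figure: the 10% of jobs and 10% efficiency shares are from the text, while the ~$60 trillion combined GDP of developed economies is my assumption, along with the simplification that output scales proportionally with labor productivity:

```python
developed_gdp = 60e12  # assumed combined GDP of developed economies, USD
jobs_pct = 10          # percent of jobs made more efficient (from the text)
gain_pct = 10          # productivity uplift per affected job (from the text)

gdp_uplift = developed_gdp * jobs_pct * gain_pct / 10_000
print(gdp_uplift)  # 600000000000.0 -> the $600B Cahn is asking for
```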

AI models are already being actively used in many legal and knowledge-work fields, but the resulting productivity gains are not fully captured in the statistics. For example, if you automate a large portion of your work using AI and never tell your manager, that productivity gain simply shows up as more leisure time. Your extra leisure time will be spent on goods and services created by others, which is a net positive for the economy. In the near future, companies that track and harness these efficiency gains among their employees will see higher productivity than those that don’t, and competitive pressure will eventually push every company to use AI to boost productivity and cut operating costs.

As long as there is this kind of broad-based productivity uplift, massive spending on AI GPUs remains rational even if it does not immediately translate into additional revenue.
