OpenAI and other AI software startups sit at the economic center of the generative AI economy. End users pay OpenAI for the use of its software, and OpenAI pays the AI cloud providers to run those models. In turn, those cloud providers pay Nvidia and other semiconductor companies for their chips, particularly graphics processing units (GPUs).
These cloud providers are now building so many data centers that electricity has become the biggest bottleneck in the overall AI value chain, according to some sources, including the Financial Times. But the Financial Times argues that this problem can be easily solved (“AI bubble trouble talk is overblown”), and therefore that there is no bubble.
The AI industry’s biggest problem
The flaw in this logic is that AI’s biggest problem is not access to electricity or the construction of data centers; it is that end users are not paying prices that cover the cost of those data centers and that electricity. Why?
The AI startups set prices that are much lower than their costs. This has led to both big losses for them and huge revenues for AI cloud companies such as Microsoft, Google, and Oracle, which represent a big percentage of the AI bubble.
A second problem is that many investors don’t understand this dynamic. They look at the big revenues and profits of the AI cloud and chip suppliers and think, “I need to buy those shares because this AI is so great!”
But these companies are not AI companies; their services are merely used to run AI software models. Investors assume that even if those models don’t work well, AI cloud and chip providers can still make big money. And currently, they do! That’s why, by one estimate, the value of AI-related companies (and there are many) has risen by $35 trillion since early 2023, a number that is completely mind-boggling.
Why am I suggesting that AI software models don’t work well?
In addition to the many studies that reached that conclusion (from MIT, Atlassian, and the University of Chicago), there is the fact that the revenues these startups collect from end users are incredibly small compared to the revenues being paid to the AI cloud and chip providers. Revenues are a great measure of how much customers like a new technology. Furthermore, it is OpenAI and the other AI software companies who pay the AI cloud providers. If their revenues are small, they cannot continue to pay the cloud and semiconductor providers, and the AI economy is unsustainable.
This brings us to the $64,000 question (which is becoming the $64 trillion question): How big are OpenAI’s (and the other AI startups’) losses, and thus how suspicious should we be of the big revenues for AI cloud providers and Nvidia? This question matters because the profits of the AI cloud and semiconductor providers are only sustainable if those losses aren’t large.
Are OpenAI’s losses sustainable?
There is already evidence that OpenAI’s losses are huge. But recent evidence from Microsoft’s income statements suggests that these losses are much bigger than anyone thought. The Verge says: “Microsoft receives 20% of the revenue OpenAI earns for ChatGPT and the AI startup’s API platform, but Microsoft also invoices OpenAI for inferencing services. As Microsoft runs an Azure OpenAI service that offers OpenAI’s models directly to businesses, Microsoft also pays 20% of its revenue from this business directly to OpenAI.”
This is the first opportunity to check whether OpenAI has been releasing accurate figures, because OpenAI does not have an independent auditor check its books the way publicly traded companies such as Microsoft must.
Ed Zitron is one person who has compared the income statements from Microsoft with the figures released by OpenAI. He says: “according to the documents, Microsoft received $493.8 million in revenue share payments in CY2024 from OpenAI — implying revenues for CY2024 of at least $2.469 billion [for OpenAI], or around $1.23 billion less than the $3.7 billion that has been previously reported.”
Zitron continues: “Similarly, for 1H of CY2025, Microsoft received $454 million as part of its revenue share agreement, implying OpenAI’s revenues for that six-month period were at least $2.3 billion, or around $2 billion less than the $4.3 billion previously reported. Through September, Microsoft’s revenue share payments totaled $865 million, implying OpenAI’s revenues are at least $4.3 billion. According to Sam Altman, OpenAI’s revenue is ‘well more’ than $13 billion. I am not sure how to reconcile that statement with the documents I have viewed.”
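To make the implied-revenue arithmetic above easy to check, here is a minimal sketch, assuming a flat 20% revenue share and using only the figures quoted above; the function name and the tidy structure are my own illustration, not anything taken from the underlying documents.

```python
# Back-of-the-envelope check of the implied-revenue figures above,
# assuming a flat 20% revenue share from OpenAI to Microsoft
# (a simplification; the actual agreement may be more complicated).

REVENUE_SHARE = 0.20  # Microsoft's reported share of OpenAI's revenue

def implied_openai_revenue(payment_to_microsoft: float) -> float:
    """Minimum OpenAI revenue implied by a given revenue-share payment."""
    return payment_to_microsoft / REVENUE_SHARE

# period: (revenue-share payment to Microsoft, OpenAI revenue as previously reported)
periods = {
    "CY2024":    (493.8e6, 3.7e9),
    "1H CY2025": (454.0e6, 4.3e9),
}

for period, (payment, reported) in periods.items():
    implied = implied_openai_revenue(payment)
    gap = reported - implied
    print(f"{period}: implied revenue >= ${implied / 1e9:.2f}B, "
          f"previously reported ${reported / 1e9:.1f}B, gap ${gap / 1e9:.2f}B")
```

The gaps this prints (roughly $1.23 billion for CY2024 and $2 billion for 1H CY2025) are the same ones Zitron cites, which is the point: the implied figures follow mechanically from the 20% assumption.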
We can argue about the true numbers. But the point is that nobody should be forced to go through these mental gymnastics concerning what the proponents of generative AI call “the most important technology of the century.”
Even the Wall Street Journal said on November 12: “Microsoft’s Dealings With OpenAI Still Need a Lot More Sunlight. Company’s new disclosures only underscore how much remains hidden.”
I am happy that the WSJ is finally realizing that there is something fishy about the AI economy, but a more recent article on AI suggests that it still doesn’t understand the underlying problem.
The article tells us that “much of the AI-related profits [for big tech companies] come from being a supplier to, or investor in, the AI startups whose losses are much bigger than those of the big tech companies.” And the AI startups are “losing money as fast as they can raise it, and plan to keep on doing so for years.”
It also says:
A hefty part of the spending that generated OpenAI’s loss goes to highly paid engineers, but a lot goes into renting Nvidia chips from Microsoft’s cloud service—with much more to come. OpenAI committed to spend $250 billion more on Microsoft’s cloud and has signed a $300 billion deal with Oracle, $22 billion with CoreWeave and $38 billion with Amazon, which is a big investor in rival Anthropic.
But then we are hit with a zinger:
It might all work out. The chatbots are near-magical experiences until they make a basic error like thinking 5.11 is bigger than 5.9, a problem even the latest versions still suffer from sometimes. Fix these, fix the gaping security holes and stop them “hallucinating,” or making up their own facts, and many more businesses and individuals will be willing to pay. New products based on the same underlying technology could become ubiquitous, and eventually transform society.
Yeah right. “‘Hallucinations’ are an inherent characteristic, not a temporary bug, of current generative AI models, stemming from their fundamental design to predict statistically plausible outputs rather than verify facts.” Or at least that’s what Google’s AI says. We should try to reduce the frequency of hallucinations, but it will be hard.
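To make the 5.11-versus-5.9 example concrete: as decimal numbers, 5.11 is smaller than 5.9, but the same strings sort the other way if they are read as software-style version numbers, which is one commonly suggested (though unproven) explanation for why text-trained models stumble here. The small sketch below only illustrates the two readings; the version_key helper is hypothetical.

```python
# As a number, 5.11 is smaller than 5.9 (i.e., 5.11 < 5.90).
a, b = 5.11, 5.9
print(a > b)  # False: 5.11 is not bigger than 5.9

# But read as dotted, software-style version strings, "5.11" comes after "5.9".
# (version_key is an illustrative helper, not anything from the article.)
def version_key(s: str) -> tuple:
    return tuple(int(part) for part in s.split("."))

print(version_key("5.11") > version_key("5.9"))  # True: 5.11 > 5.9 as versions
```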
In the meantime, let’s require AI software startups to release audited numbers so that we can at least understand the current state of the AI economy.
