Generative AI, or AI for short, is the most controversial technology of the 21st century. Some believe it will lead to rapid increases in productivity; others believe it represents an existential risk to humanity; still others believe it is the biggest financial bubble of all time (one estimate puts it at $35 trillion).
I belong to the third category and have written many articles and a book (Unicorns, Hype and Bubbles) making this claim, often with Gary Smith. I also admit that I was mystified as to why more people don’t agree with me until I read Steven Pinker’s 2025 book When Everyone Knows That Everyone Knows . . .: Common Knowledge and the Mysteries of Money, Power, and Everyday Life.
Knowledge is common when almost everyone knows it and, moreover, knows that everyone else knows it. Common knowledge is essential for people to coordinate for mutual gain. Simple examples include language, geography, religion, science, humor, money, and viral advertisements. How can a language provide value unless people know the same words?
Common knowledge and the AI bubble
How is common knowledge relevant to an AI bubble? People often invest in companies not because they think the companies will succeed, but because they think others will invest. Thus, they try to invest before the others do, hoping there will be a greater fool who will pay more than they did.
Nobel Laureate Robert Shiller says that every bubble involves a narrative that wins over investors. Successful narratives combine different forms of common knowledge, be they concepts, beliefs, or legends, that sound so convincing that investors beat a path to the companies’ doors.
My skepticism about AI largely revolves around the huge losses and low revenues of AI software startups such as OpenAI and Anthropic, which would have been the death of them 40 years ago, when profits were the key signal for investment, a form of common knowledge.
This also means that the AI bubble would have burst long ago because those two startups are at the heart of the AI value chain. Individuals and organizations buy AI services from the AI startups, and those startups pay AI cloud centers, also known as data centers, to run their models. In turn, the cloud centers purchase chips from Nvidia and other semiconductor suppliers, which are currently highly profitable and valuable.
A key aspect of this bubble is that the AI software companies have set prices much lower than their costs, so users aren’t paying the full cost of the services, which is why the AI software startups have huge losses.
Profits and revenues are no longer important
The bubble hasn’t burst because the importance of profits is no longer part of common knowledge. For instance, the declining emphasis on profits can be seen in the falling percentage of startups that were profitable at IPO time, from 80% in the 1980s to less than 20% in the 2010s.
Furthermore, startups founded since 2005 have taken longer to become profitable than those founded before 2005. For instance, more than 85% of publicly traded ex-Unicorn startups were still unprofitable in 2024, even though they had been founded ten or more years earlier (Unicorns are startups valued at more than $1 billion before doing an IPO).
One reason the importance of profits is no longer part of common knowledge is that startups such as Amazon have succeeded big time despite initially having big losses. What promoters of this argument don’t say is that Amazon had only $3 billion in cumulative losses when it became profitable in its tenth year (2004), while OpenAI lost $11.5 billion in the third quarter of 2025 alone (based on analyses of Microsoft’s third-quarter 2025 earnings). This figure suggests OpenAI’s cumulative losses will be even bigger than its projected $115 billion by the time it purportedly becomes profitable in 2029.
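A back-of-the-envelope extrapolation (mine, using only the figures above and generously assuming quarterly losses merely stay flat rather than keep growing) shows why:

\[
\$11.5\text{B per quarter} \times 4 \approx \$46\text{B per year}
\]
\[
\$46\text{B per year} \times 4\ \text{years (2026 to 2029)} \approx \$184\text{B} \gg \$115\text{B projected}
\]

Even under that flat-loss assumption, cumulative losses blow well past the $115 billion projection before 2029 arrives.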
For most of the 2010s and the early 2020s, revenue growth was the key metric for startups, at least until mid-2024, when some investment banks noticed that revenues were insufficient to justify the share prices of AI-related companies. Sequoia’s David Cahn said that $600 billion in annual generative AI revenues (assuming 50% margins) was needed to justify the current investments, a figure that has undoubtedly increased since then.
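As I understand Cahn’s arithmetic (the starting figure and the two doubling factors are his rough assumptions, paraphrased from memory, not reported revenues), it runs something like this:

\[
\$150\text{B annualized GPU spend} \times 2\ \text{(other data-center costs)} = \$300\text{B total cost}
\]
\[
\$300\text{B total cost} \times 2\ \text{(for a 50\% gross margin)} = \$600\text{B required annual revenue}
\]

It is a requirement that actual generative AI revenues don’t come close to meeting.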
Number of users is more important
This required the tech bros to increasingly emphasize an old metric, the number of users, with a new flavor. While “number of users” once meant “number of paying users,” the success of Facebook, Instagram, and other social media sites convinced people that the word “paying” could be dropped. Not surprisingly, generative AI is the fastest-growing technology of all time by this metric.
Behind this metric is a belief that money-losing startups can always figure out how to monetize users later, through advertising, for instance. They never mention that generative AI services cost much more to deliver than social media services do, and thus ads probably won’t lead to profits.
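An illustrative comparison makes the point (both numbers are hypothetical, chosen only to show the shape of the problem, not measured figures):

\[
\underbrace{\$0.005}_{\text{ad revenue per impression}} \;\ll\; \underbrace{\$0.05}_{\text{inference cost per query}}
\]

Social media works because serving a feed item costs far less than the fraction of a cent its ad earns; if a generative AI answer costs cents to produce, the inequality flips and every ad-supported query loses money.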
Happy users have also created their own form of logic to support generative AI. Because they love it, they believe that everyone else will end up loving it. They like to claim that AI has revolutionized their work in building websites, making software apps, or writing emails. Thus, other people will become users, and those users will become paying users because AI is so good.
Nevertheless, when someone says they like AI, I believe them. AI probably does help them do their jobs, even if it hands downstream workers AI slop. But even if we ignore that problem, most users cannot effectively extrapolate from their job to the economy, because all jobs are different and few if any people understand all jobs. That is why common knowledge about revenues existed: spending money on a product is a way of expressing your approval.
AGI is more important
The final reason an AI bubble isn’t common knowledge involves belief in the imminence of artificial general intelligence, or AGI, a belief many have held for years if not decades. Some of this optimism disappeared after the disappointing performance of GPT-5, which most users did not consider better than GPT-4 (here and here).
Currently, the most visible improvements are reductions in the cost of tokens in cloud centers, not in the AI itself. Thus, reductions in hallucinations (which some say are actually increasing) and improvements in business outcomes are ignored, and most deployments don’t produce good returns anyway. Instead, many AI optimists focus on the increasing number of tasks that an AI can complete at 50% or 80% accuracy, even though those percentages are far too low for most corporate work processes.
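A simple illustration of why those percentages are too low: when a corporate process chains together tasks that must each succeed, per-task accuracy compounds multiplicatively (the five-step process below is hypothetical):

\[
P(\text{process succeeds}) = p^{\,n}, \qquad 0.8^{5} \approx 0.33, \qquad 0.5^{5} \approx 0.03
\]

A five-step process built from 80%-accurate tasks succeeds only about a third of the time; built from 50%-accurate tasks, almost never.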
Other AI optimists focus on intelligence metrics despite common knowledge among academics that measuring intelligence is difficult and that exams in which students regurgitate information are particularly problematic. Yet proponents of AI now like almost any metric of intelligence, or of anything else for that matter. If you believe in AI, you must believe in the efficacy of metrics, because they are essential to the success of AI, despite evidence to the contrary.
Common knowledge is always changing, and thus arguments that worked decades ago may no longer work. The tech bros have pushed for many of the changes mentioned above with help from other supporters of the startup system, including business schools and big consulting firms. These new pieces of common knowledge make it much harder to deflate the expectations for AI, and thus the bubble persists.
