Saturday, August 23, 2025

"It's almost tragic": Bubble or not, the AI backlash is validating what one researcher and critic has been saying for years

First it was the release of GPT-5 that OpenAI "completely messed up," according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. "When bubbles happen, smart people get overexcited about a kernel of truth," The Verge reported, quoting the OpenAI CEO. Then came the sweeping MIT survey that put a number on what so many people seem to be feeling: a massive 95% of generative AI pilots at companies are failing.

A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into Dotcom Bubble 2.0. To be sure, fears about the AI trade aren't the only factor moving markets, as evidenced by the S&P 500 snapping a five-day losing streak on Friday after Jerome Powell's quasi-dovish comments at Jackson Hole, Wyoming: even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

Gary Marcus has been warning about the limitations of large language models (LLMs) since 2019, and about a potential bubble and problematic economics since 2023, so his words carry particular weight. The cognitive scientist turned AI researcher and entrepreneur has been active in the machine learning field since 2015, when he founded Geometric Intelligence. The company was acquired by Uber in 2016, and Marcus left shortly after, working at other AI startups while offering vocal criticism of what he sees as dead ends in the AI field.

Still, Marcus doesn't see himself as a "Cassandra," and he's not trying to be, he told The Shiro Copr in an interview. Cassandra, a figure from Greek tragedy, uttered accurate prophecies but wasn't believed until it was too late. "I see myself as a realist and as someone who foresaw the problems and was correct about them."

Marcus attributes the wobble in markets to GPT-5 above all. It's not a failure, he said, but it's "underwhelming," a "disappointment," and that's "really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn't," he added, referring to artificial general intelligence, a hypothetical AI with human-like reasoning abilities. "It's not a terrible model, it's not like it's bad," he said, but "it's not the quantum leap that a lot of people were led to expect."

Marcus said this shouldn't be news to anyone paying attention, as he argued in 2022 that "Deep learning is hitting a wall." To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will burst. He told The Shiro Copr that "crowd psychology" is definitely at work, and that he thinks every day about the John Maynard Keynes quote, "The market can stay irrational longer than you can stay solvent," and about Looney Tunes' Wile E. Coyote chasing the Road Runner off the edge of a cliff and hanging in midair before falling to Earth.

"That's what I feel like," Marcus said. "We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing."

Building warning signs

Bubble talk began heating up in July, when Apollo Global Management's chief economist, Torsten Slok, widely read and influential on Wall Street, issued a striking calculation while stopping short of declaring a bubble. "The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s," he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had "become detached from their earnings."

In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investment contributed as much to GDP growth in the first half of 2025 as all of consumer spending, which is notable since consumer spending accounts for about 70% of GDP. (The Wall Street Journal's Christopher Mims had offered a similar calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times opinion article arguing that "it is uncertain how soon artificial general intelligence can be achieved."

This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the "New Washington Consensus," based in part on AGI being "just around the corner." On his Substack, Farrell wrote that Schmidt's op-ed shows his previous set of assumptions is "visibly crumbling away," while qualifying that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell's title for that post: "The twilight of tech unilateralism." He concluded: "If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to."

Finally, the vibe in the summer of 2025 is shifting into a growing AI backlash. Darrell West warned in a Brookings commentary in May that the tide of both public and scientific opinion would soon turn against AI's masters of the universe. Soon after, Fast Company predicted the summer would be full of "AI slop." By early August, Axios had identified the slang term "clunker" being widely applied to AI mishaps, particularly customer service gone awry.

History says: short-term pain, long-term gain

John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace for a crash but to prepare for a future "golden age" of AI nonetheless. He highlighted the data center buildout: a staggering $750 billion of investment from Big Tech over 2024 and 2025, part of a global rollout projected to hit $3 trillion by 2029. Thornhill turned to financial historians for some comfort and some perspective. Over and over, the record shows that this type of frenzied investment triggers bubbles, dramatic crashes, and creative destruction, but that durable value is eventually realized.

He noted that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She identified AI as the fifth technological revolution in a sequence that began in the late 18th century and has left the modern economy with railroad infrastructure and personal computers, among other things; each revolution featured a bubble and a crash at some point. Thornhill did not cite him in this particular column, but Edward Chancellor documented similar patterns in his classic Devil Take the Hindmost, a book notable not just for its discussion of bubbles but for anticipating the dotcom bust before it happened.

Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, when he argued that a telltale bubble moment had arrived: an unusually large number of market participants saying that prices are too high, yet insisting that they're likely to rise further.

Wall Street is cautious, but not calling a bubble

Wall Street banks are largely not calling a bubble. Morgan Stanley recently released a note projecting huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, agreed with the caution highlighted in the news-making MIT research. It warned investors to expect a period of "capex indigestion" accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI's ChatGPT, Alphabet's Gemini, and AI-powered CRM systems.

Bank of America Research wrote a note in early August, before the launch of GPT-5, casting AI as part of a worker-productivity "sea change" that will drive an ongoing "innovation premium" for S&P 500 companies. Head of U.S. equity strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbocharge this. "I don't think it's necessarily a bubble in the S&P 500," she told The Shiro Copr in an interview, before adding, "I think there are other areas where it's becoming a little bit bubble-like."

Subramanian mentioned smaller companies and potentially private lending as areas "that potentially have re-rated too aggressively." She is also concerned about the risk of companies diving into data centers to such an extent, noting that this represents a shift back toward a more asset-heavy approach, instead of the asset-light approach that increasingly distinguishes top performance in the U.S. economy.

"I mean, this is new," she said. "Technology used to be very asset-light and just spent money on R&D and innovation, and now they're spending money to build these data centers," adding that she sees it as potentially marking the end of their asset-light, high-margin existence and basically transforming them into "very asset-intensive and more manufacturing-like than they used to be." From her perspective, that warrants a lower multiple in the stock market. When asked if that is tantamount to a bubble, if not a correction, she said "it's starting to happen in places," and she agrees with the comparison to the railroad boom.

The math and the ghost in the machine

Gary Marcus also cited basic math as a reason for his concerns: nearly 500 AI unicorns are valued at a combined $2.7 trillion, a figure "that just doesn't make sense relative to how much revenue is coming [in]," he said. Marcus pointed to OpenAI reporting $1 billion in revenue in July while still not being profitable. Speculating, he estimated that OpenAI has roughly half of the AI market and offered a rough calculation that this implies about $25 billion in annual revenue for the sector, "which is not nothing, but it costs a lot of money to do this, and there are trillions of dollars [invested]."
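The back-of-the-envelope arithmetic, assuming the reported $1 billion is a monthly run rate, works out like this: $1 billion a month is roughly $12 billion a year for OpenAI, and if OpenAI accounts for about half the market, the sector as a whole takes in on the order of $24 billion to $25 billion annually, set against trillions of dollars in valuations and committed investment.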

So if Marcus is correct, why haven't people been listening to him for years? He said he's been warning about this, too, calling it the "gullibility gap" in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn't reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, learning about the "anthropomorphization people do. ... [they] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they end up using them as a companion, and they end up thinking that they're closer to solving these problems than they actually are." He thinks the bubble has inflated to its current extent largely because of the human impulse to project ourselves onto things, an impulse a cognitive scientist is trained to resist.

These machines might seem like they're human, but "they don't actually work like you," Marcus said, adding, "this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don't really understand the problem. I mean, it's almost tragic."

Subramanian, for her part, said she thinks "people love this AI technology because it feels like sorcery. It feels a little magical and mystical ... the truth is it hasn't really changed the world that much yet, but I don't think it's something to be dismissed." She has become fascinated by it herself: "I'm already using ChatGPT more than my kids are. I mean, it's kind of interesting to see this. I use ChatGPT for everything now."

This story was originally featured on The Shiro Copr.
