Merriam-Webster’s word of the year for 2025 is “slop”, which the dictionary defines as low-quality digital content produced at scale by AI. The choice captures a growing unease about artificial intelligence and a tension at the heart of the current boom: businesses are rushing to deploy generative AI even as the economic and social downsides become harder to ignore.
Critics argue that the basic economics of today’s AI industry are shaky. Commentator Ed Zitron says the “unit economics” – the cost of handling a single user’s requests versus what companies can charge – “just don’t add up” and describes them in far less polite terms.
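As a rough illustration of what a negative unit margin looks like, here is a minimal sketch in which every number is invented for the example – the price, usage and cost figures are assumptions, not figures from Zitron or any company. The point is simply that a flat subscription goes underwater whenever per-request inference costs, multiplied by heavy usage, exceed the monthly price.

```python
# Illustrative sketch only: every figure below is an assumption,
# not a reported number from Zitron or any AI company.

monthly_price = 20.00        # assumed flat subscription price per user ($)
requests_per_month = 1_500   # assumed requests made by a heavy user
cost_per_request = 0.02      # assumed inference cost per request ($)

monthly_cost = requests_per_month * cost_per_request
margin = monthly_price - monthly_cost

print(f"revenue per user: ${monthly_price:.2f}")
print(f"cost per user:    ${monthly_cost:.2f}")
print(f"margin per user:  ${margin:.2f}")  # negative under these assumptions
```

Under these invented numbers the provider loses $10 a month on such a user; the real dispute is over what the actual costs and usage patterns are.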
Revenues from AI are growing quickly as more paying customers come on board. Yet they are still far from covering the massive investment under way: $400bn (£297bn) in 2025, with even higher sums projected for 2026.
Author and activist Cory Doctorow is blunt about the implications. He argues that the leading AI companies are fundamentally loss-making, kept afloat only by “hundreds of billions of dollars in other people’s money” that are being burned to sustain the boom.
Losses are not unusual for frontier technologies. What is different, critics say, is that profitability is not getting closer as systems improve. Each generation of large language models (LLMs) has tended to demand more data, more energy and more highly paid specialists’ time, pushing costs higher instead of lower.
The infrastructure supporting this expansion is also expensive. The vast datacentres required to train and run LLMs are often financed with debt, backed by expectations of future revenue. Bloomberg analysis estimated that datacentre credit deals reached $178.5bn in 2025 alone, with new and relatively inexperienced operators joining established Wall Street players in what it called a “gold rush”.
These facilities are heavily reliant on high-end chips from Nvidia and others, which have a limited useful life that may be shorter than the term of the loans used to pay for them. If the hardware stops earning before the debt that bought it is repaid, that mismatch adds another layer of risk.
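A minimal sketch of that mismatch, using an assumed chip lifespan and loan term rather than real figures: if the debt outlives the hardware, the final years of repayments must be covered by something other than the asset they bought.

```python
# Illustrative sketch only: the lifespan and loan term are assumptions,
# not figures for any actual datacentre financing deal.

chip_useful_life_years = 4   # assumed earning life of a GPU fleet
loan_term_years = 7          # assumed term of the debt that financed it

gap = loan_term_years - chip_useful_life_years
if gap > 0:
    print(f"{gap} years of repayments fall due after the hardware's "
          f"assumed earning life has ended.")
```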
Alongside heavy borrowing, the boom is increasingly characterised by complex financial structures. Analysts point to intricate, sometimes circular funding arrangements that echo earlier corporate bubbles, where optimistic narratives obscured fragile business models.
Those narratives are central to today’s AI story. Backers claim generative AI will eventually produce enough revenue to justify the huge sums invested. LLMs are promoted not just as powerful tools for analysing and summarising information but as technologies on the verge of “superintelligence”, in the words of OpenAI chief executive Sam Altman, or as systems that could even replace human friendships, as Meta chief Mark Zuckerberg has suggested.
In the meantime, AI systems are already replacing some workers. Brian Merchant, author of Blood in the Machine, has collected testimony from writers, coders and marketers who say they were laid off in favour of AI output.
Many of those workers question the quality and safety of the systems taking their place, describing bland or error-prone work and warning of the risks when sensitive tasks are handed over to automated tools.
Recent high-profile missteps have underlined those concerns. In the UK, the High Court issued a warning about lawyers’ use of AI after two cases in which completely fictitious case law was submitted. In Heber City, Utah, police officers learned they had to manually verify the output of a transcription tool used to summarise bodycam footage after it incorrectly recorded that an officer had turned into a frog – apparently misinterpreting audio from Disney’s The Princess and the Frog playing in the background.
These incidents do not capture the broader cost of what Merchant calls the “slop layer”: torrents of AI-generated content flowing through online platforms, making it more difficult to distinguish reliable information from fabrication or noise.
Doctorow argues that generative AI should be seen more modestly:
“AI isn’t the bow-wave of ‘impending superintelligence’. Nor is it going to deliver ‘humanlike intelligence’. It’s a grab-bag of useful (sometimes very useful) tools that can sometimes make workers’ lives better, when workers get to decide how and when they’re used.”
Viewed in this light, AI could still deliver meaningful productivity gains. The question is whether those gains will be large and rapid enough to justify today’s valuations and the scale of global investment. A shift in that perception would not just hurt Silicon Valley; it could shake financial markets worldwide.
The Bank for International Settlements (BIS) recently highlighted how concentrated equity markets have become. The so-called “Magnificent Seven” US tech giants now make up 35% of the S&P 500 index, up from 20% three years earlier. Any sharp reassessment of their prospects would have far-reaching effects.
A major share price correction would hit more than founders and venture capitalists. It would affect pension funds and retail investors in Europe and North America, Asian technology exporters that supply the sector, and the banks and private equity firms that financed the AI buildout.
In the UK, the Office for Budget Responsibility (OBR) recently modelled a “global correction” scenario in which UK and global stock markets fell by 35% over a year. It estimated that such a drop would reduce UK GDP by 0.6% and worsen the public finances by £16bn.
That would be less severe than the 2008 global financial crisis, when UK financial institutions were at the centre of the turmoil. But in an economy still struggling to regain momentum, it would be widely felt.
Any reckoning with the true costs and limits of generative AI is therefore unlikely to be contained within the tech sector. Even those who might welcome a humbling of big tech’s billionaire class would be exposed to the fallout from an AI-driven market correction.
