It seems the markets, those who invest in them, and those who commentate on them, have finally woken up to what anyone who knows anything about the subject has been saying for a long time: AI is a bubble that is going to burst. That was the talk going around the meeting on AI at the Royal Society a few weeks back – perhaps because that gathering was populated by hard-headed sceptics of AI rather than the credulous boosterists and cheerleaders who generally represent the industry.
According to the Bank of England, the growth of stock values in AI has become unsustainable, and “the risk of a sharp market correction has increased”. The bubble has been inflated in part by AI and tech companies massively investing in one another (OpenAI and the chip manufacturer Nvidia are particularly culpable), creating a fragile web of interdependence which could crash the global economy when the bubble bursts.
Every new technology is susceptible to hype. But the degree to which AI companies have been able to hoodwink the press, governments and investors is so remarkable that it warrants examination. Maybe it has something to do with the eagerness to anoint the tech bros of Silicon Valley as far-sighted geniuses, when the truth is that much of what they claim – that, for example, AI works in the same way as the human mind and will therefore attain comparable intellectual capabilities simply by scaling it up – is known by experts such as cognitive scientists to be absurdly naive.
The AI and information technology industry has fostered a cult of personality that seems to leave the tech press starstruck and unquestioning of nonsensical claims – that AI will help us end all disease within a decade, for example (that’s OpenAI’s CEO Sam Altman) – while scientists face-palm.
I suspect the bubble has also been inflated by the way AI can be all too plausibly portrayed as a hinge point in history: the dawn of a new phase of human (or posthuman) existence. Generative large language models like ChatGPT often do such a good job of mimicking human interaction that we can hardly resist ascribing to the algorithms all kinds of powers they don’t really possess, such as a true understanding of concepts. We are too easily wowed by the claim of tech lords like Altman that artificial general intelligence – machines that can match or outdo us in any cognitive task – is just around the corner. (It isn’t.)
The fear now unsettling the market is that the returns on astronomical investment in AI are proving rather feeble. Clients are finding that AI doesn’t do much to boost profits. A recent report from the Massachusetts Institute of Technology found that 95% of businesses that have adopted AI have yet to see any return from it.
But what did they expect? Many didn’t really know. Even some of the truly useful (and highly specialised) applications have been wildly oversold. The protein-structure AI called AlphaFold, developed by Google DeepMind, won its architects a Nobel prize last year and is genuinely helpful in scientific research. But headlines claiming it would revolutionise drug discovery were ridiculed by those working in that business.
AI is already assisting scientific and medical research, as well as other tasks in data analysis and management, in many ways. But much of that is old-fashioned “machine learning”, in which an algorithm looks for patterns in data sets too large or complex for humans to parse. Yet it’s the generative models – LLMs and their kin, which create “novel” outputs such as text, images and music – that have driven much of the AI frenzy.
They have their uses, but they are beset by copyright and other ethical problems, and they are churning out literally mindless slop that is polluting the infosphere. Increasingly, generative AI risks being trained on data that other AI has produced – a feedback loop that has been shown to spiral rapidly into gibberish.
If I sound jaded, it’s because the AI bubble has demonstrated how disconnected the tech industry, garbed in the authoritative trappings of science, is from actual scientific expertise. Who wants to hear caveats when you have a product to sell? Yet when the bubble bursts, we will all pay the price.
Perhaps, however, a lesson will be learned. There’s a danger that the nascent quantum computing industry becomes caught up in another cycle of over-promising and hype, when the reality is that these futuristic-sounding machines, “faster than the best supercomputer”, may prove genuinely valuable only for a relatively narrow range of problems. Don’t get fooled again.