In a November 2023 essay, the billionaire technologist Bill Gates predicted a wave of AI-fuelled change that would rip up the rules of how we use computers within five years – radically transforming what digital technologies can do for us.
“You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do,” he enthused, saying every web user will be able to have a truly useful AI assistant at their beck and call.
He wasn’t referring to basic chatbots like Apple’s Siri. Or even ChatGPT, the general-purpose AI tool that went viral a year earlier – racking up hundreds of millions of regular users thanks to its ability to churn out something (anything!) in response to a user prompt.
Gates was anticipating an even more capable type of AI tool. One example he offered was the idea of a “travel bot” that would not only be able to pick out the perfect holiday for you (location, accommodation, flights, reservations) – but go ahead and book everything, too. No (human) travel agent necessary.
Since Gates penned the essay, there’s been no end of industry hype about “agentic AI” – fixating on this idea of contextually smarter software that can make decisions and act on them – with scores of startups founded and funded off the back of a fresh VC frenzy. Meanwhile most of us are still using our devices and apps in much the same way as when he sat down to type.
Still, technology hype cycles typically run ahead of reality in the short term – so it may take a while before AI-fuelled automation does anything that genuinely impresses.
Agentic AI is “just an extension of traditional automation”, says Forrester Research VP and principal analyst Craig Le Clair, who has written a book about the march of automation into our work lives. The difference is that a “level of AI competence” has been added to the mix, he explains.
Thing is, AI is often incompetent by anyone’s standards. Generative AI tools like ChatGPT frequently get things wrong because their outputs are only as reliable as the data they’re trained on. And there’s no such thing as a perfect dataset – unless you’re talking about an exceptionally narrow task.
This is why, much like a daydreaming kid, cutting-edge AI can simply make stuff up – a failure mode researchers call “hallucination”. The technology is ‘all too human’ in that sense.
So relying on AI agents to think for us sounds like an inherently risky proposition.
“That is something we’re all dealing with,” admits Le Clair. “Trying to figure out how to make this work safely, with the right level of trust, with the right level of governance – there’s this whole area of thinking and change in psychology around automation.”
“Trust” problems he flags include thorny issues like explainability (how did the AI arrive at this result?); data security and privacy (how much information are we comfortable feeding in, especially if it could leak out in unexpected ways?); and liability and safety – or “how do you build guardrails that prevent nefarious outputs?” as Le Clair puts it.
There is also the sticky (legal) issue of who’s to blame if something goes wrong. It’s all too easy to imagine important decisions going horribly awry because we outsourced them to AI. Just consider some of the monumental screw-ups involving far less fancy flavours of IT.
GenAI can appear responsive and capable, but what comes out of these tools is unpredictable – so the risk is AI-accelerated errors growing like weeds.
Earlier this year Bret Taylor, the chairman of ChatGPT’s developer OpenAI, offered up an example of just how messy things can get – recounting how, in response to an aggrieved customer, an AI agent completely invented a refund clause. This sort of thing explains why businesses are looking at how to define rails for AI agents to run in more predictable ways. But constraints can shrink utility, too.
Taylor is also the founder of Sierra, a startup building AI agents for customer service – so of course he remains bullish – urging other entrepreneurs to keep calm and “narrow the domain that you’re working on so you can take these intractable problems and make them solvable”.
But a technology designed to solve smaller problems requires a bit of creative thinking to link it to the grander visions of AI agents.
The suggestion is that we’ll need a whole army of narrowly specialised AI agents – combined with another layer of AI to manage the inter-agent chatter. Pieces of this interoperability architecture are starting to be tackled, but ecosystem-building takes faith and time.
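To make that orchestration idea concrete, here’s a minimal sketch in Python of how a coordinating layer might route requests to narrowly specialised agents. Everything in it – the `Agent` and `Orchestrator` names, the keyword-based routing, the `handle` stub – is invented for illustration; it’s a toy pattern, not a description of any real product or protocol.

```python
# A toy "orchestrator" pattern: one coordinating layer routes a user
# request to narrowly specialised agents. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    keywords: tuple[str, ...]

    def handle(self, request: str) -> str:
        # A real agent would call a model or an external API here;
        # this stub just reports which specialist was picked.
        return f"[{self.name}] handling: {request!r}"


class Orchestrator:
    """Routes each request to the first specialist whose keywords match."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def route(self, request: str) -> str:
        lowered = request.lower()
        for agent in self.agents:
            if any(kw in lowered for kw in agent.keywords):
                return agent.handle(request)
        # No specialist matched – a sensible default is human escalation.
        return "No specialist available - escalating to a human."


orchestrator = Orchestrator([
    Agent("flights-agent", ("flight", "airline")),
    Agent("hotels-agent", ("hotel", "accommodation")),
    Agent("refunds-agent", ("refund", "cancel")),
])

print(orchestrator.route("Find me a flight to Barcelona"))
print(orchestrator.route("Cancel my booking and refund me"))
```

In a real deployment, the routing decision would itself be made by an AI model rather than keyword matching – which is exactly where the trust and predictability questions above come back in.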
So far, says Le Clair, essentially nothing deployed lives up to truly agentic AI.
Forrester’s report highlights examples of businesses using AI-powered bots for tasks like data analysis or smarter customer support – and even these narrowly focused, “agent-like” AIs could deliver short-term efficiency gains.
When it comes to individual tech users, the AI agent story hasn’t really got going yet.
Attention remains focused on GenAI tools like ChatGPT – although their evolving capabilities suggest they, too, are becoming agentish. While ChatGPT started out with text generation, it has since added image and video, and can serve up software code at the press of a button, too.
Will we get pulled deeper – handing over more privacy and agency in exchange for “deeply personalised” AI assistance as a central crutch of our lives? That feels like an open question – and, for now, a theoretical one.
If this kind of AI proves effective, how deeply it becomes part of everyday life may depend on its purpose. Practical uses like health, therapy or productivity might exert more pull than fleeting ones like entertainment, shopping or travel – even though people often engage with AI most eagerly for fun, like turning selfies into Studio Ghibli characters or AI-generated action figures.
Granting actual decision-making power to a piece of software might feel too uncomfortable for many of us unless the task at hand really is trivial.
One thing is certain: “online privacy and security will become even more urgent than they already are,” as Gates’s essay acknowledges. Maybe it’s time we became more wary of AI-driven convenience and started recognising the hidden headaches it can bring.
Natasha Lomas is a tech journalist who lives in Barcelona