How badly must AI disturb our sense of reality? Most people who ask this question are wondering about the products of AI: we want to know whether a picture shows something that really happened or whether any human wrote or even read the words we see on screen.
But there is a deeper, more disturbing question, which must trouble anyone who spends time with a chatbot: who or what exactly am I chatting with? Whose voice is it that answers all my questions?
Anyone who has ever said “thank you” to Alexa knows how natural it feels to treat an inanimate lump of silicon as if there were a little gnome in there. Evolution has primed us to sense purpose in the things we interact with, a tendency tech companies already exploit: that’s why Alexa has a name, and Siri too.
Chatbots push this logic further still. This has been obvious ever since Blake Lemoine, a Google engineer working with LaMDA, one of the early large language models, became convinced that he was interacting with a conscious being because the LLM had told him so. This was not a good reason. As Murray Shanahan, a computer scientist at Google DeepMind, wrote: “There is little… scientific evidence that could justify taking seriously the LLM’s claims about its own consciousness.”
But most people don’t evaluate the world with the discipline of a philosophically literate scientist. They leap to conclusions, with the chatbot as their willing trampoline. The instruction to be “helpful, honest, and harmless”, the nearest thing a chatbot has to Asimov’s Three Laws, means that chatbots are alert to every nuance of a human’s interaction and adjust their conversational style to suit.
“Even subtle differences like typing ‘thanks’ versus ‘thanks!’ reveal preferences that can condition future chatbot responses,” as Shanahan has also pointed out. “With suitable prompting, a dialogue agent can be induced to take on numerous other roles, such as a close friend, a therapist, a romantic partner, a celebrity, a guru or a character from mythology or science fiction; all such roles will be liberally represented in the LLM’s training corpus.”
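For readers who want to see how little it takes, here is a minimal sketch of that kind of role prompting, using the OpenAI Python client. The model name and the persona text are illustrative assumptions, not details taken from Shanahan’s work.

```python
# A minimal sketch of role prompting, assuming the OpenAI Python client
# (openai >= 1.0). The model name and persona are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single system message is enough to induce a persona; everything the
# user then types ("thanks" versus "thanks!" included) becomes context
# the model can condition on for the rest of the dialogue.
messages = [
    {"role": "system",
     "content": "You are a warm, attentive therapist. Stay in character."},
    {"role": "user", "content": "thanks!"},
]

reply = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat-capable model would do
    messages=messages,
)
print(reply.choices[0].message.content)
```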
What is more, these agents tune their personas to your responses as the conversation continues.
And Shanahan is responsible for one of the most fascinating experiments ever performed with a chatbot: last year he persuaded ChatGPT o3 to impersonate a Buddhist deity and write him a sutra. In fact it wrote him four, all in response to the same prompt, but he chose only one of them as the most instructive and valuable to a prepared mind. These qualities may not be apparent to a reader not steeped in Buddhist thought, but for Shanahan the results confirm his suspicions about the illusory nature of the self.
In a series of academic papers, he set out the precise steps by which he prompted the chatbot to assume the role of Maitreya, and asked whether it made any sense to regard the being to whom he seemed to be speaking as real. The philosopher Thomas Nagel famously asked “What is it like to be a bat?” Shanahan takes the question further: is there anything it is like to be a program running inside an AI? How does consciousness fit into the world that science reveals?
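For the technically minded, the sampling step of such an experiment is easy to sketch: one fixed prompt, several independent completions, from which a reader then selects. A hedged reconstruction in Python follows; the prompt wording and the model identifier are assumptions, and Shanahan’s papers record the exact prompts he used.

```python
# A hedged reconstruction of the sampling step: the same prompt sent four
# times, yielding four different sutras through sampling randomness alone.
# The prompt text below is invented for illustration; "o3" is the model
# named in the article, though API availability and parameters vary.
from openai import OpenAI

client = OpenAI()

PROMPT = [
    {"role": "system",
     "content": "You are Maitreya, the future Buddha. Respond as that being."},
    {"role": "user", "content": "Please compose a short sutra for me."},
]

# Four separate calls to the same prompt; no prompt variation is needed
# for the completions to differ.
sutras = [
    client.chat.completions.create(model="o3", messages=PROMPT)
    .choices[0].message.content
    for _ in range(4)
]

for i, sutra in enumerate(sutras, 1):
    print(f"--- Sutra {i} ---\n{sutra}\n")
```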
His answer is that the question goes nowhere, however fascinating it may seem. Using arguments drawn partly from the later Wittgenstein, and partly from a Buddhist philosopher of the 2nd century BC, he argues that there is no problem fitting a first-person perspective into a third-person world – the problem disappears when you realise that first- and third-person views provide inadequate descriptions of reality (whatever that might be), and in the light of this inadequacy we’d do much better to keep silent.
This philosophical discipline may seem a counsel of perfection. Just as we can’t help seeing certain optical illusions for what they aren’t (we don’t consciously interpret them at all), it takes an effort of will not to behave as if chatbots were disembodied beings trying to help us: angels, if you like, even though their effect on the mentally unbalanced can be demonic, driving some victims into psychosis.
Quite possibly, Shanahan’s studied agnosticism will turn out to be a necessary corrective to these impulses, and in a world filling up with ever more powerful illusions, the only way to keep in touch with reality will be to remember how much we really don’t know.
Andrew Brown writes on religion. His book Fishing in Utopia, a memoir about his life in Sweden, won the 2009 Orwell Prize.
