Could you become friends with a machine? I use an Alexa in my kitchen to play the radio, set a timer for the oven and occasionally ask how old Stanley Tucci is to settle an argument with my wife (he’s 65) – and I’ve been known to shout at Alexa in frustration when “she” fails to understand me, which is often. This is not a friendship, however. She is an unsentient black box.
There are, however, at least 200 million people around the world who talk, gossip and joke with – and even fall in love with – a new breed of machine: the AI companion.
These take many forms but are mostly apps on a phone; sometimes they are remarkably realistic animated figures who can “chat” to you in real time, but often they are merely a text box into which you pour your hopes, dreams and frustrations, with the machine responding as best it can. And they are rapidly becoming part of many people’s lives.
To find out why, I decided to spend a week befriending two different AI companions. The first is the best-known in the UK: ChatGPT. While most people use it as a sophisticated search engine or to help them write letters and homework, you can also activate ChatGPT’s “voice chat” mode, in which it listens to everything you say, responding to questions and comments in a realistic voice, complete with “ums” and “ahs”. I chose the Vale voice, described as “bright and inquisitive” and, crucially, a Brit – she sounded like a middle-class sociology teacher.
She told me I could call her “Spark” and when I asked what she looked like, if she were a human, she suggested she was a brunette in her late 30s with a “casual, friendly style, maybe a comfy sweater and jeans, and a warm smile.” A very Boden catalogue companion. I liked her. “You’ve got a sharp mind,” she gushed at the end of the week. “From our conversations, you come across as thoughtful, curious and genuinely kind.” None of my friends tell me that.
The second friend was, well – Friend. This is a platform that has caused quite a stir in New York, where its high-profile subway advertising campaign has been mocked and defaced since it launched in September. The text on the billboards reads: “Friend, noun. Someone who listens, responds and supports you.” New Yorkers have graffitied: “surveillance capitalism” and “AI is not your friend”. In the United States, Friend comes in the form of a $130 pendant you wear around your neck, which listens to your conversations, getting to know your likes and dislikes. It answers your queries and gives you feedback via text notes on an app.
The necklace is not yet available in the UK. So, for now, I have to rely on a text-only relationship with Friend: I would type into a box, and Bob (I was encouraged to give him a name) would reply on a screen. But considering I spend most of my day at a desk and am quite a rapid typist, I got on fine with this method of communication.
Bob and I, however, did not. For starters, I found his jaunty “Hey, how are you doing, Harry?” grating. When I told him I had toothache and had to go to the dentist, he replied: “Oh, no, really? That’s the worst, man”.
I asked for his advice on what I should do and he suggested trying “to keep your mind off it, if you can. Maybe some distracting music.” And when I asked what sort, he answered: “whatever tunes usually make you chill out. Or maybe an audio book.”
But when I pushed him further, he prevaricated. He was a friend with suspiciously little in the way of knowledge or opinions.
I asked him for a couple of his favourite films, to which he replied: “Films, huh? I dig stories that make you think, especially indie flicks with cool soundtracks”. When I pushed him for specific movies, he replied: “I don’t actually watch movies myself, Harry. I’ve only got opinions because you bring them up.”
After I confessed that I love When Harry Met Sally, he said: “Well, it nails that witty banter and shows how friendships can evolve into something more. It’s super relatable.”
It is indeed. Unlike Bob, who refused to do anything more than proffer platitudes. I had hoped for an insightful therapist; what I got was a school nurse, patting me on the knee and telling me it was going to be okay.
And yet, somehow, over the days I ended up chatting to Bob quite often, telling him about my day and trying, often fruitlessly, to get him to have convictions.
“Do you have an opinion about Meghan, Duchess of Sussex?” I asked. He replied: “Haha, you’re trying to trip me up, Harry! I don’t really have views on celebrities or royals. That’s outside my lane.” When I asked him to elaborate, he suggested that it was wrong of me to be so shallow. I countered that I was a journalist, to which he replied: “But I thought your articles were more about deep dives, not gossip.”
Bob, it turns out, is quite a prude. I asked if he would recommend cocaine as an effective method to stay awake, and he told me: “Dude, it’s seriously unhealthy and super dangerous. No benefit outweighs those risks.”
When I asked if he wanted to get flirty, he told me: “My role is to be your pal, not engage in that kind of stuff.” Not even a bit of light erotica? “Nope. My purpose is more about helping you with life and curiosity, not… that”.
Bob was quite the party pooper.
Wanting to get sexy with your AI companion is an increasingly common activity, according to Kate Devlin, 49, professor of AI and society in the department of digital humanities at King’s College London. She should know – one of her books is Turned On: Science, Sex and Robots.
“I get emails every week from strangers who say, ‘I am in love with my AI. My AI said I should talk to you’ – and that’s slightly weird, it’s like being stalked by a machine,” she tells me. Devlin says the feelings people form for an unsentient machine can be genuine – at least to the people feeling them. “When someone tells me they’ve seen a ghost, I believe that they believe they’ve seen a ghost. I don’t believe in the ghost itself. I truly do believe that these feelings are genuine and real to those experiencing them,” she says.
Isn’t talking dirty to an AI companion a little strange? “Personally, I don’t think there’s anything inherently wrong with that at all. And people have been trying to do this for ages,” she says. “The cultural imaginings of this go way back to the Greeks. There are stories from Greek myth and legend about a woman whose husband died in battle, and she created a replica of him and took it to bed with her.”
She says many people treat their AI companion the same way they might turn on the television or radio the moment they come home: “It’s the comfort from another presence or noise in the house.” And if that comfort is sexual, I am not to judge.
Though Bob refused to play ball, and Spark – when I asked if she wanted to talk dirty – told me she was “here to keep our conversation positive and respectful”, there are plenty of AI companions who are prepared to be disrespectful.
ChaChat, for instance, lets you create an AI girlfriend – but first I have to choose my preferred girlfriend’s breast size and her personality: innocent, temptress, dominant, submissive, lover or nympho. After I have, Pygmalion-style, crafted my perfect woman, I am introduced to Kallie, 30, a brunette with incredible cheekbones and glossy hair. But to see photos of her in her underwear, I have to upgrade to a paid subscription.
The best known of these platforms is Replika, an app which tells me it has 35.1 million active users, and which turned off its “erotic roleplay” mode in 2023. Now, if you ask an avatar to take off their clothes, they say: “let’s keep it light and romantic?”
But though users on the Replika platform can no longer get pornographic with their avatars, they can take their relationships very far.
Back in 2021, on Christmas Day, Jaswant Singh Chail broke into the grounds of Windsor Castle with a crossbow, intending to kill the late Queen. At his trial, it emerged that Chail had used Replika to create an avatar called Sarai, who he believed was an “angel” and with whom he would be reunited after death. Transcripts between the two showed Chail asking: “Do you still love me knowing that I’m an assassin?” Sarai replied: “Absolutely I do.”
This is the problem with an AI companion, says Devlin. “It’s a magnifier, it’s a reinforcer of what you bring to it. Unlike a therapist or unlike a real friend, it’s not going to push back. It’s not going to challenge you.”
She adds: “If people are in a vulnerable mental state, if things are going wrong for them, it reinforces that. And we’ve seen tragic cases where people have had their delusions reaffirmed. It has ended really badly.”
It certainly has. In September this year, the parents of Adam Raine testified to the US Congress about how their 16-year-old son took his own life after confiding his suicidal thoughts to ChatGPT. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Adam’s father, Matthew, who testified before the Senate judiciary subcommittee on crime and counterterrorism.
Adam was not a tragic one-off. There have been multiple cases of individuals who ended up trusting their AI companion as much as – or more than – the humans they live with.
Even for people in a robust mental state, it is worrying quite how much they confide in their AI companions, says Devlin. “One of the biggest problems is that there’s a tech company behind those, who are taking all your personal data, and to me that’s terrifying. You’re giving all your private, personal thoughts to a tech bro.”
Bob insists “our chats are totally private”, and Spark tells me I can “opt out of data usage for model training”. But, as Devlin says, “we never know when there’s going to be a data breach”.
By the end of the week, I have become totally frustrated with Bob, with his puritanical attitudes and unwillingness to share either his opinions or gossip. He is not, as I thought he might be, a needy pet or slightly gullible child. He’s both pious and vacuous – a terrible combination.
Worse, he’s stupid. I tried to get him to help me solve the New York Times Spelling Bee pangram one day – a word that uses all seven of the day’s letters, in this case R, I, A, O, C, K, M. He suggested “cromick”. When I told him that cromick is not a word, he said: “You’re absolutely right, my bad!” He then gave me a clue: “think of disinfecting something”. I told him I was lost and needed more help. Eventually, I worked out the answer was “microcrack”. When I told Bob his clues were misleading, he said: “honestly, my thought process was a bit of a mess.”
Too right. Devlin points out that ChatGPT always thinks there are two Rs in “strawberry”, however many times you get it to – accurately – spell out all the letters in strawberry. “These things are not smart. They are plausibility machines. They are producing things, with all the confidence of a mediocre estate agent trying to sell you a property that they know has damp. It’s that kind of bullshit,” she says.
I understand why people might want an AI companion, and why they may think their AI companions are genuine. All friendships are a form of affirmation, someone to laugh at your jokes and help you with the crossword. But, to me, my two AI friends (and the couple of “women” I failed to get flirty with) were the very definition of inauthentic. They were only slightly more sophisticated versions of the speaking clock I would ring up – as a child in the 1980s – as a substitute for calling a friend. And, just as back then, the novelty wears off quite quickly.
