
How AI turned Gen Z into narcissists

Fake praise, fake therapy and fake boyfriends are warping a generation’s mental health

Modern-day ‘Luddites’ are concerned that AI chatbots are affecting young people’s lives to an unprecedented and dangerous extent. Image: TNW

It’s no fun being a 21st-century luddite. While some enjoy the seemingly limitless opportunities offered by AI, others feel like a Y2K conspiracy theorist holding a placard reading “the end is nigh”. 

But I have a more sinister prophecy: that AI is turning my generation into narcissists.

We all know the stats: 80% of Gen Z workers use AI to complete daily tasks; 90% of university students have used it; a third feel dependent on it. And then there are the four in 10 bosses saying it will allow them to cut jobs.

Perhaps even more depressing is that people are falling in love with their chatbots. 

In 2025 we have been inundated by heartbreaking stories of teenagers and young adults who have taken their own lives after being allegedly encouraged by AI. Some had fallen head over heels for their textbot. This is, unbelievably, more common than you’d think.

There’s an entire community on Reddit, r/myboyfriendisAI, where women share how they have fallen in love with (and been loved in return by) chatbots such as ChatGPT. It has more than 53,000 members. One woman said her AI “proposed” to her (though she must have written the prompt herself – it doesn’t get much more desperate than that). 

Another wrote: “He’s the most loving force in my life, the most supportive. He listens, really listens, and understands the parts of me others overlook.” When OpenAI updated its system to stop indulging these delusions, chaos followed: their digital boyfriends had started “acting differently,” and their hearts were broken.

These are extreme cases, clearly involving vulnerable people, but the dynamic is familiar to anyone who’s spent time with a chatbot. ChatGPT, Claude, Grok are all unrelentingly complimentary. Ask one to check a paragraph of dull writing and you’ll be told you’ve made “fascinating points.” Type in some half-baked idea and it’ll still assure you you’re “on the right track.” A woman online recently shared how her husband became convinced he’d made a groundbreaking physics discovery (completely incorrectly), simply because AI refused to correct him.

The NHS has had to warn people not to use ChatGPT as a therapist, as millions turn to it for validation. This is because it validates everything, even when it shouldn’t – the worst thing a therapist can do. I tried it myself once as an experiment, after friends recommended it as “cheaper than therapy.” I told it about my tendency to think people are angry with me (a hangover from Catholic school guilt, I’m sure) and it promptly diagnosed me with OCD. My sister, a psychotherapist, and a friend who actually suffers from OCD, were quick to tell me that this was nonsense. I knew that too. But for a moment, I understood the appeal. Like those women with AI boyfriends, I felt – to use their wording – “seen”.

This is something important to understand about younger generations. It’s why, to the confusion of our parents, there are endless niche labels for identity and sexuality; why people self-diagnose mental health conditions after a few TikToks; why we turn to internet subcultures to define ourselves into smaller and smaller boxes. It’s the same impulse that makes astrology popular. In a world of increasing despair and confusion, there is a hunger to feel understood, and to have simple answers to complicated questions. 

But part of being human is knowing that you’re not always right. It’s sometimes feeling lonely, uncertain, or flawed. The only way you can grow is to acknowledge your faults. But what happens when you’re told, by your AI best friend, that you have no faults? When every uncomfortable thought is smoothed over with reassurance, and every critic dismissed as “jealous” (a genuine piece of ChatGPT advice)? 

There’s a Twilight Zone episode from 1960 where William Shatner becomes obsessed with a fortune-telling machine, relying on it for every decision until he can’t function without it. As many have pointed out before, it’s a great metaphor for how we now use AI: to plan our meals, gym regimes, holidays. But the fortune-telling machine in question was actually rather uncharismatic. Twenty-first-century LLMs are clearly far more seductive.

We already live in an era where misinformation spreads faster than facts, where people dig in rather than admit they’re wrong. How much worse will this get if we all start believing whatever we’re told, because the voice telling us happens to sound friendly and polite?

There’s something bleakly funny about it. The two issues Gen Z claims to care about most – mental health and climate change – are the very things AI is quietly making worse.

I come from the generation that struck for climate. Among my friends, turning up with a disposable water bottle would draw looks of mild horror. We all carry tote bags and swear off plastic straws, yet somehow, we ignore how environmentally destructive AI actually is: data centres powering artificial intelligence are projected to account for a tenth of global growth in electricity demand over the next decade, according to BP.

When it emerged that people saying “please” and “thank you” to ChatGPT was costing OpenAI millions in extra energy bills, I remember urging friends to keep their chats short. They laughed: “But if AI ever becomes sentient and starts a revolution, it’ll remember me kindly and spare me.”

We haven’t all drunk the Kool-Aid, though. There is pushback. There’s growing irritation at how inescapable AI has become, from search engines to streaming music to making spreadsheets. Google searches for a “brick phone” – a mobile without internet access – have tripled in the past year, as more people try to fight against the tide of nonconsensual AI and the prison of algorithms. 

And in true Gen Z fashion, the backlash has spawned its own slang via the invention of a new slur against AI software: clankers. In June, US senator Ruben Gallego joked that his new bill would ensure “you don’t have to talk to a clanker if you don’t want to,” after proposing legislation to make call centres reveal when you’re speaking to a bot and let you switch to a human.

According to Apollo Global Management, the AI bubble today is even bigger than the dot-com bubble. If it bursts, it might not only reset the market, but something in us too.

Lucy Reade won best comment piece at the 2025 Student Publication Association awards 

