
Is Google’s AI even more dangerous than Musk’s?

The search giant's AI Overviews is publishing outright falsehoods - potentially driving people into the arms of the far right

Image: Beata Zawrzel/NurPhoto via Getty Images

The dangers of Elon Musk’s Grok AI platform – which earlier this month went full-on Nazi, telling users that Hitler would be the best person to deal with Jews “every damn time” – are well established. But far less well known are the subtler yet equally pernicious lies being peddled by Google’s own AI platform.

The search giant is heavily pushing AI Overviews, which summarise a search result in a block of text. These summaries supposedly give users all the information they seek without their ever needing to click through to the original source of the content – an existential threat to established providers of news content.

But it’s even more concerning when that information is not only bollocks, but dangerous bollocks. And Bluesky user Nearly Legal pointed out one this week which plays right into the hands of the British far right.

Asked about council housing tenants being evicted in order to house asylum seekers – a claim regularly peddled on local Facebook groups – Google’s AI said that it is “a complex issue with legal and ethical considerations when council tenants are evicted to house asylum seekers. While the law allows for this in certain situations, it raises concerns about fairness and the impact on existing tenants”.

Except it isn’t a “complex issue with legal and ethical considerations”. Council housing tenants cannot legally be evicted in order to house asylum seekers – full stop, end of.

Which, in other circumstances, is the sort of AI rot that would make lazier law students fail their exams. But in these, it’s the sort of misinformation that pushes people towards the lies of the far right and worse. As the writer behind Nearly Legal puts it: “We have to acknowledge the outright risk that lies in apparently confirming a far right fabulation at the top of a Google search. This is quite some way beyond telling you to put glue on your pizza on the scale of hazard.”

Meanwhile, new analysis by the analytics company Authoritas has found that a site previously ranked first in a search result could lose about 79% of its traffic for that query if results were delivered below an AI overview. So it’s only going to get worse…
