Time was, an assassination attempt on one of the world’s most high-profile business leaders would have elicited big headlines, handwringing and a spate of op-eds wondering “what has become of us?” Now, though, you could be forgiven for having completely missed the news that OpenAI CEO Sam Altman has been the subject of what look like two separate attempts on his life within a few days. What on earth is going on?
On 10 April, a 20-year-old man threw a Molotov cocktail at Altman’s home in San Francisco before threatening to set fire to OpenAI’s HQ (and kill everyone inside). Two days later, a man and woman in their 20s fired shots at Altman’s house from a car. In a blogpost published the day after the Molotov attack, the OpenAI CEO wrote that people on both sides of the AI debate “should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally”.
The degree to which “I Hate AI” has become a load-bearing part of many people’s personalities in 2026 can’t be overstated. Three years of seemingly incessant coverage of the technology has bred not just contempt but abhorrence, and, for a certain portion of the population, even admitting to having once used ChatGPT to find out how to fix the dryer marks you down as a traitor to your species. We even have new words with which to belittle the tech’s supporters, with AI models and their proponents dismissed as “clankers”.
One can, though, understand why. AI is a technology which (as we’ve been told daily since 2023) has the power to completely reshape the world. Per current trajectories, it seems that that reshaping is mainly going to consist in making us all unemployed, using up all of the energy in the world and entirely destroying the epistemological foundation on which modern society’s functioning depends.
That’s if it doesn’t attain superintelligence first, of course, and decide to enslave humanity to fuel its dreams of paperclip construction, or just mulch us for fun.
While those scenarios are obviously hyperbolic, they have consistently been promoted by Altman himself. Writing back in 2015, he said: “I think AI will probably most likely lead to the end of the world, but in the meantime there’ll be great companies created with serious machine learning.”
In 2023, Altman signed a statement from the nonprofit Center for AI Safety, which said that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
On a podcast last year, he continued to express fears about what, exactly, he was building: “There are these moments in the history of science where you have a group of scientists look at their creation and just say, you know, what have we done?” he said. “Maybe it’s great, maybe it’s bad, but what have we done?”
Altman’s not the only CEO to have made such statements. Elon Musk, Anthropic’s Dario Amodei and even the more reserved Demis Hassabis of Google DeepMind have all at times expressed these sorts of existential concerns about what their tech might lead to.
There was, particularly in the early days of the AI boom, a good business reason for this. If you’re telling governments that they need to worry about what your tech might do in the future if it becomes sentient, you’re also implicitly telling them not to worry so much about regulating the very real harms it might be causing right now.
What that has also done, though, is cement in the minds of billions of people the idea that Altman et al are building tech that might kill or enslave us all; on that basis, it’s not hard to imagine that some individuals might choose to take steps to prevent those doomsday scenarios from occurring.
The man accused of attempting to bomb Altman’s house is one Daniel Moreno-Gama, whose Discord username, “Butlerian Jihadist”, is a reference to the war against intelligent machines in the Dune novels. Moreno-Gama’s Substack posts predicted the extinction of humanity by AI, and when he was arrested following the attempted bombing he was carrying a “manifesto” that detailed his anti-AI beliefs and listed the names of other AI executives.
We are in the grip of an acknowledged worldwide mental health crisis. The economic precarity faced by millions, if not billions, across the west feels increasingly existential. Governments continue to flail impotently in the face of the increasingly ineluctable climate emergency.
All the while, the world’s richest and most powerful smilingly seek to sell us a vision of a world in which machines do all the jobs, create all the value and attract all the investment. Is it any wonder people are starting to push back?
An industry which has spent years telling the public that its products may destroy jobs, truth and perhaps humanity itself shouldn’t be wholly surprised when some people begin to treat its leaders less as entrepreneurs and more as existential threats.
Killing Sam Altman, of course, would not only be morally wrong – it also wouldn’t make much difference to the spread of AI. Unless the tech companies and the governments that have seemingly bought into what they are selling start doing a better job of persuading us why the technology is going to do anything other than immiserate us or murder us, though, you can expect bulletproof gilets to be the must-have accessory on the streets of San Francisco for a few years to come.
