AI Populism Turns Violent
April 14, 2026 · Episode Links & Takeaways
MAIN STORY
AI Populism Turns Violent
This is the episode I hoped I'd never have to make, but one that for some time has felt increasingly inevitable. The multiple attacks on Sam Altman and his home over the weekend are not isolated events — they are the leading edge of a much larger and more structural problem. AI has become the perfect cauldron for a pipeline running from real economic pain to perceived inequality to political violence, and the discourse playing out in the wake of these attacks is almost entirely missing that bigger picture.
CNBC Suspect in attack at Sam Altman's house aimed to kill OpenAI CEO, warned of humanity's extinction from AI
NYTimes Man Held in Attack on OpenAI Chief's Home Had List of A.I. Leaders, Officials Say
NBC Man accused of throwing Molotov cocktail at Sam Altman's home opposed AI in writings, court documents say
CNN Suspect charged with attempted murder and attempted arson
The Verge Sam Altman reportedly targeted in second attack
SF Standard What we know about the suspects who allegedly shot at Sam Altman's home
DOJ Daniel Moreno-Gama faces multiple federal charges
POLITICAL VIOLENCE TURNS TO AI
What Happened
A Molotov cocktail, an anti-AI manifesto, and a list of names.
At 4am on Friday morning, Daniel Moreno-Gama threw a Molotov cocktail at Sam Altman's home. The gate was set ablaze; there were no injuries. Moreno-Gama was later arrested outside OpenAI headquarters threatening to burn the building down, in possession of an anti-AI manifesto, a jug of kerosene, and a lighter. When the FBI raided his Texas home Monday, they found a document that included the names and addresses of other AI executives, investors, and board members — prosecutors declined to name them. The manifesto included the line: "If I am going to advocate for others to kill and commit crimes, then I must lead by example." Moreno-Gama had been posting anti-AI content on Substack and Instagram since summer 2024, participated in the PauseAI Discord under the name "Butlerian Jihadist," and traveled from Texas specifically to carry out the attack. He now faces 11 charges including attempted murder, with a maximum sentence of life in prison, and prosecutors have said the attack could be treated as domestic terrorism. A second incident — Amanda Tom and Muhamad Tarik Hussein arrested for allegedly firing a gun at Altman's home on Sunday — appears unrelated. This was not one person having a bad day.
Sam Altman's Response
"Words have power. I think I have underestimated the power of narratives."
On Friday evening, Altman published a blog post he said he wasn't sure he'd actually share. He opens with a family photo, shared in hopes it might dissuade the next person from acting. He then lays out his core beliefs — that AI will be the most powerful tool for expanding human capability ever seen, that the fear and anxiety about AI are justified, that power cannot be too concentrated, that democratic systems must stay in control — and ends with the line that no one should have the ring. On the question of rhetoric: "While we have that debate, we should deescalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." He also admitted to mistakes in how he's handled things in the past and acknowledged that OpenAI is now a major platform that needs to operate more predictably. He briefly referenced an "incendiary article" a few days prior — later clarifying in a reply that this was a bad word choice he wished he hadn't used, and that it had been a tough day.
Sam Altman Blog Untitled blog post
Strand One: Blame the X-Risk Community
"The math starts generating conclusions that civilization-minded people should find alarming."
One major strand of discourse blamed the PauseAI and X-risk communities for effectively inciting violence through their rhetoric. Jordan Schachtel's Substack essay — "AI Doomers Built a Radical Ideology. Now Their Followers Are Acting On It" — makes the case that the movement has never resolved a core philosophical paradox: if the threat is truly existential, what moral framework permits you to only write op-eds? The larger the harm, the more extreme the justified response under utilitarian ethics. Marc Andreessen simply stated that proselytizers of an apocalypse cult bear moral and legal responsibility for violence committed by followers. Dean Ball argued that even when leading pause figures condemn violence, their condemnations feel more like a "this is not financial advice" disclaimer than a sincere desire to prevent it — and that the pattern of calling people like him murderers and traitors is unique to that community. AI safety researcher Andrew Critch went further, arguing that non-expert AI safety activism has now reached a point where the marginal value is low (experts are already talking about risks) and the marginal costs — violent outbreaks — are high. Meanwhile, prominent AI safety voices did unambiguously condemn the violence. Jeff Ladish: "If you would ever consider trying to hurt someone to slow AI progress, please do not. There are actual ways to help." David Krueger noted that terrorism against AI would backfire by discrediting the movement and justifying crackdowns on dissent. Nate Soares: "If you start killing in the name of a cause, you make leaders feel like cowards caving to terrorists if they support that cause." PauseAI confirmed Moreno-Gama was a peripheral Discord member with no organizational role and issued a full condemnation.
XLR8Harder on X made the harder point: the problem is that condemnations of violence from the X-risk community rest primarily on a cost-benefit argument that violence is ineffective — which is different from simply not wanting violence.
Jordan Schachtel (Substack) AI Doomers Built a Radical Ideology. Now Their Followers Are Acting On It.
Jeff Ladish (X) If you have any respect for me, I implore you not to resort to violence
David Krueger (X) I denounce violent attacks, they would backfire
Nate Soares (X) If you start killing in the name of a cause, you make leaders feel like cowards caving to terrorists if they support that cause
PauseAI (X) PauseAI unequivocally condemns the attack — and here are the facts about Moreno-Gama's involvement
XLR8Harder (X) Thinking violence is ineffective is different to not wanting violence
Strand Two: Blame the AI Industry
"Who turned up the temperature in the first place?"
Others pointed the finger at the AI industry's own rhetoric. Cree Beauvoir on X: "During the first attack we lectured the PauseAI community on how their words hold weight and lead to violence. But we didn't think to feel the weight of our own. Instead of showing people how AI will make life better, we've spent the last few years telling people the story of AI is one where AGI makes humans unnecessary." Bucco Capital noted that basically every 1:1 and office hour they had this week featured people — from ICs up to directors — asking if they'd be replaced by AI. "This is an existential issue and mistake for the labs. They'll regret the doomerism." Casey Newton at Platformer made the same argument: "Ultimately, the public's disdain for AI was not invented by journalists. It was co-created by the people building the systems, who have consistently told us that it is imminent and dangerous. That the public has now begun to take them at their word should not surprise them." There is also a media dimension: Roon noted that media coverage of the first attack likely contributed to the second by publishing Altman's full address. Mike Solana went further, arguing that sharing photos of Altman's home during ongoing coverage of attacks is not newsworthiness — it's contribution to violence.
Shakeel Hashim (X) Hard to reconcile de-escalating the narrative with Altman’s blog
Roon (X) Maybe the media shouldn’t include addresses when reporting on attacks
Mike Solana (X) Why are we sharing images of Altman’s home?
Cree Beauvoir (X) Instead of explaining how AI makes life better we’re joking about the permanent underclass
Bucco Capital (X) Every office hour this week: will I be replaced by AI?
Platformer Sam Altman’s second thoughts
The Bigger Picture: A Pipeline to Political Violence
The X-risk debate has very little to do with what's actually driving this.
All of these debates — who bears rhetorical responsibility, whether the New Yorker article mattered, whether PauseAI is to blame — are missing the forest for the trees. AI has become the focal point of a much larger trend: a pipeline from real economic pain to perceived inequality to political violence. The glee in the Instagram comments after the attack — "I hope that Molotov is okay" (4,631 likes), "Where can we support their bail fund?" (3,357 likes) — mirrors what we saw when the Titan submersible imploded, when Charlie Kirk was assassinated, and most obviously when Luigi Mangione became a folk hero after killing the UnitedHealthcare CEO. An Emerson poll conducted after that last crime found that 41% of 18-to-29-year-olds agreed it was somewhat or completely acceptable to kill a CEO. This is not an AI story. The counterterrorism think tank the Soufan Center published an assessment in November titled "As Data Centers Proliferate, Anti-AI Resistance Has the Potential to Turn Violent," documenting a spike in online threats since early 2024. Four days before the Altman attack, Indianapolis City Councilman Ron Gibson had 13 rounds fired at his front door, with a note reading "No data centers" left under his doormat. The violence is not coming from a coherent X-risk ideology. It's coming from a broader tinderbox.
Dean Ball (X) The characteristic of X-risk arguments that makes them prone to stirring violence is the certainty
Paula (X) I didn’t realize how bad it was until i saw this comment section on instagram
The Research on What's Actually Driving Radicalization
Perceived inequality drives radicalization more than actual inequality — and social media makes it worse.
The material basis for economic grievance is real: home prices up 60% since 2019, median households now spending 47.7% of income to own a median-priced home, the median age of first-time homebuyers up to 40, and the top 1% owning 31.7% of US wealth — the widest gap since the Federal Reserve began collecting data. But research consistently shows that perceived inequality drives radicalization more powerfully than actual inequality. The EU-funded DARE Project found that people who perceive themselves as unequal are more likely to become radicalized than those living in identical conditions who don't. A systematic review of 141 publications in the journal Terrorism and Political Violence found that perceived sociopolitical inequality matters significantly more than objective economic conditions. Bandura's moral disengagement theory identifies the mechanisms by which ordinary people disable their internal moral controls — including reframing harmful behavior as serving worthy purposes, dehumanization of targets, and moral justification (the healthcare system kills people daily, so this is justice). A 2025 study demonstrated a causal chain: visual wealth exposure on social media → upward social comparison → relative deprivation → hostility → aggressive behavior. And critically — a paper in the Journal of Conflict Resolution found that it is not static poverty or current inequality that motivates political violence, but projected economic decline. People anticipating downward mobility enter what researchers call a "domain of loss," becoming risk-seeking and susceptible to mobilization for violence. The threat is not that people are poor today. It's that they believe AI will make them poorer tomorrow — and the CEOs are the ones going on podcasts every week to tell them so.
Fortune A councilmember backed a data center project. Then 13 bullets and a ‘No Data Centers’ note hit his home
Soufan Center As Data Centers Proliferate, Anti-AI Resistance Has the Potential to Turn Violent
What Doesn't Work, and What Does
"You cannot kumbaya your way out of this."
A comprehensive 2023 Carnegie Endowment review by Rachel Kleinfeld found that reducing affective polarization does not reduce support for political violence. Lab interventions that successfully made partisans feel warmer toward each other had zero effect on attitudes toward violence. On UBI: researcher Jeremy Ginges found that when sacred values are at stake, material incentives to prevent violence can actually backfire. Handing someone a stipend while telling them their economic future is over doesn't counteract the decline psychology — it ratifies it. Yung Macro on X put it plainly: "The median left-leaning Westerner isn't angry at Elon Musk because he can buy a million times more groceries. It's the hierarchy and subordination they're uncomfortable with." UBI from the people automating your job is the most condensed possible version of the moral typecasting dynamic — positioning AI leaders as agents and the public as passive recipients, which research shows generates resentment. What the research does show works: first, political efficacy — when people perceive that democratic channels work, they're less likely to support violence. Second, addressing economic trajectory: policies that credibly improve people's economic outlook, whether job retraining with real placement, affordability measures, or portable benefits. Third, breaking the moral urgency framing without dismissing the underlying grievance.
Sage Journals Poor Prospects—Not Inequality—Motivate Political Violence
Carnegie Endowment Polarization, Democracy, and Political Violence in the United States: What the Research Says
Jeremy Ginges The Moral Logic of Political Violence
Yung Macro (X) UBI is obviously nowhere near the panacea many of you seem to think it is
Mark Coeckelbergh Artificial intelligence, the common good, and the democratic deficit in AI governance
The Three-Part Prescription
The people best positioned to address this are the ones who have least wanted to.
Turning back the tide of violent AI populism requires addressing three things simultaneously. First: restoring or creating credible democratic channels for AI governance. This will be genuinely uncomfortable for the industry — it may be that accepting meaningful regulation is the single most effective deescalation tool available. Sam Altman's blog post suggests he may be arriving at the same conclusion. His framing of "no one should have the ring" and "democratic systems must stay in control" is exactly right, but the problem many would point out is that this hasn't been the posture of the AI labs vis-a-vis the actual governance process. Second: addressing economic trajectory. Not UBI, but a genuine Marshall Plan equivalent for AI education, re-skilling, and entrepreneurial development. The nature of work will shift dramatically — that's undeniable — and we urgently need to support the transition. Third: breaking the moral urgency frame without dismissing the real grievances underneath. Mark Coeckelbergh's paper on democratic deficits in AI governance warns of the tendency to take a technocratic shortcut — producing a small elite that rules a massive angry citizenry who rightly complain they're not heard. Deescalation cannot just come from turning down the rhetoric. It requires addressing the actual ingredients: democratic deficit, economic trajectory, and moral urgency. The task is significant. But we don't have another choice.