The Rise of the Anti-AI Movement

February 24, 2026 · Episode Links & Takeaways

MAIN STORY

The Rise of the Anti-AI Movement

The anti-AI movement — if you can call it that — is not one big organized thing. The reasons behind it are not monolithic, and it would be reasonable to ask how much of it is media narrative. But the underlying sentiment is real, it's growing, and it would be a mistake for the AI industry to ignore it. Today's episode breaks down the different categories of anti-AI sentiment, because the more precisely we identify what people's concerns actually are, the more we can do to address them. And despite the instinct to cringe at the TIME Magazine cover, there are actually reasons for optimism in what these critics are saying.

THE MOSAIC OF AI RESISTANCE

The Data: Americans Are Skeptical
Whatever reasons you want to ascribe to it, there is definitely a base level of skepticism and concern among Americans — and it's growing.
A recent YouGov study found 58% of Americans don't trust AI (vs. 35% who do), 45% think AI's economic effect will be mostly negative, and 63% think it will decrease available jobs. A Pew Research poll ranked the US dead last among countries surveyed for the ratio of citizens more excited vs. more concerned about AI — just 10% more excited vs. 50% more concerned. Hundreds of citizens showed up to a New Jersey planning meeting and got a data center project canceled; the video got 5 million views.

Category 1: The AI Safety Folks
Unlike some others in the anti-AI space, these folks largely agree with the accelerationists about how powerful AI is — they're just very concerned about the implications.
The X-risk and P(doom) crowd. Their voice was much louder right after ChatGPT launched. Many operate in good faith, which creates room for discussion even if you fundamentally disagree. But as AI curator Andrew Curran noted, the primary driver of anti-AI sentiment now is not X-risk — it's concerns about employment and the impact on art.

Category 2: The Capability Skeptics
This is the group with the most frustration — and the one that will cause the most economic harm to individuals. The capability skeptics reinforce people's natural disinclination to engage with AI, and those people will end up extremely far behind.
The "AI is just fancy autocomplete" crowd. Gary Marcus is the most prominent example. They update their "AI has plateaued" essays every time the media narrative shifts, despite each plateau being significantly more advanced than the last. A subset — the timeline skeptics — have a more reasonable point: that the actual diffusion pattern into the workplace will be much longer than we think.

Category 3: The AI Bubblers
It would be a completely coherent position to think AI is radically changing things and still not think the market is pricing the companies behind it correctly.
Not necessarily skeptical of capabilities — many believe in long-term disruption. They're skeptical of business models, valuations, and whether today's deal structures can be supported. Michael Burry of Big Short fame is the most prominent current example.

Category 4: The Artist Advocates
Some are frustrated that AI does what they used to do; others are concerned about copyright and IP; but a big group just has a general uneasiness about the fairness of things — and Supreme Court decisions about copyright aren't going to solve that.
The online artistic community has already dwindled in recent years with the slowdown of the Etsy economy. They've weathered weaponized copyright strikes and baseless lawsuits. Their view that AI companies stole the entire corpus of modern Western art reads as the last straw, and they're unlikely to be convinced otherwise.

Category 5: The Slop Secessionists
People don't dislike AI because they dislike slop — they dislike and consider AI output "slop" because they're already anti-AI. But it's enough of a cultural force to identify on its own terms.
See: the millions of YouTube commenters railing against TIME's 1776 Project by Darren Aronofsky for looking like AI slop.

Category 6: The Child Safety Advocates
This category of concern is invisible to many outside the circles where it matters, but it may sit at the very top of the list in certain groups and communities.
Particularly prominent in religious and conservative circles. Concerns about teen chatbot dependency, human relationship structures, AI's impact on child development. Austin-based pastor Michael Grayson is profiled in the TIME piece.

Category 7: The Data Center Deniers
The fact that data center community concerns haven't been addressed yet is a massive failure of both policy and imagination from the people building them.
Overlaps with environmental activists (water consumption, electricity) but is more local — people focused on electricity bills and community impact. Muskogee Nation activists Jordan Harmon and Mackenzie Roberts are concerned about data centers tangling with sovereign and Native land rights. Georgia Public Service Commissioner Alicia Johnson isn't against AI — she wants data center economics to be fair. There is absolutely no reason data centers couldn't be some of the most pro-community businesses wherever they operate.

Category 8: Job Displacement
By far the biggest and most broad-based category, with potentially the largest political footprint.
The concern is simple: if AI does everything better than us, what jobs are left? An interesting subcategory — visible in the TIME piece through nurse Hannah Drummond — are people with specific concerns about AI implementation in their workplace. Drummond helped nurses at 17 HCA hospital facilities win AI protections in their contract, including a provision requiring hospitals to give nurses a say in how new tech related to patient care is implemented. She doesn't want to ban AI from hospitals — she wants strict controls, pointing out that everything reaching patients in healthcare has gone through rigorous testing.

Category 9: The Big Tech Haters
Many are not convinced we're better off because of the internet, so they have a hard time accepting that all technology is inherently progress.
Multiple flavors: tech billionaires as partisan villains (aided by tech's shift toward Trump), general concerns about big tech power, and — perhaps the most explanatory — people who look back 20 years on social media and believe the world is worse for it. Matt Yglesias recently wrote that "all discussions about AI happen in the shadow of the tremendous and very sincere optimism about the cultural impact of social media that existed 15-25 years ago."

The Caustic Mix — and the Case for Optimism
When I see this TIME magazine piece, I do not see a bunch of opponents. I just see opportunity.
The current environment is shaped by: post-social-media disillusionment, legitimate concerns about big tech power, a "vibecession" where essentials cost more even as macro numbers improve, and political division that makes everything partisan. And then the leaders of the AI industry make it worse — Sam Altman compared human development to model training ("it takes a lot of energy to train a human"), prompting context engineer Marats and Coil to respond: "The CEO of the most visible AI company should not frame humans as inefficient compute units." But crucially, the political discourse around AI has not hardened — not from a partisan perspective, and not in terms of policy. The vast majority of people sit in the middle, trying to make sense of what this means for them, their families, and their communities. The more we address real issues, even incrementally, the more we might shift emerging resistance into cautious optimism.

ALSO REFERENCED

→ Nate Silver (X) "If AI produces unprecedented levels of disruption on timescales an order of magnitude faster than anything in human history, it's going to be an unprecedented political fight — and the timelines line up at the 2028 US election"
→ Joe Weisenthal (X) "I haven't heard anyone in the AI world credibly articulate why the average person should assume it will make their life better. Typically, they say the opposite."
→ Ethan Mollick: "When imagining backlash, people think of Dune's Butlerian Jihad or Luddites, but what those fights actually looked like were about regulation, redistribution, nationalization, unions, and safety nets"
→ Andrew Curran: "After three years, public anti-AI sentiment in the West is now at its highest point"
→ Liron Shapira's Doom Debates podcast and the AI Safety Memes account on X (accessible entry points to the safety community)