What People Really Want From AI

March 19, 2026 · Episode Links & Takeaways

HEADLINES

AI Brings Val Kilmer Back for One Last Role

AI has brought Val Kilmer back from the grave to star in "As Deep as the Grave" — and it's much harder to paint this with the cynical brush many will reach for. Kilmer was cast in 2020 but was too ill to shoot by the time production began. The entire performance was created using AI tools with full permission from his estate and the active support of his children. Director Coerte Voorhees: "His family kept saying how important they thought this movie was and that Val really wanted to be a part of this." The film uses Kilmer's actual voice — damaged by tracheal surgery — which happened to suit his character, a priest suffering from tuberculosis. This isn't Kilmer's first use of AI either: he previously used it to recreate his voice for his Iceman cameo in Top Gun: Maverick, saying at the time it was "an incredibly special gift."

Microsoft Restructures Copilot

Microsoft is combining its consumer and commercial Copilot teams under a single leader — Jacob Andreou, now EVP of Copilot reporting directly to Satya Nadella — while freeing Mustafa Suleyman to focus entirely on model training and superintelligence. Suleyman's take: "Most of the future value is going to accrue to the model layer." Tom Warren at The Verge put it plainly: it's hard not to read this as an admission that separating consumer and commercial Copilot has failed. That said, Microsoft is far from alone — Google, Meta, Alibaba, and OpenAI have all gone through similar AI org resets in the past year. Given how many enterprise users are effectively forced into Copilot, this restructuring matters.

Claude Code and Cowork Are Converging

Just one day after launching Cowork Dispatch, Anthropic updated it to support Claude Code sessions as well. The line between Cowork and Claude Code is getting significantly blurrier — they now run on the same underlying primitives. The open question is how much Anthropic will actively consolidate them versus letting users route around the division with their own setups, something power users are already doing. There's a strong argument that Claude Cowork — essentially Claude Code for all other knowledge workers — may be the most important product line in Anthropic's near-term future.

MAIN STORY

What People Really Want From AI

As the conversation around AI intensifies, there's a temptation to reduce people to simple, binary attitudes — for or against — when for the vast majority, the reality is complicated and nuanced. Anthropic's new study of nearly 81,000 people across 159 countries and 70 languages is a direct antidote to that flattening. The headline finding: hope and alarm didn't divide people into camps so much as coexist as tensions within each person. What people want from AI and what they fear from it turn out to be tightly bound.

ANTHROPOLOGY OF AI USERS

What People Hope For
"Using AI to automate emails became a desire to spend more time with family."
The top category of hope was professional excellence at 18.8%, but when Anthropic probed the underlying desire behind productivity goals, the personal quickly surfaced. Personal transformation was number two at 13.7%, life management at 13.5%, and time freedom at 11.1%. A white-collar worker in Colombia: "With AI, I can be more efficient at work. Last Tuesday, it allowed me to cook with my mother instead of finishing tasks." A freelancer in Japan: "I want to use less brain power on client problems and have more time to read more books." Anthropic identified three meta-clusters: roughly a third of people want AI to make room for life (more time, money, mental bandwidth); about a quarter want AI to help them do better, more fulfilling work; and about a fifth want to become someone better through learning or healing. The nine clusters, they write, are "underpinned by recognizably human desires."

What People Fear
"The threat isn't that AI becomes too powerful — it's that it becomes too timid."
Concerns were more varied and more concrete than hopes. Unreliability topped the list at 26.7%, followed by jobs and economy at 22.3%, loss of autonomy at 21.9%, and cognitive atrophy at 16.3%. Notably absent from the top concerns: most of what dominates media coverage. Copyright concerns represented only 4% of responses, environmental costs 4%, and harm to children 3%. Existential risk came in at 6.7%. One underrepresented concern that stands out: over-restriction — excessive safety measures and paternalistic filtering blocking legitimate use. One US respondent: "The threat isn't that AI becomes too powerful, it's that AI becomes too timid, too smooth, optimized for avoiding discomfort." Eleven percent of people expressed no concern at all — and contrary to expectation, they weren't accelerationists. They simply viewed AI as a neutral tool like electricity, and were confident that problems could be solved through adaptation.

The Light and Shade
"What people want from AI and what they fear are tightly bound."
Anthropic identified five recurring tensions where benefits and harms were directly coupled, among them: the tension between learning and cognitive dependency. Between finding emotional solace in AI and worrying it replaces human connection. Between saving time and the treadmill speeding up on other tasks. Between dreams of economic freedom and fears of displacement. Crucially, in most of these tensions the benefit side is more grounded in actual experience, while the harm side leans hypothetical. 33% mentioned learning benefits versus 17% who worried about cognitive atrophy — and 91% of those who cited learning benefits had actually experienced them, versus 46% of those fearing atrophy who had seen it firsthand. The strongest co-occurrence of light and shade in the same person was around emotional support — triple the baseline rate. A South Korean respondent: "My relationship with a friend became strained, and I talked more with Claude then because Claude understood my thoughts. But it was a stupid choice. I should have talked with that friend, not Claude."

Who Is Actually Benefiting Economically
Freelance creatives are the most exposed middle — AI is "both their tool and their competitor."
Economic benefits are accruing disproportionately to the nimble: independent workers, entrepreneurs, and people with side projects reported real economic empowerment at more than triple the rate of institutional employees. Employees with side projects benefited most, with 58% reporting economic gains. Freelance creatives were the group where upside and downside most nearly cancelled out — 23% had lived the benefits, 17% had lived the downside. Developing and lower-income countries were also more likely to see AI as a capital bypass mechanism — a way to create opportunity that wouldn't otherwise exist — while wealthier regions focused more on managing the complexity of life.

The Methodology Debate
"Intellectual nimbyism masquerading as a methodology critique."
The study itself was conducted by Anthropic's interviewer — a version of Claude specifically designed for conversational research — making it, Anthropic believes, the largest qualitative study ever conducted. On one side: the methodology is genuinely impressive. Using Claude as the interviewer at that scale removes interviewer bias, holds a consistent structure across 70 languages, and reaches places no field team could. On the other: Berkeley Haas professor Abhishek Nagaraj raised a fair point — the sample is Claude users, who likely differ from the average person, and that should be acknowledged. The more pernicious version of the critique, from the anonymous account Librarian Shipwreck, argues that AI users' opinions on AI are inherently skewed and shouldn't inform broader conclusions. The problem with that framing is its implicit premise: that AI users' opinions on AI are less legitimate than those of non-users. In a world where billions of people use these tools weekly, that's intellectual nimbyism masquerading as a methodology critique. Concerns and hopes coexist in most users — that's not a monolithic pro-AI constituency, it's a nuanced one.