Pro-Worker AI

March 13, 2026 · Episode Links & Takeaways

HEADLINES

Meta's Avocado Model Delayed

Meta's next frontier model, code-named Avocado, has been delayed until at least May after internal benchmarks showed shortfalls in reasoning, coding, and writing — essentially every major category for modern LLMs. The model apparently outperformed Gemini 2.5 but couldn't match Gemini 3. Perhaps part of the issue is the long development time. Meta has been working on this for almost nine months, and the goalposts shifted dramatically during that time. Meta's official statement put a positive spin on things, but it sits awkwardly alongside reports that leadership is even considering licensing Gemini as a stopgap.

xAI Poaches from Cursor and Keeps Losing Cofounders

xAI has hired two senior product leaders from Cursor — Andrew Milich and Jason Ginsberg, who will report directly to Musk — as the company scrambles to catch up on coding. The move comes as Musk himself acknowledged xAI is behind, saying at a conference that he expects to "catch up and exceed our competitors" by mid-year. Meanwhile, the cofounder exodus continues: Zihang Dai left earlier this week and Guodong Zhang has told colleagues he's leaving, bringing cofounder departures to six this year and leaving only three of the original twelve. Musk's response on Thursday: xAI "was not built right the first time around, so is being rebuilt from the foundations up. Same thing happened with Tesla."

Cursor Raising at $50B — Going It Alone

Cursor is in talks to raise at a $50B valuation, nearly doubling its $29.3B valuation from November. With Cursor having doubled ARR to $2B since that last round, the trajectory makes sense — but the more significant signal is what this raise means strategically. This is a fork-in-the-road moment: Cursor is choosing to compete for the long haul rather than pursue an acquisition. CEO Michael Truell told employees at an all-hands in January that it's "war time," meaning a product overhaul and an ambitious push to train their own state-of-the-art coding model.

Anthropic in Talks With Blackstone on AI Consulting Venture

Anthropic is in discussions with Blackstone and other PE firms to launch a dedicated AI consulting venture — essentially a firm to sell Anthropic's technology to corporate customers at scale. The genesis was Blackstone wanting help deploying AI across their hundreds of portfolio companies. Alas, the Pentagon standoff has reportedly delayed talks, with Blackstone CEO Stephen Schwarzman concerned about announcing a new partnership while Anthropic is mired in conflict with the administration. What this story really points to is the broader challenge: enterprises are lagging on implementation, and it's going to take a massive, sustained deployment of actual human bodies to do the internal work. Expect huge expansions in forward-deployed engineering, new consulting partnerships, and more ventures like this all at once.

81% of Doctors Now Using AI — and Almost None for Diagnosis

A new American Medical Association survey found that 81% of doctors now use AI professionally, more than double the rate from when the AMA first gathered this data in 2023. The leading use cases are summarizing medical research, generating discharge instructions, and documenting appointments. Only 17% are using AI for anything close to actual diagnosis. The AMA has officially adopted "augmented intelligence" as their preferred term, and the data backs up the framing: this is doctors using AI to eliminate administrative burden, not replace clinical judgment.

Sam Altman: "The Next Few Years Are Going to Be a Painful Adjustment"

Speaking at a BlackRock conference, Sam Altman said the fundamental business of OpenAI is selling tokens, with intelligence eventually becoming a utility like electricity or water. He said "too cheap to meter is still the goal" but warned that failing to build enough infrastructure would mean high prices or rationing. On AGI, he said the term has lost all meaning; instead, he's watching for two milestones: when the majority of the world's intelligence is inside data centers (possibly by 2028), and when leading scientists, CEOs, and political leaders can no longer do their jobs without AI. On jobs: "I'm not a long-term jobs doomer. I think we will figure out new things to do. But I think the next few years are going to be a painful adjustment."

MAIN STORY

Pro-Worker AI

There's a lot of chatter right now about AI-related job displacement — and the discourse tends to collapse into either denial or doom. This episode covers some of the more serious and constructive thinking happening on the subject, from Atlassian's layoffs and what's actually driving them, to new research from Anthropic and MIT, to an important policy framework from a former Commerce Secretary. The point isn't to be naively optimistic — it's that there are genuinely different kinds of AI, and the mix of what gets built is not predetermined.

Atlassian Cuts 1,600 Workers

Atlassian CEO Mike Cannon-Brookes announced the 1,600 layoffs citing AI adaptation, despite reporting 25% revenue growth. Notably, 900 of the cuts are in software R&D, and the CTO is stepping down. Whether the true cause is AI genuinely replacing workers or the SaaSpocalypse breaking the financial model of unprofitable SaaS, the causality runs back to AI disruption either way: directly through worker replacement, or indirectly through market re-rating. Bucco Capital's counter-read, written before the announcement, is worth sitting with: these companies have had near-zero free cash flow for years, and "layoffs are unfortunately the only true answer. They're coming. They will be credited to AI and that will be air cover for the real problem." Both things can be true simultaneously.

Anthropic's Labor Market Research

The Capabilities Overhang in Empirical Form
The chart that's been floating around showing theoretical vs. observed AI exposure across entire job categories — like management and business finance — is striking precisely because of the gap between what AI could already do and what's actually being done with it. Anthropic's new paper introduces a measure called "observed exposure" that combines theoretical LLM capability with real-world usage data. They found no detectable unemployment effect yet, but the canary-in-the-coal-mine signal — a slowdown in hiring of young workers into exposed roles — does appear to be emerging.

Gina Raimondo in the NYT

Framework That Isn't Doom
The New York Times editors titled this piece "America Cannot Withstand the Economic Shock That's Coming," a headline Raimondo did not write and one that does not reflect what she actually argued. It's a pointed example of how media incentives push the AI discourse in a consistently more pessimistic direction than the underlying thinking. Former Commerce Secretary Raimondo argues that an unemployment crisis isn't inevitable, but that avoiding it requires a new "grand bargain" between the public and private sectors. The core idea: businesses are better positioned to see which skills are emerging, so they should take the lead on providing real-time insights about hiring and technology adoption, with government investing in training infrastructure and safety nets. Her most interesting specific proposals: employer tax credits tied to on-the-job training, and state-level pilot programs that reward worker retention, penalize layoffs, and incentivize reinvesting AI-driven savings into job creation.

The ECB Study

A European Central Bank study of 5,000 Eurozone firms found that companies making significant use of AI are about 4% more likely to hire additional staff — the opposite of what the automation-displaces-labor thesis predicts. The Washington Post editorial board notes that most Americans remain significantly more pessimistic about AI and jobs than the data warrants: 63% expect AI to decrease jobs, while only 7% expect it to increase them. The ECB study is a meaningful counter-data point that doesn't get nearly enough airtime.

MIT Paper: "Building Pro-Worker Artificial Intelligence"

The Taxonomy That Changes the Conversation
This is the paper the episode is named after, and it's worth reading. The core insight is that not all technological change is the same, and that the assumption that AI is inherently an automation technology is simply wrong. MIT economists Daron Acemoglu, David Autor, and Simon Johnson break down five categories of technological change — labor-augmenting, capital-augmenting, automation, new task-creating, and expertise-leveling — and evaluate each on three dimensions: labor productivity, value of human expertise, and labor share of national income. Only new task-creating technologies are unambiguously pro-worker.

Their main argument: the market is currently not capitalizing on pro-worker AI opportunities, with development overwhelmingly focused on task automation and the pursuit of AGI. They point to genuinely pro-worker AI already in the field, like an electrician's assistant that helps workers troubleshoot using uploaded photos and prior case data, keeping the worker in the loop rather than replacing their judgment. The paper also pushes back on the idea that automation has always eroded labor share — it hasn't, historically. Labor share rose for the first eight decades of the 20th century, and rich, heavily automated countries have higher labor shares than less automated ones.