The AI Daily Brief
AI's Great Divergence
April 16, 2026 · Episode Links & Takeaways
HEADLINES
Allbirds Becomes an AI Company — Up 875%
The most absurd AI pivot yet: beloved-then-cratered sneaker company Allbirds, which sold its entire brand and IP for $39M earlier this month and was left as a largely valueless shell, has announced it will rebrand as "NewBird AI" and raise $50M to become an AI neocloud provider. The stock soared as much as 875% on the news. Matt Levine summed it up: "Sure, Allbirds is pivoting its business to AI compute infrastructure. That seems like a competitive and capital-intensive business in which Allbirds has no obvious expertise. The other level is that Allbirds is pivoting its stock to being an AI meme stock. That definitely worked out." The Wall Street Journal notes that $50M gets you nowhere near the tens of billions that real neoclouds are spending. This one probably doesn't need a lot of follow-up.
Bloomberg Allbirds Soars 582% After Sneaker Firm Rebrands as AI Stock
WSJ For Its Next Act, Allbirds Makes an Unlikely Pivot From Shoes to AI
NYT Sneaker Company Allbirds Plans to Pivot to A.I. Yes, A.I.
Bloomberg Opinion AIBirds — Matt Levine
OpenAI Updates Agents SDK for Enterprise
OpenAI has shipped a significant update to their Agents SDK, separating the harness from the compute layer — the same architectural move Anthropic made with Managed Agents. Both companies independently arrived at the same reasoning: credentials shouldn't live where model-generated code runs, losing a sandbox shouldn't kill a session, and you need to be able to spin up many sandboxes per agent as needed. The SDK now includes native sandbox integration, improved file access, memory and compaction, and full open-source access to the harness so enterprises can inspect and customize it. Steve Coffey from OpenAI: "This is the direction I'm excited about for agents — open harnesses that give you the flexibility to deploy your agents at scale, with your own data, on your own terms."
OpenAI The next evolution of the Agents SDK
TechCrunch OpenAI updates its Agents SDK to help enterprises build safer, more capable agents
Steve Coffey (X) Open harnesses that give you the flexibility to deploy your agents at scale, with your own data
Armaan Sidhu (X) This isn't for consumer chatbots. This is for enterprise deployments where you need to let an AI loose on real systems without letting it break things.
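The harness/compute split described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the actual Agents SDK API; the `Harness` and `Sandbox` names and methods are invented. The point it shows: credentials and session state live in the long-lived harness process, while model-generated code runs in disposable sandboxes that can be lost or multiplied without ending the session.

```python
# Hypothetical sketch of the harness/sandbox separation -- not the real
# OpenAI Agents SDK API. Class and method names are invented.
import uuid


class Sandbox:
    """Disposable execution environment. Holds no credentials."""

    def __init__(self):
        self.id = uuid.uuid4().hex
        self.alive = True

    def run(self, code: str) -> str:
        # A real system would execute model-generated code in an
        # isolated container or VM; here we just report what ran.
        if not self.alive:
            raise RuntimeError("sandbox lost")
        return f"[{self.id[:8]}] ran {len(code)} bytes"


class Harness:
    """Long-lived agent session. Credentials and transcript live here,
    never inside the sandboxes where untrusted code executes."""

    def __init__(self, api_key: str):
        self._api_key = api_key   # stays in the harness process only
        self.transcript = []      # session state survives sandbox loss
        self.sandboxes = []

    def spawn_sandbox(self) -> Sandbox:
        sb = Sandbox()            # many sandboxes per agent, on demand
        self.sandboxes.append(sb)
        return sb

    def execute(self, code: str) -> str:
        sb = self.spawn_sandbox()
        try:
            out = sb.run(code)
        finally:
            sb.alive = False      # losing a sandbox doesn't kill the session
        self.transcript.append(out)
        return out


harness = Harness(api_key="sk-demo")
harness.execute("print('hello')")
harness.execute("print('world')")
print(len(harness.transcript))    # prints 2: session outlived both sandboxes
```

Under this design, a crashed sandbox is just a discarded object: the harness keeps the transcript and credentials, spins up a fresh sandbox, and continues, which is the resilience property both OpenAI and Anthropic cite for the separation.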
OpenAI Shifts ChatGPT Ads to Pay-Per-Click
OpenAI is switching their ad model from cost-per-view to cost-per-click, addressing the core complaint from advertisers who couldn't track performance. They're also exploring action-based pricing — charging when a user actually makes a purchase — which would bring them closer to Google's model. The goal is to de-risk experimenting on this new surface by aligning payment with actual outcomes.
The Information OpenAI Plans New Pricing for ChatGPT Ads, Explores Other Upgrades
The Information OpenAI Prepares to Launch Cost-Per-Click Ads In Coming Days
The Manus Investigation Is Splitting China's Startup Scene
The CCP's probe into Meta's acquisition of Manus — where two co-founders were reportedly told they couldn't leave China until the investigation concludes — has sent shockwaves through the Chinese startup ecosystem. The Information's China reporter Jing Yang finds that founders are now being forced to pick a side: build for Chinese acquirers, or leave the country and don't look back. A tacit truce between Beijing and Shenzhen has quietly ended, and the message has been received clearly despite no formal policy change. Notably, many founders aren't abandoning international ambitions entirely — they're just pivoting to Singapore to avoid any ties to China.
The Information China's Probe of Meta's Manus Purchase Sends Startups Scrambling
Reuters Chinese AI startup StepFun to unwind offshore structure to pave way for IPO
Jensen: China Already Has the Chips — We Need Dialogue
Jensen Huang appeared on the Dwarkesh podcast this week and pushed back hard on the premise that export controls are keeping China from training Mythos-level AI. His argument: Mythos was trained on "fairly mundane capacity" by a "fairly exceptional company," and that capacity already exists abundantly in China. He went further, calling for research dialogue between US and Chinese AI scientists and warning against bifurcating the AI ecosystem into US and Chinese stacks. For many, this read as Jensen talking his book. Ed Elson's more nuanced counter — which is closer to the right frame — is that the question was never whether China achieves Mythos-level AI (they will), but whether they'll use it to try to destroy America. The interview is worth watching; if nothing else, it gave us a meme for the ages: "You're not talking to someone who woke up a loser. That loser premise makes no sense."
Dwarkesh Podcast Jensen Huang — TPU competition, why we should sell chips to China, and Nvidia's supply chain moat
Bloomberg Nvidia's Huang Says Mythos Shows Need for US-China AI Dialogue
Gavin Baker (X) Selling B30s to China is super pro-American — the alternative is China building their own semiconductor ecosystem
Sriram (X) Your reaction to the Jensen/Dwarkesh podcast can be extrapolated directly from your beliefs about AGI timelines
AI's Great Divergence
One of the big themes of 2026 is heightened stakes around everything with AI — from agents coming online, to the implications for work, to the politics of AI that follow. In all of that, greater divides are opening between people who sit in very different places relative to these changes: leaders and laggards in the corporate world, optimists and pessimists in the public sphere. Two major studies out this week put that divergence in sharp relief.
AI GAPS ALL OVER
The Stanford AI Index: Experts vs. the Public
73% of experts expect AI to help people's jobs. 23% of the public agree.
The annual Stanford HAI AI Index — 420-odd pages of comprehensive data on the state of AI in society — tells the divergence story in clear terms this year. The headline gap: when asked how AI will impact the way people do their jobs, 73% of AI experts expect a positive impact compared to just 23% of the general public. That gap shows up everywhere. On AI's impact on the economy over the next 20 years: 69% of experts are optimistic, versus 21% of US adults. On medical care: 84% of experts are positive, versus 44% of the public — the area where the public is most optimistic. On K-12 education: 61% of experts positive, 24% of adults. On elections, almost everyone agrees it will be bad: 11% of experts expect a positive impact, and only 9% of the public. Almost two-thirds of US adults believe AI will lead to fewer jobs — and perhaps surprisingly, 39% of AI experts agree. One area where there's no divergence: the performance of top US versus Chinese models, which Stanford characterizes as essentially neck and neck. And the jagged capability frontier is real — models that win gold at the International Math Olympiad still can't reliably tell time. That same jaggedness produces jagged adoption, as organizations individually figure out where AI fits and where it doesn't.
Stanford HAI The 2026 AI Index Report
IEEE Spectrum 12 Graphs That Explain the State of AI in 2026
Sources The AI industry's reputational crisis — Alex Heath
Rohan Paul (X) Summary Thread
Jim Prosser (X) The public opinion chapter is the one that matters most — the gap between experts and the public has become a gaping chasm
The PwC Study: Leaders vs. Laggards
Three-quarters of AI's economic gains are going to just 20% of companies.
PwC's annual AI performance study — interviewing over 1,200 senior executives at large, publicly listed companies — shows the same divergence pattern at the enterprise level. The headline stat: 75% of AI's economic gains are being captured by the top fifth of companies. What separates the leaders from the rest comes back to the efficiency AI versus opportunity AI distinction. Efficiency AI is about doing the same with less — using AI to reduce resource input while maintaining output. Opportunity AI is about doing more: pursuing new revenue, reinventing business models, getting into orthogonal fields, expanding R&D. Leaders were twice as likely to redesign workflows to incorporate AI rather than simply adding AI tools on top. They were two to three times more likely to use AI to identify and pursue growth opportunities. But it's not just about doing more — it's also about governance. These leaders were 1.7 times as likely to have responsible AI frameworks in place and one and a half times more likely to have cross-functional AI governance boards. Employees at leading companies were twice as likely to trust AI outputs. The financial result: companies that were most "AI fit" in PwC's research delivered AI-driven financial performance 7.2 times higher than other respondents.