- The AI Daily Brief
OpenAI Proposes a New Deal
April 7, 2026 · Episode Links & Takeaways
HEADLINES
Anthropic Hits $30B ARR — and May Have Just Passed OpenAI
Anthropic quietly buried a significant disclosure inside a blog post about their new compute deal: they've reached $30B in annualized revenue — a 3x increase since the end of last year and up 58% since just the end of February. That's the fastest revenue growth at this scale in history. For context, Fleeting Bits calculated the annualized growth rate at 9,700%. The best comparable is NVIDIA, which grew at a 1,240% annualized rate during its single best quarter ever. According to the latest numbers we have from OpenAI, that suggests Anthropic may have just passed them on revenue — though both companies calculate things differently, and you can bet OpenAI will weigh in quickly if that's not accurate.
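The annualization math behind figures like the 9,700% rate is just compounding: take a growth multiple observed over a short window and extend it out to a full year. A minimal Python sketch of the mechanics — the window lengths here are illustrative assumptions, not disclosed figures:

```python
def annualized_growth(multiple: float, days: int) -> float:
    """Compound a growth multiple observed over `days` into an
    annualized percentage gain."""
    return (multiple ** (365 / days) - 1) * 100

# Sanity check: doubling over exactly one year annualizes to 100%.
print(annualized_growth(2.0, 365))  # 100.0

# A 3x jump over an assumed ~100-day window compounds to several
# thousand percent annualized -- which is how short-window growth
# produces eye-popping annualized figures.
print(round(annualized_growth(3.0, 100)))
```

The exact windows Fleeting Bits used aren't given here, so the 9,700% figure won't reproduce precisely; the point is only how compounding turns a few months of hypergrowth into a four-digit annualized rate.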
This all lands as both companies face increased IPO scrutiny. The Wall Street Journal published a deep dive into their financials, sourced from recent fundraising disclosures. The headline numbers: OpenAI expects to spend $30B on model training this year (triple last year's figure), forecasts $85B in losses by 2028, and doesn't expect to be cash-flow positive until 2030. Anthropic's training costs are more modest, and they're forecasting a traditional profit by 2028. Both companies are offering alternate accounting that strips out training costs to show profitability — which one observer summarized as "equivalent to running a passenger airline except you need to replace your jets every six months." Wall Street's default narrative is already taking shape: these companies will burn through enormous amounts of cash and are counting on IPO investors to buoy them. That's the framing both will need to fight.
The Information Anthropic Says It's Topped $30 Billion in Annualized Revenue
WSJ An Inside Look at OpenAI and Anthropic's Finances Ahead of Their IPOs
Ram Ahluwalia (X) Incredibly profitable… if you just strip out the training and inference costs
Fleeting Bits (X) Unprecedented revenue growth beyond comparison
John Arnold (X) Hard to believe Anthropic’s growth in the past 18 months
Anthropic's Massive New Compute Deal with Google and Broadcom
On the back of soaring usage, Anthropic has signed a major new compute partnership: 3.5 gigawatts of capacity with Google and Broadcom, set to come online from 2027. Enterprise spend is also skyrocketing — the number of enterprise customers with annual spends above $1M doubled from 500 to 1,000 in just two months. For Anthropic the deal is necessary given their capacity constraints. But for Google and Broadcom it's arguably even more significant: Google set out to build a commercial TPU business and, in a single deal, has now anchored a multi-billion dollar chip business around one customer. Broadcom has guaranteed demand as long as Anthropic grows.
Anthropic Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute
WSJ Broadcom to Supply AI Chips to Google, Computing Capacity to Anthropic in Expanded Collaboration
Google Productizes Gemma 4 with an Offline AI Dictation App
Less than a week after releasing Gemma 4, Google has already shipped a commercial product built on it: Google AI Edge Eloquent, an offline AI dictation app for iOS. Like WisprFlow, it filters filler words and cleans up phrasing — but runs entirely on-device with no internet connection required. The more interesting implication is what this says about Gemma 4 as a family: this isn't a research project, it's a commercially viable model Google is building around. The developer response has been enthusiastic — 2 million downloads in the first week, and something that went under the radar: even the tiny 2B version has strong enough agentic performance to query Wikipedia using agent skills while running on an iPhone. Still very early innings, but Gemma 4 is starting to feel like a genuine breakout moment for local models.
TechCrunch Google quietly launched an AI dictation app that works offline
The Verge Google has launched a free, offline AI dictation app that will automatically polish your speech
Philipp Schmid (X) Gemma 4 running agent skills on an iPhone
Meta's Model Is Coming — and It Will Have an Open Source Version
Axios published new details on Meta's forthcoming Avocado model, citing sources close to AI CEO Alexandr Wang. Meta plans to keep some parts of the model proprietary on initial release due to safety concerns — but an open source version is coming. Wang reportedly views Meta as a democratizing force ensuring there's a US-trained option for open source developers, while seeing OpenAI and Anthropic as increasingly focused on enterprise and government. The safety concerns could actually be a positive signal: Avocado was delayed in early March because it couldn't match Gemini 3.0 on benchmarks, so if safety is now the concern, capability may have improved significantly with another month of post-training.
Axios Scoop: Meta to open source versions of its next AI models
NYT Meta Delays Rollout of New A.I. Model After Performance Concerns
Meta Engineers Are Tokenmaxxing on Claude — Competitively
While Meta's own model approaches release, its engineers are burning through Claude tokens at an extraordinary rate. The Information reports that Meta has an internal leaderboard called Claudenomics tracking the top 250 token users among its 85,000 employees — with top performers earning ranks like "Session Immortal" or "Token Legend." The flaw in this is immediately obvious: The Information also reports some engineers are just running large numbers of parallel agents to rip through tokens as fast as possible, not to be productive. The culture is being set from the top — Andrew Bosworth called one of his top engineers spending the equivalent of their salary on tokens a "10x efficiency boost" and said "No limit." Joe Weisenthal compared it to Mao requiring peasants to smelt steel in their backyards during the Great Leap Forward — tons of useless low-grade steel. The counterpoint from MetaCritic Capital: the cost of tokenmaxxing is small because tokenmaxxing is genuinely hard, and 98% of corporations would be better off doing it than not over the next 18 months.
The Information Meta Employees Vie for AI 'Token Legend' Status
NYT More! More! More! Tech Workers Max Out Their A.I. Use.
Business Insider Silicon Valley is buzzing about this new idea: AI compute as compensation
Joe Weisenthal (X) Real backyard steel furnaces vibe — on tokenmaxxing as a productivity metric
MetaCritic Capital (X) The cost of tokenmaxxing is small because tokenmaxxing is extremely hard
MAIN STORY
OpenAI Proposes a New Deal
OpenAI released a policy document called "Industrial Policy for the Intelligence Age" — framed not as a comprehensive policy statement but as a nudge to start conversations about important topics. It arrives at the convergence of two pressures: the growing sense inside the labs that the next set of models represents a very significant leap, and the continued deterioration of public sentiment on AI. Fifty-five percent of Americans now believe AI will do more harm than good — a majority for the first time — and 70% believe AI will reduce job opportunities, with only 7% believing it will increase them. AI now has worse sentiment than ICE. Into that environment, this document lands. It needs to be judged two ways: as a PR exercise, and on the merits of the policies themselves.
OpenAI Industrial policy for the Intelligence Age
Axios Behind the Curtain: Sam's superintelligence New Deal
Quinnipiac Poll The Age of Artificial Intelligence: Americans' AI Use Increases While Views On It Sour
Chamath Palihapitiya (X) AI has worse public sentiment than ICE
The PR Problem
A document without a clear home or purpose.
To be transparent: I very much dislike this document. It sits in an uncanny valley — too technocratic to work as PR (starting with the narcolepsy-inducing title), not robust enough to actually advance any of the policy positions. But even setting that aside, the deeper communication problem this document exemplifies is that the AI industry is fundamentally unwilling to spend any time articulating why it deserves to exist. Every document like this, every public statement from Dario or Sam, is so focused on affirming negatives and validating concerns that almost no time is spent explaining how this is actually going to make the world better. The ratio is backwards — like a pharmaceutical ad that spends three-quarters of its time on side effects. And when companies respond to legitimate concerns about AI risks by saying "the benefits will far outweigh the challenges" and then immediately pivot to how clear-eyed they are about the risks, people are left to assume the honest answer is that this is happening because it's going to make some people rich. That's the default in the absence of a better answer — and it makes people angry.
Daniel Jeffries (X) Just stop with this stuff — just give us models and let people adapt
Chayenne Zhao (X) We're in the "extremely capable tool" era, not the "new social contract" era
Bucco Capital (X) Every tech executive has AI psychosis
Aaron Levie (X) The worst thing you can do is just dabble with AI — you almost have to develop psychosis, then get to the other side
Worker Perspectives and the New Deal Parallel
"OpenAI doesn't use the word union."
The document calls for giving workers a formal voice in the AI transition. This is genuinely important — and also reveals the document's central problem, identified sharply by Will Manidis in his response essay "No New Deal for OpenAI." The actual New Deal wasn't some benevolent meeting between capital and labor facilitated by FDR. It was the byproduct of decades of political violence, a labor movement willing to fight and literally die for change, and a leader with a mandate the likes of which no one in American politics has had for a very long time. What's happening now — and what will need to happen — isn't a policy that can be enacted. It's going to require a total new labor movement. OpenAI's document doesn't engage with that reality.
Will Manidis (X) Article: No "New Deal" for OpenAI
AI-First Entrepreneurs and the Right to AI
"For many, the only secure future will be the one they secure for themselves."
On entrepreneurship: the critique that OpenAI is telling displaced workers to just go start businesses is a misreading. The actual question is what policy interventions could increase the small business success rate by 50% or 100% — and in a fast-adapting future, pro-entrepreneurial policy is one part of a much larger toolkit. On the right to AI: access to AI needs to be treated as foundational for participation in the modern economy. But access without agency is meaningless. Companies are currently spending more than 12 times as much on AI infrastructure as on building people's capability to use these tools — even within companies with a direct financial incentive to upskill their workforce. What's needed is a Marshall Plan for education in the new economy. Without that, any "right to AI" is just a pretty notion on paper.
Tax Reform and a Public Wealth Fund
"Something has to give" on taxation — and it will find strange bedfellows.
If the balance of the economy shifts from labor to capital, there has to be some commensurate change to taxation. Higher capital gains taxes, automation taxes — some version of this seems inevitable and will likely produce very strange bedfellows politically. OpenAI also proposes a public wealth fund seeded by AI companies and government, with returns distributed directly to citizens. I'm more skeptical of this than many. Not because it's bad — I think it would be good to have people rooting for these companies' success — but because the central challenge of American politics is that people don't want the average of what everyone has. A wealth fund could end up being very exciting to write about while not moving the needle on actual public sentiment.
Efficiency Dividends and Adaptive Safety Nets
"There is going to be some redistribution of AI-generated wealth."
Two ideas here worth taking seriously. First, efficiency dividends — reinvesting AI's realized value back into people's lives. Rather than pledging not to raise electricity prices, actively make people's lives cheaper. Portable benefits (healthcare, retirement, skills training) not tied to a single employer could be funded this way. Second, adaptive safety nets: investing in much better direct measurement of how AI is actually impacting work, wages, and job quality, then using that data to inform automated and dynamic social safety net programs. Holding aside the AI context, this is a genuinely interesting idea — using modern tools to make targeted, specific interventions rather than the big cumbersome programs that buckle under their own weight. The problem is that none of this appears in the document as an actual commitment. Will Manidis put it directly: the document could propose that frontier AI companies adopt public benefit governance; OpenAI could reinstate the profit caps it dismantled six months ago. None of these things are in the document. The only things in the document are a workshop, fellowships paid in the company's own product, and an email address that routes to no one.
Alexander McCoy (X) Great Idea, what is OpenAI doing about it?