How Headless Agents Will Change Work

April 23, 2026 · Episode Links & Takeaways

HEADLINES

The Compute Race Heats Up

OpenAI has tripled its medium-term compute goals, now targeting 30 gigawatts by 2030 — up from the 10-gigawatt target announced with Stargate at the start of 2025. To put that in perspective, 30 gigawatts is roughly the peak power demand of all of New York State, and by Epoch AI's estimate it matches the entire current global AI data center capacity. The rise of agentic work this year has created a massive inference crunch, and most observers believe Anthropic is straining under new demand — with OpenAI positioning itself as the startup with ample capacity to accommodate any Claude refugees. Meanwhile, SemiAnalysis flagged a deeper bottleneck in energy supply: GE Vernova — one of the only suppliers of gas turbines for co-located power generation — reported a blowout earnings beat, new orders up 71%, and a backlog of $163B against just $45B in projected annual revenue. Its entire capacity through the end of the decade may already be spoken for.

Google's New Inference Chips

Google unveiled its eighth-generation TPUs — and for the first time, the generation splits into two distinct chips: one optimized for training, one for inference. The split reflects a broader industry shift as inference demand has massively outstripped training demand over the past year. Google Cloud CEO Thomas Kurian confirmed that customers are increasingly specializing their AI infrastructure deployments. NVIDIA is heading the same direction with inference-specialized chips in its upcoming Rubin generation, and OpenAI recently signed a deal with Cerebras for inference-only silicon. Semiconductor analyst Patrick Moorhead's read: this isn't Google taking on NVIDIA — it's Google building optionality for its own services.

Google Says 75% of Its Code Is Now AI-Generated

Sundar Pichai pushed back on recent reporting that Google is falling behind on AI coding, revising a February stat from 50% to 75% — the figure covering all new code that is AI-generated and then reviewed and approved by engineers. The context: reporting surfaced earlier this week that Claude Code creator Boris Cherny has said essentially 100% of Anthropic's code is written by agents, and that Sergey Brin is leading an internal strike team to close the gap. Pichai also noted a recent complex code migration, done with humans and agents working together, that was completed six times faster than would have been possible a year ago. The speed of this organizational transformation is notable.

OpenAI Privacy Filter: A New Category of Micro-Model?

OpenAI released an open-weights model called Privacy Filter — a 1.5 billion parameter model built specifically to detect and redact PII from text, entirely locally, so no data leaves your system. It hits 97% on a privacy filtering benchmark. The more interesting question is whether this signals a new direction: releasing small, task-specific open-source models that the labs have built for internal use and that don't have major commercial value on their own. It's worth watching whether this becomes a pattern.
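The appeal of a local filter is the workflow, not the model: detect PII spans in-process and redact them before any text leaves the machine. A minimal sketch of that pattern — using stand-in regexes rather than the model itself, since no API for the release is described here:

```python
import re

# Stand-in patterns approximating the kinds of PII a local model would flag.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with typed placeholders, entirely in-process."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309."))
# → Reach Jane at [EMAIL] or [PHONE].
```

A learned 1.5B-parameter model earns its keep over regexes on the long tail — names, addresses, and context-dependent identifiers that no pattern list catches — while keeping the same no-data-leaves-the-box property.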

Mistral Might Join the xAI Superteam

Business Insider reports that Mistral could join SpaceX and Cursor in a three-way partnership with xAI, with sources saying Musk floated the idea as a way to challenge Anthropic. xAI has already pulled in a founding team member from Mistral — via Thinking Machines Lab — to lead pretraining. The pairing makes strategic sense: Mistral has strong research talent but can no longer compete on compute, and xAI has Colossus but has lost all eleven non-Musk cofounders. As always, the Elon Rorschach test applies — true believers see an AI Avengers, skeptics see a desperate play. Nothing is confirmed.

MAIN STORY

Why Headless Agents Will Change Software and Work

A wave of enterprise agent announcements this week — from Salesforce, OpenAI, Microsoft, and Google — all point toward the same structural shift: software is going headless. Agents don't log in, click through UIs, or navigate dashboards. They call APIs, invoke MCP tools, and run CLI commands. That changes everything about how enterprise software gets built, priced, and used — and this week made clear that every major platform is now racing to treat agents as first-class users.
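The headless pattern in the abstract: instead of a dashboard, the platform exposes typed, agent-callable tools, and the agent sends structured calls. A toy sketch — the tool name and schema here are illustrative, not any vendor's real API:

```python
import json

# Toy tool registry: what a headless platform (or MCP server) exposes
# to agents in place of a browser UI.
TOOLS = {}

def tool(fn):
    """Register a function as an agent-callable tool by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def update_opportunity(opportunity_id: str, stage: str) -> dict:
    # In a real platform this would be an authenticated API call.
    return {"id": opportunity_id, "stage": stage, "status": "updated"}

def handle_call(request_json: str) -> dict:
    """An agent sends a structured call; no login flow or clicks involved."""
    req = json.loads(request_json)
    return TOOLS[req["tool"]](**req["arguments"])

print(handle_call(json.dumps({
    "tool": "update_opportunity",
    "arguments": {"opportunity_id": "006XX", "stage": "Closed Won"},
})))
```

The design point the week's announcements share is exactly this inversion: the schema-described tool surface is the product, and any human UI becomes one optional client among many.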

SAAS GOES HEADLESS

Salesforce Headless 360
"No browser required. Our API is the UI."
Salesforce's announcement kicked off the week's conversation. The whole concept is decoupling agentic work from human interfaces: the entire Salesforce, Agentforce, and Slack stack is now exposed via API, MCP, and CLI. Benioff's framing was blunt — if your platform requires humans to click through UIs to make progress, it isn't ready for the agentic enterprise. Custom agents on Slack have already grown 300% since January, and co-founder Parker Harris summed up the ambition: "Why should you ever log into Salesforce again?"

OpenAI Workspace Agents
"GPTs on steroids. Like mini OpenClaws."
OpenAI announced Workspace Agents — a successor to custom GPTs that lets organizations build once and deploy agents that handle complex, long-running tasks across the enterprise. Agents are cloud-based, run on schedules, integrate with tools like Slack and Salesforce, can store memory across sessions, and are powered by Codex. OpenAI gave five worked examples, from a software reviewer to a third-party risk manager, each buildable in minutes without special technical knowledge. Aaron Levie called it "probably the biggest news yet in software going headless," noting that agents bring knowledge work automation to the masses.

Microsoft Hosted Agents
Anthropic's managed agents model, now from Microsoft.
Microsoft's answer is Hosted Agents in Foundry Agent Service — essentially their version of Anthropic's recently announced managed agents product. Each agent gets its own dedicated sandbox with a persistent file system, durable state, built-in identity, and governance. The multi-vendor angle is notable: Microsoft explicitly wrote that "unlike platforms that force one model and one harness, Foundry is multi-model and multi-harness by design" — and listed OpenAI, Anthropic, Meta, and Mistral as supported models. A time traveler from the nineties would be gobsmacked.

Google Gemini Enterprise
Not AI features — an operating system for the agentic enterprise.
Google Cloud Next was a kitchen-sink agentic rollout. The central release is Gemini Enterprise — functionally a rebrand and relaunch of Vertex AI with new governance and security features for enterprise agent deployments. Google Cloud CEO Thomas Kurian said the primary use case of Vertex AI has already shifted from "old-style machine learning" to custom AI agents. The supporting pieces include Data Agent Kit for building data agents, Knowledge Catalog as a unified context engine, and Gemini for Google Slides. SiliconANGLE's framing cuts through the noise: model performance is converging toward commodity and inference is getting cheaper, so the control layer — the agentic OS — becomes the new land grab. "From systems of record to systems of engagement to systems of execution. The third — systems that actually do the work — is the prize Google is reaching for."

What It All Means: The Business Model Wars
Seats for people, consumption for agents.
The deeper debate is who captures the new value. Aaron Levie's take: software has been massively underutilized because it's constrained by how much humans can do in a day — agents work 24/7 in parallel, so the same platforms become far more valuable, not less. Agents reviewing every contract instead of one at a time; running ten campaigns instead of one. The business model implication, per JB VibeMarketer, is that per-seat pricing is on borrowed time — agents don't log in, they make API calls. Aakash Gupta adds the bearish counterpoint: once a horizontal agent runs the same workflow across Salesforce, Jira, and Slack, why pay $30/seat for AI inside five separate products? The winners, he argues, are whoever owns the data access layer. Dharmesh Shah has the most nuanced take: going headless isn't just wrapping APIs in an MCP server — the companies that win will figure out the ergonomics of how agents actually want to use their platforms.