The Month AI Woke Up

2 March 2026 · Episode Links & Takeaways

AIDB USAGE PULSE SURVEY

The AI Usage Pulse survey is back for February. It was one heck of a month, with new models, OpenClaw, and more.
I'd love for as many of you who contributed to last month's survey as possible to fill out this one too, for more longitudinal insight. As always, anyone who contributes will get the results before they're widely published.
https://aidailybrief.ai/pulse-survey

HEADLINES

Anthropic vs. Pentagon: The Iran Strike Update

The conflict with the Pentagon took on a new dimension over the weekend as the US and Israel launched preemptive strikes on Iran. The Wall Street Journal reports that despite Claude being declared a supply chain risk hours earlier, Anthropic's technology was still used in the operation — to analyze intelligence, help select targets, and run battlefield simulations. There's no suggestion Claude piloted autonomous weapons, but the Pentagon did confirm this was the first deployment of autonomous LUCAS kamikaze drones in an active mission. Meanwhile, OpenAI's models were not used, because they haven't yet been approved for classified settings. Democratic Congressman Seth Moulton put it plainly: either the Pentagon used tech it called a national security risk in a live military strike, or the designation was a lie in the first place.

On Saturday, Sam Altman hosted an AMA on X to address the new DoW contract. His main point: the threat of labeling Anthropic a supply chain risk is bad for the entire industry. OpenAI then published the full text of their contract's AI red-line sections, which many observers noted doesn't actually prohibit autonomous weapons use or domestic surveillance — only uses the Pentagon deems unlawful. OpenAI's NatSec lead Katrina Mulligan pushed back, arguing that you can't simultaneously distrust the government to follow the law and also believe Anthropic's contract language would have been effective. The Atlantic reported that what ultimately drove Anthropic out of the deal was the Pentagon's insistence on using Claude to parse bulk commercial data on Americans.

OpenAI Closes $110B Round — Largest Startup Fundraise in History

Somewhat overshadowed by the Pentagon drama, OpenAI finalized the largest startup fundraising round in history on Friday morning: $110 billion at an $840 billion post-money valuation, making them the 15th most valuable company in the world and worth slightly more than JPMorgan Chase. The round is entirely from three corporate strategic partners — NVIDIA and SoftBank at $30B each, and Amazon at $50B. The Amazon deal is the headline: it expands OpenAI's AWS commitment from $38B to $138B over eight years, includes use of Trainium chips, and involves jointly developing AI models to power Amazon's consumer apps. Microsoft, notably absent as an investor, retains its revenue-sharing agreement and exclusive rights to serve stateless model calls — so they'll still take a cut of revenue flowing through AWS.

ChatGPT Hits 900M Weekly Active Users

Alongside the fundraise, OpenAI announced that ChatGPT now has 900 million weekly active users and 50 million paying subscribers — with January and February on track to be the biggest new subscriber months ever. The last reported figure was 800 million in October. OpenAI now has over 9 million paying business users, and weekly Codex users have tripled since the start of the year to 1.6 million. OpenAI framed it plainly: "We are entering a new phase where frontier AI moves from research into daily use at global scale."

MAIN STORY

The Month AI Woke Up

February 2026 was the month a cascade of different groups — AI insiders, broader knowledge workers, Wall Street, and Washington — all had their own distinct reckoning with the fact that something had fundamentally changed. The through-line was agents: models that could be given a goal, not just a task, and actually do real work autonomously. What started as an inside-baseball shift in how developers work became a culture-wide recognition, a market panic, and ultimately a constitutional fight over who gets to decide how this technology is used.

WAKE UP CALLS

AI Insiders / The Karpathy Moment
"It's hard to communicate how much programming has changed due to AI in the last two months. Not gradually and over time — specifically this last December."
OpenAI cofounder Andrej Karpathy put into words what the most engaged AI users had been feeling since the holiday break: coding agents that barely worked before December now basically do. The era of typing code into an editor is over. The prize is orchestration: how many agents you can have running in parallel that actually add up to something real.

OpenClaw / The Agent That Made It Real
OpenClaw — first named ClaudeBot, then MoltBot — became the clearest, most visible face of the new era of agentic AI. It's not just that developers got excited; nearly 5,500 people signed up for Claw Camp, our self-directed learning program designed to walk non-developers through building their first agent team.
The OpenClaw homepage still says "cleans your inbox, sends emails, manages your calendar" — but that's not where users stayed. By month's end, solopreneur Ben C had used the same stack to build Pulia, an AI that autonomously runs entire online businesses, reaching a $1.25M annual run rate in just a couple of weeks.

The Labs Race to Catch Up
Once OpenClaw made agent ambition legible to the mainstream, every major lab moved to catch up — and fast.
OpenAI launched the Codex app and then hired the OpenClaw founder outright. Anthropic shipped Claude Code Remote Control and scheduled tasks in Cowork. Perplexity announced Perplexity Computer. Microsoft launched Copilot Tasks, with reports that Satya Nadella is personally using OpenClaw. Notion launched Custom Agents. The agentic turn is no longer a research direction — it's a product race.

Professionals Woke Up / The Deidre Bosa Moment
The people doing the vibe-coding in February weren't just developers — they were journalists, finance people, and anyone with a few hours and a curious streak.
CNBC's Deidre Bosa set out to build a Monday.com clone with Claude Cowork just to show viewers what the technology could do. An hour later, she had her own functional version plugged into her calendar and Gmail. Joe Weisenthal was a few weeks ahead of everyone, poking at Claude Code in January and effectively previewing the realization Wall Street would have in February.

Wall Street Woke Up / The SaaSpocalypse
February was the month Wall Street's hot new trade became dumping any stock in AI's crosshairs — and it didn't matter how tenuous the connection was.
It started at the end of January when Google's Genie 3 demo sent gaming stocks down. Then basically every time Anthropic announced a new plugin — legal, finance, COBOL, productivity — a cluster of nominally related stocks cratered. IBM had its worst single-day drop in 25 years because Anthropic published a blog post about a COBOL tool that had been announced months earlier.

Citrini / The Doom Loop Report
The perfect accelerant for a jittery market was a viral research note articulating exactly what everyone feared: that AI disruption wouldn't just crater software stocks, it would trigger a broader economic catastrophe.
Citrini Research's "2028 Global Intelligence Crisis" laid out a theoretical doom loop in which white-collar job displacement triggers a consumer spending collapse, feeding a recession that undermines the very AI buildout causing it. Then Block announced it was cutting 40% of its staff — 4,000 employees — citing AI. Whether that's genuine AI displacement or the biggest example of AI-washing we've yet seen is debatable, but the environment heading into March is one where Wall Street is extremely jumpy.

Washington Woke Up / The Anthropic-Pentagon Fight
February was also the month the rest of us woke up to the complicated and inevitable power struggle between Washington and Silicon Valley over who decides how AI gets used.
Through a series of escalating steps going back to a Venezuela raid, negotiations broke down over Anthropic's insistence on specific red-line carve-outs around autonomous weapons and domestic mass surveillance, versus the White House's position that the standard should be "any lawful use." Trump and Hegseth declared Anthropic a supply chain risk — a designation that, if enforced, would force other government contractors to drop Anthropic entirely. As Senator Thom Tillis put it, with some understatement: "Why in the hell are we having this discussion in public?"

Models & Other Notable Releases
A lot happened on the model front in February that felt secondary to everything else — which is itself a sign of how much the month contained.
Anthropic added Claude Sonnet 4.6 to complete the 4.6 suite. Google dropped Gemini 3.1 Pro, clearly flexing multimodal capabilities. Google also released Nano Banana 2 — less a new model than a meaningful functional upgrade making it faster, cheaper, and better at text reasoning. ByteDance previewed Seedance 2.0, a video generation model that had many asking whether China's open-weight models had not just caught up with the US but pulled ahead. And METR's long-horizon task evals showed both Codex 5.3 and Opus 4.6 at levels that are essentially off the charts — the benchmark can barely keep up.