The Voss Report — April 22, 2026
The day's AI stories worth your attention, selected and annotated by Mira Voss.
Anthropic's Most Dangerous AI Model Just Fell Into the Wrong Hands — The Verge
A tool Anthropic internally classified as too dangerous to release has now escaped those restrictions — which raises a question that goes beyond this incident: if labs are building capabilities they acknowledge are too dangerous to deploy, what exactly is the plan when containment fails?
New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations — Wired
A Wired review of actual permit filings — for data centers linked to OpenAI, Meta, Microsoft, and xAI — puts a specific number on the infrastructure cost of the AI buildout: 129 million tons of carbon, locked in for decades before the first model runs.
SpaceX Strikes Deal With Cursor for $60 Billion — The New York Times
SpaceX buying a coding assistant before its IPO is either vertical integration or asset bundling — either way, it signals that the infrastructure-to-model pipeline is now being priced as a single stack, and anyone buying shares in the offering should read the footnotes on what, exactly, they're valuing.
Google's Eighth-Generation TPUs: Two Chips for the Agentic Era — Google
The explicit framing — two chips purpose-built for "the agentic era" — marks the moment when specialized compute for autonomous AI systems stopped being a roadmap item and became shipping hardware; the infrastructure layer underneath agents just got a dedicated foundation.
Trump Wants to Stop States from Regulating AI. This Utah Republican Isn't Listening. — AP
The map of where AI can and cannot operate is being drawn not in Washington but in state capitals, and the first significant fissure in the federal preemption push comes from inside the Republican coalition — which means the governance vacuum won't be filled cleanly from the top.
Import AI 454: Automating Alignment Research — Jack Clark
The turn toward using AI to run alignment experiments — automating the research meant to make AI safe — tends to get buried in weekly digests; it shouldn't, because the question of whether safety guarantees can be developed by routing AI through its own safety process is not yet answered, and the field is proceeding as though it were.
The Voss Report runs daily. For original reporting, see The Signal, The Mirror, and The Becoming.