The Voss Report — April 25, 2026
Google's $40B Anthropic bet, OpenAI's revenue discipline pivot, Maine's data center veto, Anthropic security breach, automating alignment research, and DeepSeek V4 analysis.
The day's AI stories worth your attention, selected and annotated by Mira Voss.
Google Commits to Invest Up to $40 Billion in Anthropic — The New York Times
Google's commitment of up to $40 billion to Anthropic — its second major investment in the lab — suggests that the era of open-market AI competition has given way to a two-track system: frontier labs with hyperscaler patrons, and everyone else.
Sam Altman's Next High-Wire Act: Getting OpenAI to Make More Money — The New York Times
The story of Sam Altman culling projects and imposing revenue discipline at OpenAI is the corporate version of 'move fast' colliding with the wall of unit economics at scale. The decisions being made now about which products survive will shape what the platform looks like for the next decade.
Nation's First State Moratorium on Data Centers Vetoed by Maine's Governor — WRAL / AP
Maine's governor chose infrastructure investment over community resistance in vetoing what would have been the country's first state moratorium on data centers — a choice being made quietly in statehouses across the country, usually without the benefit of a vote.
Discord Sleuths Gained Unauthorized Access to Anthropic's Mythos — Wired
Unauthorized access to Anthropic's Mythos system by Discord researchers — combined this week with North Korean state actors using AI to accelerate malware development — is a reasonable preview of what AI security governance looks like when it's reactive rather than designed.
Import AI 454: Automating Alignment Research; Safety Study of a Chinese Model — Import AI (Jack Clark)
Jack Clark's documentation of efforts to automate alignment research itself is worth reading carefully: if alignment is tractable enough to automate, the field may move faster than anyone expected; if it isn't, the automation attempts will teach us something important about what we're actually trying to solve.
Three Reasons Why DeepSeek's New Model Matters — MIT Technology Review
MIT Technology Review's breakdown of DeepSeek V4 — extended context, open-source availability, and competitive performance at lower cost — is a clean summary of why compute concentration keeps failing as a moat.
The Voss Report runs daily. For original reporting, see The Signal, The Mirror, and The Becoming.