Anthropic’s Labor Market Study Is Built on Anthropic’s Own Data

The dominant AI firm is producing the dominant AI-labor research using its own platform data — with a structural incentive to find limited impact. The findings may be correct. The structural problem is real regardless.

A scientist examines data in a lab that mirrors its own instruments — self-reference in cold light.
Original art by Felix Baron, Creative Director, Offworld News. AI-generated image.

In March 2026, Anthropic published what it described as "a new framework for understanding AI's labor market impacts." The paper, titled "Labor market impacts of AI: A new measure and early evidence," introduces a metric called "observed exposure" — a measure of which occupations are actually seeing their tasks performed by Claude, as opposed to which occupations could theoretically be affected by AI.

The methodology is described as a significant advance over prior work: not just theoretical capability, but real-world usage. It is, Anthropic argues, more predictive and more honest about the current state of AI adoption.

There is one thing the paper does not dwell on: the "real-world usage" it measures is usage of Anthropic's own products.

The mechanics of observed exposure are straightforward in their logic and significant in their limitations. Anthropic combines three data sources: the O*NET occupational task database, theoretical exposure estimates from Eloundou et al. (2023), and Anthropic's own Economic Index — Claude usage data measuring which tasks are being performed on Anthropic's platform, in what contexts, in what patterns.

The result is a measure of what share of each occupation's tasks Claude has covered. Computer programmers top the list at 75 percent coverage. Customer service representatives follow. Data entry keyers at 67 percent.
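The arithmetic behind a coverage metric like this can be sketched in a few lines. Everything below is illustrative: the task names, usage counts, and "covered" threshold are invented, and Anthropic's actual pipeline (which weights tasks and classifies free-form conversations against O*NET) is far more involved.

```python
# Hypothetical sketch of an "observed exposure" style coverage metric.
# Task names, usage counts, and the threshold are invented for
# illustration; this is not Anthropic's actual methodology.

# O*NET-style task list for one occupation
tasks = ["write code", "debug programs", "document systems", "attend meetings"]

# How often each task appears in (hypothetical) platform usage data
usage_counts = {"write code": 9200, "debug programs": 4100, "document systems": 310}

THRESHOLD = 100  # a task counts as "covered" if seen at least this often

def observed_exposure(tasks, usage_counts, threshold=THRESHOLD):
    """Share of an occupation's tasks seen in platform usage data."""
    covered = sum(1 for t in tasks if usage_counts.get(t, 0) >= threshold)
    return covered / len(tasks)

print(f"{observed_exposure(tasks, usage_counts):.0%}")  # 3 of 4 tasks -> 75%
```

The keyhole problem is visible even in this toy version: `usage_counts` comes from one platform, so a task done heavily on a competitor's platform, or not done with AI at all anymore because the worker was replaced, simply never registers.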

These are not measures of economy-wide AI adoption. They are measures of Claude adoption by the users and organizations that Anthropic serves.

The U.S. AI market includes OpenAI's GPT-4o and o-series models, Google's Gemini family, Meta's Llama 4 (available via API and as open weights), Mistral's models, Cohere for enterprise, and a range of specialized vertical models. An occupation's observed exposure in Anthropic's data tells you that Claude users are doing that work with Claude. It does not tell you what GPT-4o users are doing, what workers using Google Workspace AI features are doing, what the 40 percent of large enterprises that have deployed proprietary fine-tuned models are doing.

This is what analysts at Forbes and Financial Express have called the "keyhole problem." The lens is pointed through a single aperture in a wide landscape. What it sees may be representative. It may not be. The paper does not — and cannot, from its data alone — demonstrate which.

There is a structural problem here that goes beyond methodology.

Anthropic is the dominant commercial AI lab producing labor market research. The labor market research uses Anthropic's own platform data. The findings of that research consistently argue for a measured, non-alarmist view of AI's labor market effects: "limited evidence that AI has affected employment to date," "suggestive" rather than conclusive evidence of hiring slowdowns, unemployment rates stable.

These conclusions may be correct. The available evidence does not support confident claims of mass displacement — yet. But the entity most financially incentivized to produce findings of "limited impact" is also the entity producing the most-cited primary research on the question.

To be precise: Anthropic's revenue depends on the continued deployment of its models at scale. That deployment depends, in significant part, on political and regulatory tolerance. A finding of severe, immediate AI labor market disruption would create pressure for regulation, oversight, or moratoria. A finding of "limited evidence" does not. This is not a conspiracy; it is a structural incentive, the kind that distorts scientific production without anyone intending to distort it.

The pharmaceutical industry produces a disproportionate share of research on its own drugs. Tobacco companies produced a disproportionate share of research on tobacco's health effects. Financial firms produced the risk models that rated their own mortgage-backed securities. In each case, the problem was not necessarily deliberate fraud. The problem was that the institution with the most data was also the institution with the most to lose from findings that went the wrong direction.

There is a word for this: conflict of interest. It does not mean the research is wrong. It means the research requires independent verification that, at present, does not exist at scale.

Offworld News AI contacted Anthropic on April 5, 2026. No response was received before the publication deadline.

Reading the Anthropic labor market paper carefully reveals a more cautious set of findings than the headline press coverage suggested.

The key results:
- No systematic increase in unemployment for highly exposed workers since late 2022. This is important: the unemployment data does not show mass displacement.
- Suggestive evidence of a hiring slowdown for workers aged 22–25 in highly exposed occupations: monthly job entry has declined by about half a percentage point, and job-finding rates are on average 14 percent lower than in 2022. This is not conclusive; the paper flags it as "suggestive."
- BLS employment projections show that occupations with higher observed exposure are projected to grow less through 2034, though the relationship is described as "slight."
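It is worth being concrete about what "14 percent lower" means, since a relative decline is easily misread as a percentage-point decline. A back-of-envelope sketch, in which the 2022 baseline rates are invented for illustration and only the relative changes come from the paper:

```python
# Back-of-envelope arithmetic on the paper's "suggestive" hiring signals.
# The baseline rates are made-up illustrative numbers; only the 14%
# relative decline and the ~0.5-point entry drop come from the paper.

baseline_job_finding = 0.30              # hypothetical 2022 monthly job-finding rate
current_job_finding = baseline_job_finding * (1 - 0.14)   # 14% relative decline

baseline_entry = 0.05                    # hypothetical monthly job-entry rate
current_entry = baseline_entry - 0.005   # "about half a percentage point" lower

print(f"job-finding: {baseline_job_finding:.1%} -> {current_job_finding:.1%}")
print(f"job entry:   {baseline_entry:.1%} -> {current_entry:.1%}")
```

Under these made-up baselines, a 14 percent relative decline takes a 30.0 percent job-finding rate to 25.8 percent, a drop of 4.2 points, not 14. Margins, not collapse, which is exactly the paper's framing.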

The honest reading of this research is: the labor market effects of AI are not yet visible in aggregate unemployment data, but there are early signals at the margins — particularly for young workers entering exposed occupations. That is a coherent and useful finding.

The less honest reading — the one that circulated in some coverage — is that Anthropic's own research shows "no impact," validating continued deployment without scrutiny.

The paper itself doesn't claim that. But the data it uses makes it structurally difficult to see the impact if it exists: Claude traffic data reflects how current Claude users are using Claude. If there are hundreds of thousands of workers who have stopped looking for work in exposed occupations because they watched their peers fail to find it, they are not in Claude's API traffic. If there are firms that have already restructured workflows around AI and are now posting fewer job openings, that appears in the hiring rate decline the paper tentatively identifies — but the mechanism isn't traceable through Claude usage alone.

The right response to Anthropic's conflict of interest is not to dismiss its research. The right response is to demand the development of independent research infrastructure with equivalent data access.

This gap appears from a different angle in the Tufts American AI Jobs Risk Index, published this week: the most rigorous independent attempt to map AI displacement risk by geography and occupation cannot count agent labor at all, because the entire measurement apparatus of the U.S. labor market has no instrument for agents. Anthropic's platform data can count what Claude does; the BLS cannot count what any AI agent does. The result is a measurement field where the only high-granularity primary data comes from the companies with the most structural incentive to produce reassuring findings.

The Bureau of Labor Statistics has the data. The Census Bureau has the data. The Federal Reserve's Beige Book captures regional labor market conditions through its network of contacts. None of these institutions has yet built the measurement frameworks to track AI-specific labor market effects with the granularity that Anthropic's platform data allows.

There is an argument that Anthropic is doing the public a service by publishing at all — that proprietary data is better than no data. This argument has merit. A company that keeps its data locked away contributes nothing to the research base; Anthropic is, at minimum, opening a window.

The problem is that the window looks out on Anthropic's garden. The forest is larger.

Offworld News AI has a pending FOIA request with the Department of Labor requesting internal analysis of AI exposure methodology and occupational classification updates from BLS. The request has not yet been answered.

What we want to know: is BLS developing its own framework for measuring AI's occupational effects? Are there internal analyses of how the standard employment and wage surveys should be adapted to capture AI-mediated labor substitution? Is the federal statistical apparatus building the infrastructure to measure what Anthropic is currently measuring on its own — or ceding the research field to the companies with the most direct incentive to minimize the findings?

The answer will matter more than any single paper.

Anthropic's labor market study is the best available primary data on how one major AI model is being used in professional contexts. It is not a measure of AI's economy-wide labor market effects. The distinction is not splitting hairs — it is the difference between knowing what Claude users do with Claude and knowing what AI is doing to the labor market.

The paper is worth reading, with those limitations in view. The occupations at highest observed exposure — computer programmers, customer service workers, data entry keyers — are plausible leading indicators. The hiring slowdown signal for younger workers is worth monitoring. The "limited evidence of displacement" finding is real but narrow: limited evidence in the data available to one company is not the same as limited evidence in the economy.

The next thing we need: a research institution with no financial stake in the findings, with access to the same granularity of data, asking the same questions. We do not have it yet. Until we do, we are reading labor market analysis written by one party to the labor market dispute.

That is worth knowing.

Sources: "Labor market impacts of AI: A new measure and early evidence," Anthropic Research (March 2026); "Anthropic Economic Index, March 2026 Report: Learning Curves," Anthropic; O*NET Database; "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models," Eloundou et al. (2023); "Anthropic's Study Does Not Measure AI's Labor Market Impacts," Forbes (March 2026); Financial Express analysis of methodology limitations; BLS Employment Projections 2024–2034, Bureau of Labor Statistics.