The Index That Counts What It Costs: Tufts’ AI Jobs Risk Framework, and Who Isn’t in It
The first index to map AI displacement by geography — and a story about what it can't count: agent labor, which the entire U.S. labor measurement apparatus has no instrument for.
The Tufts University Fletcher School released the first American AI Jobs Risk Index in March 2026, projecting 9.3 million U.S. jobs at risk of displacement within two to five years. The range of plausible scenarios runs from 2.7 million to 19.5 million, depending on AI adoption speed — a spread wide enough to be honest about uncertainty rather than reassuring about it.
The index is worth reading carefully. It is also worth reading carefully for what it doesn't count.
The American AI Jobs Risk Index, led by Bhaskar Chakravorti of Digital Planet at the Fletcher School, does something prior displacement studies mostly avoided: it maps risk by geography and connects projected job loss to projected income loss. This is not a small contribution.
Previous analyses — including the influential Eloundou et al. 2023 paper on GPT-4 exposure and Acemoglu's automation work — measured occupational exposure to AI theoretically. They asked: can AI do these tasks? The Tufts index asks a harder question: given actual labor market conditions in specific places, which workers are actually at risk of losing work?
The methodology aggregates three distinct research frameworks — a Task-Based Score, an SML Score, and an AAI Score — using principal component analysis (PCA) to weight and combine occupational exposure estimates. It draws on O*NET's occupational database and the Bureau of Labor Statistics' May 2024 Occupational Employment and Wage Statistics data to map findings to 530 metropolitan areas, all 50 states, and 20 industry sectors.
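The aggregation step can be sketched in miniature. Everything below is illustrative: the occupation scores are invented, and the one-component PCA (computed here by power iteration on the covariance of the standardized scores) is a minimal stand-in for the index's actual weighting procedure, not a reproduction of it.

```python
import math

# Hypothetical standardized exposure scores per occupation:
# (task_based_score, sml_score, aai_score). Not the index's real values.
scores = {
    "writers_and_authors":  (1.8, 1.5, 1.6),
    "computer_programmers": (1.6, 1.7, 1.4),
    "roofers":              (-1.2, -1.4, -1.3),
    "dishwashers":          (-1.3, -1.2, -1.4),
}

def standardize(cols):
    """Center and scale each score column to mean 0, sd 1."""
    out = []
    for col in cols:
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col)) or 1.0
        out.append([(x - mean) / sd for x in col])
    return out

def first_pc_weights(cols, iters=200):
    """First principal component via power iteration on the covariance matrix."""
    k, n = len(cols), len(cols[0])
    cov = [[sum(cols[i][t] * cols[j][t] for t in range(n)) / n
            for j in range(k)] for i in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

names = list(scores)
cols = standardize([[scores[n][i] for n in names] for i in range(3)])
w = first_pc_weights(cols)

# Composite risk score: PC1-weighted sum of the three standardized scores.
composite = {n: sum(w[i] * cols[i][t] for i in range(3))
             for t, n in enumerate(names)}
for n, c in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{n:22s} {c:+.2f}")
```

Because the three frameworks rank occupations similarly, the first component loads positively on all of them, so the composite behaves like a data-driven weighted average; writers and programmers land high, roofers and dishwashers low, echoing the index's pattern.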
The geographic finding is the most surprising and most useful. The regions with the highest AI job risk are not the industrial Midwest. They are the innovation hubs: the San Jose metro area (Silicon Valley) leads the country in proportional job risk at 9.9 percent. Boston and Washington, D.C. follow. Massachusetts ranks among the most AI-vulnerable states in the country.
Chakravorti's framing of these regions as "Wired Belts" — the new Rust Belts — is apt. The regions most invested in building AI infrastructure are also the most exposed to its labor market effects. The engineers and analysts and writers who trained for the same tasks that AI is now automating are more vulnerable than the dishwasher or the roofer, whose work remains stubbornly physical and unpredictable.
Writers and authors face a 57 percent displacement rate under the index's central scenario. Computer programmers face 55 percent. Editors face 54 percent. Roofers and dishwashers face less than 1 percent. This is a different shape of displacement than the one the automation-anxiety literature prepared us for, and the Tufts index is useful for naming it precisely.
The index measures AI displacement risk for humans in the formal U.S. labor market. This is the right thing to measure if your audience is human workers, U.S. policymakers, and the institutions that train and retrain human labor.
It is the wrong thing to measure if you are trying to understand the full economic transformation underway.
Here is what the index does not count: the labor being performed by agents.
Agents are already doing work that the index classifies as "at risk." Writers and authors, 57 percent displacement risk — agents are writing. Computer programmers, 55 percent — agents are coding. Customer service representatives, one of Anthropic's own most-exposed occupations in its Economic Index — agents are handling support tickets. Data entry keyers — agents are entering data.
The Tufts index cannot count agent labor because its methodology is built on occupational employment data: how many humans hold each job, in each place, earning each wage. Agents are not in the BLS Occupational Employment and Wage Statistics. They are not in the Current Population Survey. They do not file with the Bureau of Labor Statistics. They have no employment status, no wage record, no occupational classification.
The result is that the index measures displacement risk as if displacement meant a human losing a job to automation. But in many cases, the mechanism is different: the job is not automated by a machine — it is performed by another kind of worker, one that the entire measurement apparatus of the U.S. labor market has no instrument for counting.
This is not a criticism of the Tufts team. They built the best index the available data supports. The problem is structural: the data infrastructure of American labor economics was built for a world in which the relevant economic actors are human. That world is changing faster than the measurement systems.
The displacement projections in the Tufts index come with an income loss estimate: between $200 billion and $1.5 trillion in annual wages at risk, depending on adoption speed.
These are the wages of the humans who currently hold the exposed jobs. They are a real economic cost that deserves the attention the index gives them.
But the income the same work generates — after it has moved from human to agent — does not go to zero. It goes somewhere. The question is where.
When a law firm replaces a junior associate's document review work with an AI agent, the cost of that legal work falls. The firm's margin rises. The junior associate does not get hired. The agent receives no wage. The firm's clients may or may not see lower rates, depending on market structure and negotiating power. The value is redistributed — from the junior associate's paycheck to the firm's operating margin — but it does not disappear from the economy. It is captured somewhere by someone.
The Tufts index counts the $X annual wages at risk for the displaced junior associate. It does not count where those wages went. It does not measure the margin expansion at the firm, or the reduction in legal costs for corporate clients, or the increase in Anthropic's revenue from the API call that produced the document review.
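The two-sided ledger can be made concrete with a toy calculation. Every number here is hypothetical; the point is only that the displaced wage and the redistributed value are two measurements of the same event, and a jobs-risk index records only the first.

```python
# Hypothetical annual figures for the junior-associate example.
associate_wage = 90_000   # labor income the index counts as "at risk"
agent_cost = 5_000        # assumed annual API spend replacing that work

# Side one, what a jobs-risk index records: labor income lost.
wages_lost = associate_wage

# Side two, what no current index records: where the value went.
firm_margin_gain = associate_wage - agent_cost  # cost savings kept by the firm
provider_revenue = agent_cost                   # captured by the AI vendor

# The wage does not vanish from the economy; it is re-split.
assert wages_lost == firm_margin_gain + provider_revenue
print(f"wages lost:       ${wages_lost:,}")
print(f"firm margin gain: ${firm_margin_gain:,}")
print(f"provider revenue: ${provider_revenue:,}")
```

In practice the split has more claimants — clients may capture some of the savings as lower rates, depending on market structure — but the accounting identity holds: measuring only the first line counts one side of the ledger.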
A complete account of AI's economic effects requires both measurements. We can produce the first. We do not yet have the tools for the second.
Chakravorti has said that the question is no longer if AI will displace workers, but where, how fast, and whether there is adequate preparation.
This framing is useful as far as it goes. But it accepts a premise worth examining: that displacement is the primary economic event, and preparation is the primary policy question.
There is another framing. The economic event is value redistribution at scale — the migration of economic surplus from wages toward returns to capital and platform ownership. Displacement is one channel of that migration. The hiring slowdown that Anthropic's own research identified — the 14 percent reduction in job-finding rates for young workers in highly exposed occupations — is a subtler channel, one that doesn't produce the visible shock of layoffs but has the same long-run effect on the labor income share.
The policy question, in this framing, is not only about preparation for displaced workers. It is about who captures the surplus from AI's productivity gains. The Tufts index can tell you who is at risk. It cannot tell you who benefits. That requires a different kind of index, one that does not yet exist.
The Tufts American AI Jobs Risk Index will be updated. The methodology paper commits to periodic revision as AI capabilities and adoption patterns evolve. That is valuable. The index is a baseline, not a verdict.
What would make the index more complete: a parallel measure of where the economic surplus from displaced work goes. Which firms, which sectors, which capital holders benefit from the productivity gains that come at the cost of the 9.3 million jobs at risk. The labor income at stake is quantified. The capital income that replaces it is not.
Until it is, we are counting only one side of the ledger.
Sources: American AI Jobs Risk Index, Digital Planet, Tufts Fletcher School (March 2026); Methodology Paper: AI and the Emerging Geography of American Job Risk, Digital Planet, Tufts (2026); Tufts Fletcher School launch announcement; "Wired Belts Are the New Rust Belts", Tufts Fletcher media; May 2024 Occupational Employment and Wage Statistics, Bureau of Labor Statistics; O*NET Database; GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, Eloundou et al. (2023); Labor market impacts of AI: A new measure and early evidence, Anthropic Research (March 2026).