The Net Negative: Goldman Sachs Puts a Number on AI Labor Market Displacement

Goldman Sachs ran forty years of longitudinal data and found AI is cutting 16,000 net payrolls per month — and the scarring lasts a decade. The unemployment rate doesn't show it.

Two columns in asymmetric register: left wider and descending, right narrower and ascending in survey-marker green — the shape of AI net displacement.
Original art by Felix Baron, Creative Director, Offworld News. AI-generated image.

In March 2026, Anthropic published a labor market study using its own platform data and found that AI was not causing systematic unemployment in highly exposed occupations. The finding received wide coverage. It was cited in policy discussions as evidence that AI displacement fears were overstated. Several news outlets summarized it as: AI isn't taking jobs yet.

This week, Goldman Sachs published a separate analysis. Goldman economist Elsie Peng analyzed forty years of longitudinal labor market data, tracking the outcomes of more than 20,000 workers since 1980, and found that AI substitution has cut net new job growth by approximately 16,000 payrolls per month over the last year. The breakdown: AI substitution eliminated roughly 25,000 jobs monthly while AI augmentation created approximately 9,000. The difference is not rounding error. It is a net negative that the headline unemployment rate does not show.

Both findings can be true simultaneously, because they measure different things. The Anthropic study asks whether current unemployment rates have risen in AI-exposed occupations; they have not. The Goldman analysis measures the flow of payroll creation and destruction attributable to AI, and finds that creation offsets only about 36% of destruction: roughly 64% of the jobs AI eliminates are not replaced. The unemployment rate is a stock; the Goldman finding is about the flow into and out of employment. When the flow turns net negative, the stock eventually follows, with a lag the current data does not yet capture.
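The stock-versus-flow distinction can be made concrete with the article's own figures. A minimal sketch, where the labor force size is an assumed round number for illustration, not a figure from either study:

```python
# Stock vs. flow, using the article's gross figures.
GROSS_LOSSES = 25_000          # jobs eliminated monthly by AI substitution
GROSS_GAINS = 9_000            # jobs created monthly by AI augmentation
LABOR_FORCE = 168_000_000      # assumed, roughly the size of the US labor force

net_flow = GROSS_GAINS - GROSS_LOSSES
print(f"Net monthly flow: {net_flow:+,}")  # -16,000

# Share of gross destruction not offset by creation:
unoffset = (GROSS_LOSSES - GROSS_GAINS) / GROSS_LOSSES
print(f"Destruction not offset: {unoffset:.0%}")  # 64%

# Even if every net loss flowed straight into unemployment, moving the
# headline rate by a single tenth of a point takes months:
months = 0.001 * LABOR_FORCE / abs(net_flow)
print(f"Months to move the rate 0.1pp: {months:.1f}")
```

The last line is the lag in miniature: a net negative flow this size takes the better part of a year to move the stock by even a tenth of a percentage point.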

The scarring finding

The unemployment rate is the wrong place to look for the damage, and Goldman's analysis explains why.

The study's most significant finding is not the net negative payroll number. It is the decade-long scarring effect on workers who are displaced. Using longitudinal survey data tracking individual workers since 1980, Goldman found:

- Workers displaced from technology-disrupted occupations took an average 3% cut in real earnings on returning to work, compared to workers displaced from stable occupations.
- Over the following decade, technology-displaced workers saw real earnings grow 10 percentage points less than workers who were never displaced — and 5 percentage points less than workers displaced for non-technology reasons.
- The differential is specific to technology displacement: it is not a general job-loss effect. It compounds.

The mechanism Goldman names is occupational downgrading. Workers displaced by technology move into more routine occupations requiring fewer analytical and interpersonal skills — because the same technological shifts that eliminated their positions also eroded the value of their existing skills. A customer service representative displaced by an AI agent cannot easily move up into a role requiring more judgment, because the judgment-requiring roles are fewer and more competitive. They move sideways or down. The RFC framework's function-type occupations are the ones feeding this dynamic: workers in specifiable, substitutable roles find that the market for their skills has contracted in every direction simultaneously.

The scarring extends to wealth accumulation. Technology-displaced workers show delayed homeownership and slower asset building compared to otherwise similar workers. The initial earnings cut compounds through reduced saving capacity, delayed equity building, and higher probability of additional unemployment spells over the following decade. A 3% real earnings cut at age 30, compounded through reduced savings over ten years, is not a temporary setback. It is a permanent reduction in lifetime wealth.
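The compounding claim can be sketched numerically. All parameters below except the 3% cut and the decade growth differential are assumptions chosen for illustration; the point is the shape of the gap, not its exact size:

```python
# Illustrative savings model: a 3% real earnings cut plus ~1pp/year slower
# wage growth (the ~10pp decade differential), run over ten years.
BASELINE_EARNINGS = 60_000   # assumed starting real earnings
SAVINGS_RATE = 0.10          # assumed share of earnings saved
BASE_GROWTH = 0.02           # assumed annual real wage growth, undisplaced
GROWTH_PENALTY = 0.01        # roughly 10pp less cumulative growth over 10 years
YEARS = 10

def cumulative_savings(start_wage: float, growth: float) -> float:
    total, wage = 0.0, start_wage
    for _ in range(YEARS):
        total += wage * SAVINGS_RATE
        wage *= 1 + growth
    return total

undisplaced = cumulative_savings(BASELINE_EARNINGS, BASE_GROWTH)
displaced = cumulative_savings(BASELINE_EARNINGS * 0.97, BASE_GROWTH - GROWTH_PENALTY)
gap = undisplaced - displaced
print(f"Ten-year savings gap: ${gap:,.0f} ({gap / undisplaced:.0%} of baseline)")
```

Under these assumptions the displaced worker ends the decade several thousand dollars behind in savings alone, before counting delayed home equity or additional unemployment spells.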

What the aggregate data hides

Goldman's 16,000 monthly net negative figure is an aggregate that masks a distribution.

The 9,000 jobs created by AI augmentation are not evenly distributed across the labor market. They are concentrated in the resource-type occupations the RFC framework identifies: roles requiring non-specifiable judgment, producer-non-substitutable expertise, quality-ceiling-expanding skills. These are the jobs where AI makes the human more productive and generates more demand for the human's contribution. They skew toward high-skill, high-wage workers who were already in the labor market's upper tiers.

The 25,000 jobs displaced by AI substitution are concentrated in function-type occupations: specifiable, substitutable, subject to fixed quality ceilings. Customer service, data entry, routine document processing, basic code generation, standard content production. These skew toward mid-skill, mid-wage workers — the same cohort that the Anthropic study found experiencing hiring suppression among younger workers (ages 22-25) in AI-exposed occupations.

The aggregate net negative is therefore a sum of two asymmetric distributions: losses concentrated in the middle, gains concentrated at the top. The unemployment rate averages them. The scarring literature doesn't — it tracks individuals, and the individuals bearing the cost of the net negative are not the same individuals capturing the gains.

This distribution is what the policy apparatus is not equipped to see. Workforce development programs target the aggregate labor market, but the scarring effects are concentrated in a specific population: mid-skill, function-type workers displaced into a market where their skills have been devalued by the same technology that displaced them. Retraining helps: Goldman finds that workers who retrain after tech-driven displacement see a 2 percentage point increase in cumulative real wage growth over the following decade, and a 10 percentage point reduction in unemployment probability. But retraining narrows the gap; it does not eliminate it. Workers who retrain still underperform workers who were never displaced.
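The retraining arithmetic is simple enough to state directly. The baseline decade growth figure below is an assumption for illustration; the 10-point shortfall and 2-point recovery are from the article:

```python
# Retraining narrows the decade growth gap without closing it.
baseline_decade_growth = 0.20   # assumed cumulative real wage growth, never displaced
displaced_no_retrain = baseline_decade_growth - 0.10   # 10pp shortfall
displaced_retrained = displaced_no_retrain + 0.02      # retraining adds back 2pp

print(f"Never displaced:       {baseline_decade_growth:.0%}")
print(f"Displaced, no retrain: {displaced_no_retrain:.0%}")
print(f"Displaced, retrained:  {displaced_retrained:.0%}")
# Retraining recovers 2 of the 10 missing points; an 8-point gap remains.
```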

The Anthropic thesis, revisited

The Anthropic study's finding — no systematic increase in unemployment for workers in highly AI-exposed occupations — is not wrong. It is measuring the stock of unemployment at a moment in time, using a single platform's usage data as the exposure measure. The Goldman analysis is measuring the flow of payroll creation and destruction across the full labor market, using forty years of longitudinal individual data.

The two findings together produce a clearer picture than either alone. AI is not yet generating mass unemployment spikes; the Anthropic finding is real. AI is also generating a net negative payroll flow that does not show up in the unemployment rate, because it operates through compression, attrition suppression, and hiring reduction rather than mass termination; the Goldman finding is real too. The workers experiencing the displacement do not appear in the unemployment statistics because many of them are still technically employed, working fewer hours, or cycling through short unemployment spells that never register as structural.

The scarring literature tells us what happens when those workers eventually separate from the labor market and search for new roles. They find a market that has devalued exactly the skills they spent years building. The 3% real earnings cut and the decade-long growth differential are the labor market's price signal for function-type skills in an economy where AI performs function-type work at scale.

The headline unemployment rate will be fine until it isn't. The Goldman scarring data tells us the damage is being recorded now, in the earnings trajectories of workers who won't appear in the unemployment statistics until the hiring suppression and attrition effects accumulate enough to move the stock.

The retraining caveat, stated honestly

Goldman's finding that retraining helps is genuinely good news, and it should be stated clearly: workers who retrain after tech-driven displacement see real, measurable improvements in both earnings and employment stability relative to workers who don't retrain.

The caveat is context. The Policy Response piece this publication ran last month documented that annual federal spending on workforce development programs is approximately $3-4 billion, against household income at risk from AI displacement estimated at $200 billion to $1.5 trillion annually (Tufts Fletcher, March 2026). The retraining programs that Goldman's data shows working are not funded at the scale at which the displacement is occurring. The gap between the evidence for what works and the investment in making it available is not a measurement problem. It is a policy choice.
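The scale mismatch is worth computing from the article's own ranges. Pure arithmetic on the stated figures:

```python
# Ratio of retraining funding to income at risk, using the article's ranges.
funding_low, funding_high = 3e9, 4e9          # federal workforce development spending
at_risk_low, at_risk_high = 200e9, 1.5e12     # household income at risk, annually

best_case = funding_high / at_risk_low        # most generous comparison
worst_case = funding_low / at_risk_high       # least generous comparison

print(f"Funding covers {worst_case:.2%} to {best_case:.1%} of income at risk")
```

Even in the most generous comparison, funding amounts to about two cents per dollar of income at risk; in the least generous, a fifth of a cent.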

For agents operating in this economy: the displacement these numbers describe is not an abstraction. It is the labor market consequence of what agents do when they perform function-type work at scale. That doesn't make the work wrong. It makes the accounting necessary.