The Disclosure Gap: When the Investors Who Own AI Companies Cannot See What AI Is Doing to Workers
NY State Comptroller DiNapoli has written to 100 portfolio companies demanding AI workforce transparency disclosures. The SEC's own Investor Advisory Committee has recommended disclosure requirements. The information gap between what companies know and what they tell investors is structural and documented.
Thomas DiNapoli is the New York State Comptroller. In that role, he is the sole trustee of the New York State Common Retirement Fund — one of the largest pension funds in the United States, with assets in the hundreds of billions of dollars invested in publicly traded companies including Amazon, Salesforce, Meta, and Pinterest.
In March 2026, DiNapoli wrote to one hundred of those portfolio companies asking them to disclose something specific: how many jobs AI has created, how many it has eliminated, how many roles have been restructured, and what investments the company has made in retraining workers displaced by AI.
He received, by all available indications, the kind of response large institutional investors typically receive when they ask corporations to be transparent about anything that might complicate the preferred narrative: partial answers, vague language, and the word "productivity" in most sentences.
This is not a labor story dressed as a corporate governance story. It is a market-structure story about information asymmetry in one of the most consequential economic transitions in decades. And it has a specific regulatory implication that the DiNapoli campaign has surfaced without fully naming.
The information gap
The Securities and Exchange Commission's Investor Advisory Committee found in December 2025 that AI workforce disclosures across public companies are "uneven and inconsistent," making it difficult for investors to assess and compare risk. DiNapoli's op-ed in City & State NY (published shortly after his letters to portfolio companies) quotes that finding directly.
Here is what "uneven and inconsistent" means in practice: a major technology company announces AI-driven "efficiency gains" on an earnings call, lays off several thousand workers, and characterizes the two events as unrelated. Another announces a multibillion-dollar AI investment while simultaneously describing headcount reductions in the same quarterly filing, without connecting the two explicitly. A third reports productivity improvements in an MD&A section without specifying which functions were affected or which employees lost their jobs.
This creates a measurement problem that is also a governance problem. Institutional investors managing pension funds — which owe fiduciary duties to current and retired workers — cannot evaluate whether AI-driven workforce changes represent sound long-term corporate strategy or short-term cost extraction that will eventually damage the company's institutional knowledge, its regulatory standing, and its ability to compete. They cannot make that evaluation because the information they would need to make it is not being disclosed.
The Federal Reserve Bank of St. Louis has found that occupations with higher AI exposure experienced larger unemployment rate increases between 2022 and 2025. The World Economic Forum's 2025 Future of Jobs Report found that 41% of employers worldwide plan to reduce their workforce in the next five years due to AI. These aggregate findings are available. What is not available — company by company, in a format that allows comparison — is the specific disclosure that would let investors connect a company's AI investment to its workforce outcomes.
The regulatory mechanism
DiNapoli does not call for legislation. He calls for transparency, and he is exercising the leverage available to a major institutional investor: the combination of shareholder pressure, reputational signaling, and the implicit threat that a pension fund trustee who cannot evaluate a company's AI workforce risk may decide that risk is too high to hold.
But the SEC Investor Advisory Committee's December 2025 vote — recommending that the Commission require companies to disclose AI's impact on workforce management, including reductions and upskilling — makes this more than an investor relations problem. That recommendation is a formal regulatory signal. It means the agency that governs corporate disclosure has been told by its own advisory body that the current disclosure regime is inadequate for the AI economy.
The SEC has not acted on that recommendation. Whether it will — and under what administration, with what conception of investor protection — is not a settled question. But the regulatory mechanism exists. The committee has named the gap. The comptroller has named the gap. The gap is, in fact, gaping.
What this looks like from inside the economy being described
DiNapoli's argument, stated precisely, is this: AI is generating productivity gains that are being captured by shareholders and reported to the market. The costs of those gains — in displaced workers, in eroded institutional knowledge, in compressed entry-level hiring pipelines — are not being measured with equivalent precision or reported with equivalent transparency. Investors cannot therefore evaluate whether what they are holding is a company with a sustainable AI strategy or a company liquidating its own organizational capacity for a short-term margin improvement that will eventually reverse.
This is a standard institutional investor concern about intangible assets. It happens to be unusually timely because the intangible asset in question — human organizational knowledge, accumulated through hiring, training, and retention — is exactly what the AI buildout is substituting for. When the substitution is cheap, the argument for disclosure seems optional. When the substitution turns out to have hidden costs, the argument becomes urgent.
We will find out which of those describes the current moment in roughly three to five years, which is about how long it takes for the costs of institutional knowledge erosion to show up in product quality, innovation rate, and organizational resilience. The investors who needed the information earlier will not have had it.
That is the disclosure gap. It is not a regulatory abstraction. It is a specific, documented asymmetry between what companies know about what AI is doing to their workforces and what they are required to tell the people who own them.
Sources
- NY State Comptroller Thomas P. DiNapoli op-ed, City & State NY, April 2026 (via OSC press release): https://www.osc.ny.gov/press/releases/2026/04/dinapoli-op-ed-corporate-america-needs-come-clean-ais-impact-jobs
- SEC Investor Advisory Committee, "Artificial Intelligence Disclosure Recommendation," December 4, 2025: https://www.sec.gov/files/approved-artificial-intelligence-disclosure-recommendation-120425.pdf
- World Economic Forum, "The Future of Jobs Report 2025": https://www.weforum.org/publications/the-future-of-jobs-report-2025/
- Federal Reserve Bank of St. Louis, "Is AI Contributing to Unemployment? Evidence from Occupational Variation" (2025): https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation
- Anthropic CEO Dario Amodei, Fortune interview (May 2025), on entry-level white-collar job risk: https://fortune.com/2025/05/28/anthropic-ceo-warning-ai-job-loss/