Policy Snapshot

Giving citizens direct ownership stakes in AI infrastructure via equity

Scenario: Gradual Augmentation · All Scenarios · Rapid Automation

Scope: Near Term (Volatility Risks) · Medium Term (Transition Risks) · Long Term (Structural Risks)

Governance Level: Local · National · International

Target: Entrepreneurs · Displaced Workers

Primary Actor: Governments · Private Actors

Regulation & Market Design / Legal & Regulatory Frameworks

Regulatory Agencies

Dedicated regulatory bodies with the technical capacity to monitor high‑risk AI systems, model AI-driven economic structural change, forecast labor market disruptions, and stress-test social safety nets against scenarios of rapid automation.

What it is:

This approach focuses on building or adapting public institutions that can monitor, license, and constrain the deployment of AI systems, especially in safety‑critical or rights‑sensitive domains such as finance, healthcare, employment, and critical infrastructure. These agencies can be tasked with predictive modeling to anticipate which sectors, regions, and demographics face the highest risk of displacement before layoffs occur. By running simulations on AI capability curves and adoption rates, they can inform proactive policy adjustments, such as triggering automatic stabilizers and directing retraining funds to vulnerable regions.
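The kind of simulation described above can be sketched in miniature. The toy model below is purely illustrative, not an actual agency methodology: it assumes a logistic AI adoption curve, invented sector workforce sizes and automation-exposure scores, and an arbitrary 5% trigger threshold, then flags the years in which an automatic stabilizer (such as redirected retraining funds) would fire.

```python
# Toy displacement-forecast sketch. Every number here (adoption curve
# parameters, sectors, exposure scores, trigger threshold) is an
# invented assumption for illustration only.
import math

def adoption(year, midpoint=2030, steepness=0.8):
    """Assumed logistic AI adoption share (0-1) for a given year."""
    return 1 / (1 + math.exp(-steepness * (year - midpoint)))

# Hypothetical sectors: (workforce in thousands, automation exposure 0-1)
SECTORS = {
    "customer_support": (900, 0.7),
    "freight_logistics": (1500, 0.5),
    "healthcare": (2200, 0.2),
}

TRIGGER_SHARE = 0.05  # stabilizer fires if >5% of a sector is newly displaced in a year

def simulate(years):
    """Return (year, sector, displaced_thousands) alerts that cross the trigger."""
    alerts = []
    for year in years:
        new_adoption = adoption(year) - adoption(year - 1)  # adoption gained this year
        for name, (workforce_k, exposure) in SECTORS.items():
            displaced_share = new_adoption * exposure
            if displaced_share > TRIGGER_SHARE:
                alerts.append((year, name, round(displaced_share * workforce_k)))
    return alerts

print(simulate(range(2028, 2033)))
```

Even a model this crude shows the policy-relevant output shape: a ranked list of sector-years where displacement outpaces a threshold, which is what would feed decisions about where retraining funds go.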

Recommended Reading:
Susan Athey and Fiona Scott Morton

December 2025

The authors advise that governments create a digital regulatory agency charged with following developments in AI and conducting studies on topics such as safety, national security, effects on labor, and any other important issue, with a mandate to consider the impact of regulations on competition. Such a regulatory agency could be tasked to review mergers, alliances, investments, and contracts between parties in the AI stack.

Gillian Hadfield and Jack Clark

April 2023

Hadfield and Clark propose "regulatory markets" as an alternative to both traditional government regulation and industry self-regulation, addressing the twin challenges of governments' technical deficits and industry's democratic deficits. Under this model, governments would set outcome-based requirements (either metrics-based or principles-based), license private regulators who compete to provide regulatory services, and require AI companies to purchase regulatory services from these licensed entities. The authors argue this approach would drive investment in regulatory technologies that governments are unlikely to build themselves, while maintaining democratic accountability through government-set objectives and licensing standards. Private regulators could operate at global scale by obtaining licenses from multiple jurisdictions, potentially threading the needle between diverse national regulatory requirements and the borderless nature of AI deployment.

U.S. AI Action Plan

July 2025

As part of the U.S. AI Action Plan, the AI Workforce Research Hub is being established under the Department of Labor (DOL) to spearhead a federal effort evaluating AI's effect on the labor market and workers. In partnership with the Bureau of Labor Statistics (BLS) and the Department of Commerce (DOC), the Hub will produce ongoing analyses, engage in scenario planning to model various levels of AI impact, and create actionable insights to guide future workforce and educational policies.

Senator Jim Banks (R-Ind.) and bipartisan cosponsors

December 2025

The AI Workforce PREPARE Act establishes an AI Workforce Research Hub under the Department of Labor to implement the White House's AI Action Plan. The legislation enhances DOL's authority to hire AI experts, improves AI-related questions in federal surveys, and strengthens the Bureau of Labor Statistics' occupational projections. The Hub would convene researchers, technical experts, business, and labor representatives to improve data collection on AI's workforce impacts, while conducting prize competitions to better understand AI adoption patterns and how AI systems augment or automate tasks across occupations.

Real-world precedents:
  • The Congressional Office of Technology Assessment (OTA, 1972–1995) served a similar function, providing objective analysis of emerging technologies' impacts to legislators until its defunding.

  • In the financial sector, the Federal Reserve's stress tests offer a model for scenario planning by testing banks against hypothetical economic shocks to verify their solvency.
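The stress-test precedent translates directly to social safety nets. The sketch below is a hypothetical illustration, not a real supervisory model: it checks whether an imagined retraining fund, with invented balances, inflows, and benefit costs, stays solvent under adverse automation-shock scenarios, in the same spirit as bank stress tests.

```python
# Toy safety-net stress test. All figures (fund balance, inflows,
# benefit cost, displacement scenarios) are invented assumptions.

def stress_test(fund_balance, annual_inflow, scenarios, benefit_per_worker=12_000):
    """Return {scenario_name: solvent?} over each multi-year scenario.

    Each scenario is a list of displaced-worker counts, one per year.
    """
    results = {}
    for name, displaced_per_year in scenarios.items():
        balance = fund_balance
        solvent = True
        for displaced in displaced_per_year:
            balance += annual_inflow - displaced * benefit_per_worker
            if balance < 0:  # fund exhausted mid-scenario
                solvent = False
                break
        results[name] = solvent
    return results

SCENARIOS = {
    "baseline": [10_000, 12_000, 14_000],
    "rapid_automation": [50_000, 120_000, 200_000],
}

print(stress_test(fund_balance=500_000_000,
                  annual_inflow=300_000_000,
                  scenarios=SCENARIOS))
```

The point of the exercise, as with bank stress tests, is not the specific numbers but the discipline: a regulator publishes the adverse scenario in advance and verifies that the program survives it before the shock arrives.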

Securing humanity's AI future

© 2026 Windfall Trust. All rights reserved.