When the Economy Meets AI: What Happens Next?

Findings from a High-Level Workshop on Economic Preparedness

Our Scenarios Program convenes economists, AI researchers, policymakers, and civil society leaders to explore the implications of transformative AI and to identify the policy responses each future demands. The workshops are complemented by the Policy Atlas, a continuously updated guide to policy options.

We believe that countries should develop Economic Preparedness Plans. Our workshops and the Atlas are designed to provide the foundation for doing so.

Danny Buerkli

Brandon Jackson

Enormous resources are being channelled into building more powerful AI systems. Far less attention is going to what happens to economies, workers, and governments when those systems arrive.

That gap is what brought more than 70 senior economists, AI researchers, civil servants, and policy experts to London in February 2026 — not to debate "what if", but to stress-test the UK's ability to respond.

The answer, after a day of rigorous, candid discussion, was clear: we are not prepared. But what preparation needs to look like is coming into focus. The workshop was co-convened with economist Daniel Susskind; the Stanford Digital Economy Lab and the Economics of Transformative AI Initiative at the University of Virginia served as academic partners.

The Scenario

Participants were placed inside a plausible scenario called "Crossing the Threshold," set in October 2030 — a near future where advanced AI collides with existing fractures, pushing society into uncharted territory. The scenario was designed not as a prediction but as a tool for creative policy problem-solving. It built on insights developed at a prior workshop in September 2025 in San Francisco, held with Erik Brynjolfsson, Ajay Agrawal, and Anton Korinek and attended by 50+ of the world’s leading economists.

In the scenario, frontier AI can perform complex cognitive tasks autonomously. Nearly a third of top earners have lost their jobs or seen their roles dramatically downgraded. GDP is growing at 4%, but the growth is hollow — the high earners being displaced were also the consumers who powered the economy. The top 20% historically paid 75% of all income tax, and that base is eroding fast.

Working in eight groups, each with the same objective — strengthen the UK social contract — participants were asked to prepare the Prime Minister for a press conference and agree on a three-point plan to address what the country faces. The workshop was held under the Chatham House Rule, meaning participants could speak freely. What follows reflects the themes and conclusions that emerged rather than the views of any individual.

The Challenges

Despite coming from different disciplines and institutions, participants reached several shared conclusions about how the scenario would affect the UK:

Society's central promise — work hard, get ahead — is under threat. The scenario described a world where AI devalues expertise. Participants were less concerned about its unemployment impact than about the long-run implication: the collapse of the narrative that education leads to advancement. This would constitute a betrayal, and the room reacted accordingly. The feeling was that if the scenario occurred, restoring this bargain would become a political imperative as urgent as saving lives during a pandemic. How might the state restore confidence in the social contract in such a high-uncertainty environment?

The UK's current strengths could quickly become fiscal vulnerabilities. Britain's leading economic sectors — professional services, finance, knowledge work — are precisely the sectors most exposed in the scenario. If AI-generated value concentrates in a handful of predominantly US-headquartered firms, national tax bases built on labour income are fundamentally undermined. This would not be a temporary shortfall but a structural shift, requiring governments to rethink not just how they tax but what they tax. How might the state increase the resilience of its tax base before a fiscal crisis strikes?

Democracy itself needs active protection. Participants grew more concerned about democratic resilience as the day progressed. In a world where AI capability concentrates in a small number of private actors, new pathways emerge for influence over public institutions. The question was not whether democratic systems would survive but whether they would survive intact and legitimate. How might a capacity-constrained state maintain the independence required to face these challenges in the public interest?

Policy Responses Worth Exploring

Eight groups each presented the Prime Minister with a three-point plan. Well-established ideas like reskilling programmes, tax reform, sovereign wealth funds, and universal basic income made appearances, but the scenario prompted groups to explore new territory. Five ideas emerged that deserve further exploration.

Clear public communication. One group's first priority was not a policy but "a big national explainer" — levelling with the public about what is happening to the economy and why. If people do not understand the disruption, no policy response will have the mandate to survive contact with reality. The communication challenge is as urgent as the policy challenge.

Revaluing work around social contribution. The concept of "key workers," familiar from the pandemic, was reframed: if we are redesigning incentives we should strive to value care, education, and community-facing roles close to their social worth, not their market price. One group treated this as the basis for a permanent reorientation — a "New New Deal" in which the state invests in human-centred work as a source of meaning and social cohesion.

Universal capital ownership. Multiple groups concluded that redistribution through taxation alone will not be enough if labour's share of income falls permanently. Citizens will need a direct stake in the capital that replaces their labour — through state equity in AI firms, universal investment accounts, or requirements that companies set aside equity for broad-based ownership. The argument was political as much as economic: ownership changes the dynamics of a transition in ways that redistribution alone cannot.

AI-enhanced deliberative democracy. Several groups explored whether AI — the same technology reshaping work — might also deepen democracy by dramatically reducing the cost of large-scale citizen participation. Taiwan's use of AI-facilitated civic dialogue to surface consensus across large, diverse populations was cited as an early example of what this could look like in practice. The policy responses required by a transition this large may need new forms of democratic mandate to sustain them — and AI-enhanced deliberation could be an essential tool for building that mandate at scale.

Fiscal reimagination. The current tax base — built overwhelmingly on labour income — cannot fund a state through a transition in which labour's share is falling. Participants went beyond "tax capital more" to propose an international token tax on AI computation, a shift toward taxes on immovable assets like land, broader consumption-based approaches, and mechanisms to discourage non-productive uses of AI-generated wealth. The common thread: a tax system that penalises non-productive accumulation rather than taxing workers’ dwindling pay packets.

Fault lines

The policy responses above reflect where the room converged. But the day also exposed genuine fault lines — unresolved questions that any serious policy response will eventually have to confront.

Do the people who need to act actually believe this is coming? This was the uncomfortable truth beneath every other discussion. When the question was raised directly — how many political leaders actually take these timelines seriously? — the answer was sobering. The revealed preferences of most governments suggest they do not believe transformative AI is imminent. This matters enormously, because every policy response discussed during the day presupposes that governments act before the disruption arrives. If the people who need to act don't believe the premise, the window for preparation closes unused.

Is work necessary for human flourishing? One camp argued that work provides structure, purpose, and identity — and thus that the state should guarantee employment even if AI renders human labour economically unnecessary. The other questioned whether preserving work might delay a genuinely liberating transition to something better. This is a philosophical question rather than a technical one, and the answer shapes almost every other policy choice.

Can the state be trusted to manage a transition this fast and this large? Proposals ranged from nationalizing AI infrastructure and banning US AI companies to lighter-touch approaches emphasizing market incentives and reskilling. COVID loomed large on both sides — as proof that emergency mobilization is possible, and as a cautionary tale about dependency and the limits of furlough. A notable tension emerged: the same participants who argued for ambitious state intervention often expressed deep uncertainty about whether governments currently have the institutional capacity to deliver it.

Does GDP growth mean anything if the cost of living doesn't fall? The scenario assumed 4% GDP growth, but participants challenged whether this would translate into improved lived experience. Housing, energy, food — the essentials that determine whether people feel better or worse off — may not become more affordable even under massive productivity gains. The question of who captures those gains, and in what form, may matter more than the size of this growth.

Are we managing a crisis or designing a new system? Groups frequently conflated short-term crisis management — preventing collapse during a potentially difficult transition — with long-term system design focused on what a genuinely good AI economy looks like. These are different problems requiring different thinking, and the gap between them may itself represent a generation-long challenge rather than a single policy pivot.

Windfall Trust's view

The fault lines above are not failures of the conversation; they are the most important questions the policy community needs to resolve. Our Scenarios Program exists precisely to create the conditions in which serious people can work through them rigorously rather than avoid them.

What this means — for the UK and beyond

A closing reflection from one participant captured the mood of the day well: the future is not going to look like the past. And the transition is going to be hard.

But the day also demonstrated something more hopeful: that serious people, given the right framework and the right room, can move quickly from abstract anxiety to concrete thinking.

The UK has many reasons to be optimistic. It has distinguished itself before in responding to disruptive change — Beveridge designed the welfare state, the Warnock Committee navigated the ethics of IVF. The UK has a democratic culture with deep institutional roots that has repeatedly found ways to absorb shocks that might have broken other systems.

It is time once again for the UK to take the lead. At Bletchley, Britain led internationally on AI safety. What is now needed is equivalent ambition on AI's economic and social consequences: the institutional capacity, policy frameworks, and international coalitions needed to manage what AI does to economies, labour markets, and the fiscal foundations of democratic states.

That work needs to begin now because the window for preparation is shorter than most policymakers currently assume.

We are continuing to work with organisations present at the workshop to deepen this thinking — and we are bringing the same questions to Brussels, Paris, Washington D.C., New York, and Seoul in the months ahead.

What comes next — and where you fit in

The questions this workshop surfaced do not have settled answers yet. That is the point. Windfall Trust's Scenarios Program exists to create the conditions in which serious people can work through them rigorously — before the moment of crisis rather than during it. If what you have read here feels urgent, there is a role for you in that work regardless of where you are coming from.

If you are a policymaker or civil servant, the most important thing you can do right now is not wait for consensus to form before acting. The window for building institutional capacity — the analytical frameworks, the fiscal stress tests, the cross-departmental preparedness plans — is open now and will not stay open indefinitely. We designed this program to give governments practical tools for that preparation, and we are actively looking to work with administrations that want to move from awareness to readiness. If your government, department, or institution wants to run a workshop, commission a scenario, or develop an Economic Preparedness Plan tailored to your context, we want to hear from you.

If you are a researcher, economist, or civil society organisation working on any of the questions this document raises — labour markets, fiscal resilience, democratic accountability, the future of work — we are building the network that connects people doing serious work across these domains. Windfall’s Policy Atlas is a continuously updated guide to the policy options available to governments navigating AI's economic consequences. If your work belongs in this conversation, it probably does, and we would like to bring it in.

If you found this through curiosity rather than professional necessity — if you are simply someone who wants to understand what is actually being done about AI's economic consequences — that matters too. The case for economic preparedness will not be won in elite rooms alone. It will be won when enough people understand what is at stake and expect their institutions to act. The best way to stay involved and informed is through our newsletter, the AGI Social Contract, where we publish concrete policies and strategies for navigating the era of AI-driven economic transformation — written for anyone who wants to follow the work seriously, not just the specialists.

Newsletter

The future isn’t something that happens to us — it’s something that we create.

Get news & updates from the Windfall Trust.


Securing humanity's AI future

© 2026 Windfall Trust. All rights reserved.
