A public doctrine for the AI transition

AI power must be contestable.

AI will not simply arrive as a tool. It will arrive as power: power over work, wages, training, public services, markets, and rights.

The question is not whether AI should be used. It will be. The question is whether the people affected by AI systems can see them, challenge them, negotiate over them, and hold someone responsible when they cause harm.

Core principles

A doctrine of contestable power

The purpose of AI governance is not to freeze progress. It is to prevent technological transition from becoming unaccountable power.

Visible

Consequential AI systems should not reshape work, rights, or public services invisibly.

Contestable

People affected by AI decisions must have appeal, remedy, explanation, and responsible institutions.

Negotiable

Workplace-transforming AI should give workers standing, notice, consultation, and bargaining leverage.

Taxable

Where AI produces concentrated rents, those gains should fund transition, formation, and public resilience.

The short version

AI governance will happen anyway.

It will happen through lawsuits, procurement rules, labor disputes, liability rules, professional standards, antitrust cases, tax fights, public scandals, and sector regulation.

The only question is whether these scattered fights become a coherent system of contestability, or whether private architecture settles the transition first.

The doctrine is narrow: when AI changes power, make that change visible, contestable, negotiable, and taxable where it produces concentrated rents.

Full text

The Contestable AI Transition Doctrine

Making AI power visible, contestable, negotiable, and accountable before it hardens.

1. Define capture precisely

A transition is not captured merely because firms profit, industries concentrate, or incumbents influence regulation. Those are common features of market economies.

A transition becomes captured when technological advantage turns into durable social dependency under rules shaped by the beneficiaries themselves.

Capture exists when four conditions converge:

  1. Durable concentration — gains accumulate in a small set of firms, owners, platforms, or infrastructure providers and do not erode through competition.
  2. Dependency without exit — workers, consumers, firms, or public agencies become dependent on systems they cannot realistically leave, replace, audit, or contest.
  3. Weak voice or remedy — affected groups lack bargaining rights, appeal rights, transparency, or institutional representation.
  4. Rule-setting influence — the actors benefiting from the transition gain disproportionate influence over standards, procurement, liability, tax, labor rules, or enforcement.

These conditions should not operate as a strict all-or-nothing test. Capture may exist when some conditions are extreme even if others are partial. A sector may have nominal exit but no realistic exit. It may have formal voice but no effective remedy. It may have competition at the surface but dependency underneath.

The stronger the concentration, dependency, weak remedy, and rule-setting influence, the stronger the case for intervention.

This convergence is the AI-specific danger: not disruption alone, but disruption that becomes infrastructure before democratic institutions can challenge it.
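The graded test above can be sketched as a simple scoring heuristic. This is an illustration only: the four condition names come from the doctrine, but the numeric scale, the mean, and the thresholds are assumptions introduced here, not part of the text.

```python
from dataclasses import dataclass, astuple

@dataclass
class CaptureAssessment:
    # The four conditions from the doctrine, each rated 0.0 (absent) to 1.0 (extreme).
    durable_concentration: float
    dependency_without_exit: float
    weak_voice_or_remedy: float
    rule_setting_influence: float

    def case_for_intervention(self) -> float:
        # "The stronger the concentration, dependency, weak remedy, and
        # rule-setting influence, the stronger the case": here, a simple mean.
        scores = astuple(self)
        return sum(scores) / len(scores)

    def is_captured(self, threshold: float = 0.6) -> bool:
        # Not a strict all-or-nothing test: one extreme condition can signal
        # capture even when the others are only partial.
        return self.case_for_intervention() >= threshold or max(astuple(self)) >= 0.9
```

For example, a sector with one extreme condition (say, dependency without exit at 0.95) registers as captured even if the other three conditions score low, which is the "nominal exit but no realistic exit" case the text describes.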

2. Make consequential AI visible without burying small firms

Visibility must be proportional.

A small firm using AI to draft emails or summarize documents should not face the same burden as a hospital using AI in triage, a bank using AI in credit decisions, a platform using AI to manage workers, or a logistics group using AI to redesign thousands of jobs.

AI disclosure should be tiered:

  • Low-risk use requires minimal obligations.
  • Workplace-transforming use requires notice, consultation, and an impact description.
  • Rights-affecting use requires auditability, appeal, accountable officers, and external review.
  • Systemic use requires sector-regulator oversight, public reporting, competition review, and procurement scrutiny where public money is involved.

The most important battleground is the middle tier: workplace-transforming AI. This is where most real conflict will occur.

For that tier, the minimum rule should be:

  • advance notice before AI materially changes staffing, monitoring, workload, promotion, discipline, pay, or required skills;
  • an impact statement explaining which workflows change and which roles are affected;
  • consultation with workers, unions, works councils, or employee representatives where they exist;
  • written management response to objections;
  • transition measures where displacement or deskilling is plausible;
  • stronger review when changes are difficult to reverse.

This does not give workers a veto over every tool. It does give them standing when AI changes the conditions under which they work.
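The tiered scheme can be sketched as a classifier that maps a deployment's characteristics to a disclosure tier and its obligations. The tier names and obligation lists follow the text; the boolean inputs and the precedence ordering (systemic dominates rights, rights dominate work) are illustrative simplifications, not the doctrine's own test.

```python
from enum import Enum

class Tier(Enum):
    LOW_RISK = 1
    WORKPLACE_TRANSFORMING = 2
    RIGHTS_AFFECTING = 3
    SYSTEMIC = 4

def classify(affects_rights: bool, changes_work_conditions: bool,
             sector_wide: bool) -> Tier:
    # Precedence: systemic scope dominates, then rights, then work conditions.
    if sector_wide:
        return Tier.SYSTEMIC
    if affects_rights:
        return Tier.RIGHTS_AFFECTING
    if changes_work_conditions:
        return Tier.WORKPLACE_TRANSFORMING
    return Tier.LOW_RISK

# Obligations per tier, condensed from the doctrine's list.
OBLIGATIONS = {
    Tier.LOW_RISK: ["minimal obligations"],
    Tier.WORKPLACE_TRANSFORMING: ["notice", "consultation", "impact description"],
    Tier.RIGHTS_AFFECTING: ["auditability", "appeal", "accountable officers",
                            "external review"],
    Tier.SYSTEMIC: ["regulator oversight", "public reporting",
                    "competition review", "procurement scrutiny"],
}
```

A bank's credit-scoring model, for instance, would classify as rights-affecting even though it also changes how credit officers work, because the rights dimension takes precedence.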

3. Act early, but admit the asymmetry

Policy cannot wait for perfect proof. By the time the labor-market effect is uncontested, the workers may be gone, junior pathways may be broken, and firms may already be reorganized around automated systems.

But early intervention has its own danger. It can protect incumbents, freeze obsolete work, and block useful productivity.

The answer is not blind precaution. It is asymmetric precaution.

Regulatory interventions should be reversible where possible. But the doctrine must admit that the underlying harms are often not reversible. A displaced fifty-two-year-old professional does not simply “undisplace.” A generation that never entered a profession cannot be recovered later. A public agency that dismantles internal capacity after adopting a proprietary AI system may not be able to rebuild it quickly.

The more irreversible the potential harm, the lower the threshold for temporary safeguards. The more reversible the harm, the stronger the case for waiting, measuring, and adapting.

Both sides will claim irreversibility. Workers may argue that career destruction, deskilling, and loss of professional entry paths cannot be repaired later. AI firms may argue that delayed deployment also causes irreversible harm: missed diagnoses, worse education, denied access, slower services, or avoidable deaths.

The appropriate forum must compare harms explicitly:

  • Who is harmed?
  • How irreversible is the harm?
  • How many people are affected?
  • Is the AI system replacing capacity or adding capacity?
  • Are there less restrictive safeguards?
  • Can the intervention sunset automatically?
  • Who bears the burden if the prediction is wrong?

That is the difference between prudence and protectionism.
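The asymmetric-precaution rule can be expressed as a record of one side's answers to the forum's questions, plus a threshold that falls as irreversibility rises. The field names and the linear formula are assumptions made for illustration; the doctrine specifies the questions, not the arithmetic.

```python
from dataclasses import dataclass

@dataclass
class HarmComparison:
    # One side's answers to the forum's questions (illustrative fields).
    who_is_harmed: str
    irreversibility: float   # 0.0 fully reversible .. 1.0 fully irreversible
    people_affected: int
    replaces_capacity: bool  # replacing existing capacity vs adding new capacity

def safeguard_threshold(claim: HarmComparison, base: float = 0.5) -> float:
    # Asymmetric precaution: the more irreversible the potential harm,
    # the lower the evidentiary threshold for temporary safeguards.
    return base * (1.0 - claim.irreversibility)
```

Both sides' claims can be scored the same way, so a displaced profession's irreversibility claim and an AI firm's delayed-deployment claim are compared on one axis rather than asserted past each other.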

4. Govern outcomes first, legitimacy second, and know when they conflict

Most AI policy should focus on outcomes: safety, quality, discrimination, access, accountability, wages, displacement, market concentration, and surveillance.

But some domains require more than outcome measurement. Where AI affects liberty, coercion, bodily integrity, democratic participation, or fundamental rights, legitimacy matters even when accuracy improves.

A diagnostic AI may outperform clinicians in a narrow task. A policing model may identify patterns humans miss. A welfare model may be more consistent than individual caseworkers. These facts matter.

But accuracy does not exhaust legitimacy.

Where decisions affect rights, life, liberty, care, or democratic standing, there must be a human official, professional, judge, clinician, or public authority able to explain, override, hear appeals, and bear responsibility.

This creates a real tension. Sometimes democratic legitimacy will slow a technically superior system. Sometimes technocratic speed will produce better immediate outcomes. The doctrine does not pretend this conflict disappears.

It says only this: when power is coercive, intimate, or rights-defining, people must be able to contest it through human institutions.

5. Protect formation, not inherited routines

The old career ladder should not be frozen.

AI will change how people learn. Young lawyers, doctors, engineers, accountants, architects, analysts, and coders may not need to train exactly as their seniors did.

But society cannot allow firms to consume inherited expertise while destroying the pathways that produce future expertise.

Where AI eliminates junior work, the sector must show how new workers become competent.

That may require supervised AI-assisted practice, simulation, apprenticeships, certification, rotations, mentorship, public training centers, or protected entry-level roles.

The goal is not to preserve old routines.

The goal is to prevent a competence cliff.

6. Share gains conditionally and honestly

The earlier language of a “human dividend” is too broad if treated as a promise.

A dividend is justified only if AI produces large, durable, politically collectible rents. That may happen in some sectors. It may not happen everywhere. Competition may push gains to consumers through lower prices. Open models may erode rents. Some deployments may improve quality without producing a large taxable surplus.

So the fiscal rule should be conditional.

  • If AI produces broad competition and lower prices, the public gain may flow mainly through consumers.
  • If AI produces concentrated rents, those rents should be taxed.
  • If AI substitutes for labor, transition support comes before universal dividends.

Priority should be:

  1. transition support for directly affected workers;
  2. formation and retraining systems;
  3. portable benefits and wage insurance;
  4. broader public services;
  5. dividends only if the rent base is large, stable, and collectible.

Transition support should come first, but it cannot be unlimited. If support is open-ended, it can consume the entire fiscal base and leave nothing for formation, wage insurance, public services, or broader social guarantees.

So transition policy should distinguish:

  • short-term income replacement;
  • retraining or redeployment support;
  • wage insurance for downward mobility;
  • regional adjustment support;
  • retirement bridges for older displaced workers;
  • sector-specific formation funds.

The goal is not to compensate every loss forever. The goal is to prevent displacement from becoming social abandonment.

This is less dramatic than a manifesto. It is more honest.
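The conditional fiscal rule above can be sketched as a routing function over the two conditions the text names. The branch structure mirrors the doctrine; the function name and the exact output strings are illustrative assumptions.

```python
def route_gains(concentrated_rents: bool, substitutes_for_labor: bool) -> list[str]:
    # Ordered priorities: transition measures come before broader claims,
    # and dividends appear only at the end of the rent branch.
    priorities: list[str] = []
    if substitutes_for_labor:
        priorities.append("transition support for affected workers")
        priorities.append("formation and retraining systems")
        priorities.append("portable benefits and wage insurance")
    if concentrated_rents:
        priorities.append("tax concentrated rents")
        priorities.append("broader public services")
        priorities.append("dividends only if rent base is large and stable")
    else:
        priorities.append("gains flow to consumers through competition and prices")
    return priorities
```

The point the sketch makes concrete is that a dividend is never the first claim on the fiscal base: it appears only after transition, formation, and public services, and only on the concentrated-rents branch.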

7. Build freedom-compatible dependency

All economic systems create dependency.

Wage labor creates dependency on employers.
Debt creates dependency on creditors.
Platforms create dependency on infrastructure owners.
Markets create dependency on purchasing power.
State support creates dependency on public institutions.

The political question is not whether dependency exists. It is which dependencies preserve dignity, reciprocity, contribution, exit, and voice.

This is why state-mediated security cannot be designed as passive compensation for people deemed economically unnecessary. That would be politically fragile and morally corrosive.

A stronger settlement must preserve contribution where possible: through shorter workweeks, public employment, care work, civic service, local infrastructure, training, creative work, and portable benefits that allow people to refuse exploitation.

But the hardest case must be faced directly. If AI reduces aggregate labor demand enough that not everyone’s labor is needed in the market, then dignity cannot remain conditional on market usefulness.

In that world, some income must be unconditional.

That is dependency, but not all dependency is domination. A person with a secure public floor, exit rights, political voice, and access to public services may be less dominated than a person forced to accept any employer’s terms to survive.

The aim is not to eliminate dependency.

The aim is to make dependency compatible with freedom.

8. Prevent corporate capture and state capture

Redistribution is not automatically democratic.

If AI firms become powerful enough, they will influence the state that taxes, regulates, procures, and redistributes their gains. Public institutions can be captured too.

So anti-capture policy cannot rely only on formal rules. It needs countervailing power.

That means:

  • unions and professional associations with real membership;
  • adversarial journalism;
  • public-interest litigation;
  • state attorneys general and local regulators;
  • independent technical auditors;
  • whistleblower protection with real enforcement;
  • transparent procurement;
  • revolving-door restrictions;
  • public registers of AI lobbying;
  • academic access to study deployed systems;
  • competition enforcement with adequate staffing;
  • citizen appeal rights where AI affects public decisions.

Guardrails do not maintain themselves. They are maintained by organized actors with resources, standing, and incentives to fight.

Without that, anti-capture institutions become decorative.

This is larger than AI governance. AI exposes a broader problem: many democracies have weakened the institutions that make capture contestable.

9. Build plural AI infrastructure

Plural AI infrastructure means no society should depend on one model provider, one cloud, one chip supply chain, one data platform, or one closed ecosystem.

This requires instruments, not slogans:

  • public or publicly accessible compute capacity;
  • procurement rules that prevent vendor lock-in;
  • interoperability and portability requirements;
  • open standards for model switching and audit logs;
  • public-interest models for education, health, law, and public administration;
  • data trusts or data commons where appropriate;
  • academic and civil-society access to evaluate major systems;
  • support for open-weight models where they increase competition and resilience;
  • restrictions where openness enables serious harm, coercion, fraud, or abuse.

Open models are not automatically democratic. Closed models are not automatically illegitimate.

The test is whether the infrastructure increases contestability, resilience, competition, local capacity, and public oversight.

If intelligence becomes privately gated infrastructure, every sector becomes dependent on a few tollbooths.

Plural infrastructure is how that dependency is resisted.

10. Make fragmented governance learn

There will be no single grand AI settlement.

AI will be governed through sectors: employment, healthcare, finance, education, logistics, public administration, defense, media, insurance, and law.

That is realistic. It is also dangerous, because sector-by-sector governance often becomes sector-by-sector capture.

So fragmentation needs a coordination layer.

Not one super-regulator, but a learning system:

  • model laws;
  • shared audit standards;
  • cross-sector regulatory forums;
  • public AI observatories;
  • litigation networks;
  • academic monitoring;
  • procurement templates;
  • common definitions for accountability, explainability, and appeal;
  • regular reporting from sector regulators to legislatures;
  • public databases of AI failures, disputes, and enforcement actions.

Some of these tools can use existing institutions: courts, regulators, attorneys general, procurement offices, professional boards.

Other tools require new capacity: AI observatories, public failure databases, model-law networks, technical audit standards, litigation support networks, and cross-sector regulatory forums.

Coherent fragmentation is not automatic. It is an institution-building project.

The task is not to eliminate fragmentation.

The task is to make fragmentation cumulative rather than chaotic.

11. Name the agents

Mechanisms do not act. Institutions and people act.

The likely agents of AI governance are not one mass coalition. They are many smaller actors:

state attorneys general, labor unions in exposed sectors, public-interest lawyers, municipal procurement officers, insurance regulators, medical boards, teachers’ unions, writers’ and actors’ guilds, nursing associations, employment agencies, civil-rights litigators, competition authorities, courts, auditors, standards bodies, and sector regulators.

This doctrine is not a call for one revolution.

It is a map for many institutional fights.

Each fight will be narrower than the whole problem. Together they determine whether AI becomes contestable public infrastructure or unchallengeable private architecture.

12. Prefer democratic governance, while admitting its cost

Democracy is not magic.

Democracies can protect incumbents, exaggerate risks, underreact to harms, and make bad technology choices. Technocratic governance can sometimes be faster and more coherent.

But AI will reshape work, rights, public services, knowledge, and power. These are not merely technical questions. They are questions about how people live under institutions.

The cost of democracy is slowness, conflict, and imperfection.

The cost of excluding democracy is rule by systems people cannot contest.

This is the deepest tension in the doctrine: democratic governance may move too slowly, while AI deployment moves quickly. There is no clean theoretical solution.

The practical answer is to accelerate democratic capacity rather than bypass democracy entirely:

  • stronger regulators;
  • faster courts for AI-related harms;
  • emergency review powers with real sunset clauses;
  • public procurement leverage;
  • standing for affected groups before systems become irreversible;
  • public technical expertise;
  • legislative review of high-impact systems.

This is difficult. It may fail. But the alternative is worse: private architecture settling public questions before the public has acted.

Democracy must become faster without becoming decorative.

13. Embed public rules before private architecture becomes destiny

The problem is not institutionalization itself.

Some things should become institutional fact: appeal rights, audit rights, liability rules, interoperability standards, formation duties, procurement safeguards, and competition norms.

The danger is allowing private AI architecture to settle the political question before public rules exist.

Once workflows are redesigned, data locked in, vendors entrenched, junior roles erased, and public agencies made dependent on proprietary systems, reversal becomes difficult.

So the goal is not to keep AI from embedding.

The goal is to embed public rules first.

Closing thesis

AI governance will happen anyway — through courts, procurement rules, labor disputes, liability rules, professional standards, antitrust cases, tax fights, public scandals, and sector regulation.

The only question is whether these scattered fights become a coherent system of contestability, or whether private architecture settles the transition first.

When AI changes power, make that change visible, contestable, negotiable, and taxable where it produces concentrated rents.

The doctrine does not solve every implementation problem. It names the fights that must be made institutional before AI dependency hardens beyond democratic reach.

The purpose is not to stop intelligence from becoming artificial.

The purpose is to stop artificial intelligence from becoming unaccountable power.

Human beings are not obsolete.

Human beings are the purpose.

Institutional toolkits

Who can use this doctrine?

The doctrine is written for institutions already being forced to govern AI: regulators, attorneys general, courts, procurement officials, unions, professional associations, public-interest lawyers, and policy organizations.

Workers & unions

Use the doctrine as a bargaining framework.

  • AI notice clauses
  • Surveillance limits
  • Retraining rights
  • Productivity-sharing language

Regulators & AGs

Use the doctrine as an enforcement map.

  • Rights-affecting AI review
  • Appeal and remedy standards
  • Failure databases
  • Consumer protection hooks

Procurement officials

Use the doctrine as a vendor checklist.

  • Audit rights
  • Data portability
  • Vendor exit clauses
  • Accountable officer requirements
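The checklist above can be applied mechanically to a draft contract. A minimal sketch, assuming hypothetical clause identifiers (the four items come from the doctrine; the identifier strings are made up here):

```python
# The doctrine's four vendor-checklist items, as hypothetical clause identifiers.
REQUIRED_CLAUSES = {"audit_rights", "data_portability",
                    "vendor_exit", "accountable_officer"}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    # Return the checklist items absent from a draft contract,
    # so a procurement officer can flag gaps before signing.
    return REQUIRED_CLAUSES - contract_clauses
```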

Use cases

Where the doctrine can be tested

AI in workplace surveillance

Does the system change monitoring, discipline, workload, promotion, or pay? Workers need notice, limits, and standing.

AI in public procurement

Does the vendor create lock-in? Contracts should require audit rights, portability, exit clauses, and accountable officers.

AI in healthcare triage

Does the system improve outcomes while preserving appeal, clinical responsibility, and patient trust?

AI in hiring and screening

Does the system affect access to opportunity? It should be explainable, reviewable, and contestable.

AI in logistics operations

Does the tool redesign staffing, routing, workload, or exception handling? The Tier 2 workplace-transforming rules apply.

AI vendor lock-in

Does a public agency lose internal capacity or become dependent on one provider? Plural infrastructure and exit rights matter.
