Visible
Consequential AI systems should not reshape work, rights, or public services invisibly.
A public doctrine for the AI transition
AI will not simply arrive as a tool. It will arrive as power: power over work, wages, training, public services, markets, and rights.
The question is not whether AI should be used. It will be. The question is whether the people affected by AI systems can see them, challenge them, negotiate over them, and hold someone responsible when they cause harm.
Core principles
The purpose of AI governance is not to freeze progress. It is to prevent technological transition from becoming unaccountable power.
Consequential AI systems should not reshape work, rights, or public services invisibly.
People affected by AI decisions must have appeal, remedy, explanation, and responsible institutions.
Workplace-transforming AI should give workers standing, notice, consultation, and bargaining leverage.
Where AI produces concentrated rents, those gains should fund transition, formation, and public resilience.
The short version
AI governance will happen anyway: through lawsuits, procurement rules, labor disputes, liability rules, professional standards, antitrust cases, tax fights, public scandals, and sector regulation.
The only question is whether these scattered fights become a coherent system of contestability, or whether private architecture settles the transition first.
The doctrine is narrow: when AI changes power, make that change visible, contestable, negotiable, and taxable where it produces concentrated rents.
Full text
Making AI power visible, negotiable, and accountable before it hardens.
A transition is not captured merely because firms profit, industries concentrate, or incumbents influence regulation. Those are common features of market economies.
A transition becomes captured when technological advantage turns into durable social dependency under rules shaped by the beneficiaries themselves.
Capture exists when four conditions converge:
Power over a critical capability concentrates in a few hands.
Affected people and institutions depend on that capability without realistic exit.
Those harmed have no effective remedy.
The rules governing the capability are shaped by the beneficiaries themselves.
These conditions should not operate as a strict all-or-nothing test. Capture may exist when some conditions are extreme even if others are partial. A sector may have nominal exit but no realistic exit. It may have formal voice but no effective remedy. It may have competition at the surface but dependency underneath.
The stronger the concentration, dependency, weak remedy, and rule-setting influence, the stronger the case for intervention.
That is the AI-specific danger: not disruption alone, but disruption that becomes infrastructure before democratic institutions can challenge it.
Visibility must be proportional.
A small firm using AI to draft emails or summarize documents should not face the same burden as a hospital using AI in triage, a bank using AI in credit decisions, a platform using AI to manage workers, or a logistics group using AI to redesign thousands of jobs.
AI disclosure should be tiered:
Tier 1, low-risk use, requires minimal obligations.
Tier 2, workplace-transforming use, requires notice, consultation, and an impact description.
Tier 3, rights-affecting use, requires auditability, appeal, accountable officers, and external review.
Tier 4, systemic use, requires sector-regulator oversight, public reporting, competition review, and procurement scrutiny where public money is involved.
The most important battleground is Tier 2: workplace-transforming AI. This is where most real conflict will occur.
For that tier, the minimum rule should be: no system that materially changes monitoring, discipline, workload, pay, or job structure may be deployed without advance notice, genuine consultation, and a written impact description.
This does not give workers a veto over every tool. It does give them standing when AI changes the conditions under which they work.
Policy cannot wait for perfect proof. By the time AI's labor-market effects are beyond dispute, the workers may be gone, junior pathways may be broken, and firms may already be reorganized around automated systems.
But early intervention has its own danger. It can protect incumbents, freeze obsolete work, and block useful productivity.
The answer is not blind precaution. It is asymmetric precaution.
Regulatory interventions should be reversible where possible. But the doctrine must admit that the underlying harms are often not reversible. A displaced fifty-two-year-old professional does not simply “undisplace.” A generation that never entered a profession cannot be recovered later. A public agency that dismantles internal capacity after adopting a proprietary AI system may not be able to rebuild it quickly.
The more irreversible the potential harm, the lower the threshold for temporary safeguards. The more reversible the harm, the stronger the case for waiting, measuring, and adapting.
Both sides will claim irreversibility. Workers may argue that career destruction, deskilling, and loss of professional entry paths cannot be repaired later. AI firms may argue that delayed deployment also causes irreversible harm: missed diagnoses, worse education, denied access, slower services, or avoidable deaths.
The appropriate forum must compare these harms explicitly: which are truly irreversible, how severe they are, and who bears them.
That is the difference between prudence and protectionism.
Most AI policy should focus on outcomes: safety, quality, discrimination, access, accountability, wages, displacement, market concentration, and surveillance.
But some domains require more than outcome measurement. Where AI decisions involve coercion or touch liberty, bodily integrity, democratic participation, or fundamental rights, legitimacy matters even when accuracy improves.
A diagnostic AI may outperform clinicians in a narrow task. A policing model may identify patterns humans miss. A welfare model may be more consistent than individual caseworkers. These facts matter.
But accuracy does not exhaust legitimacy.
Where decisions affect rights, life, liberty, care, or democratic standing, there must be a human office: a professional, a judge, a clinician, or a public authority capable of explanation, override, appeal, and responsibility.
This creates a real tension. Sometimes democratic legitimacy will slow a technically superior system. Sometimes technocratic speed will produce better immediate outcomes. The doctrine does not pretend this conflict disappears.
It says only this: when power is coercive, intimate, or rights-defining, people must be able to contest it through human institutions.
The old career ladder should not be frozen.
AI will change how people learn. Young lawyers, doctors, engineers, accountants, architects, analysts, and coders may not need to train exactly as their seniors did.
But society cannot allow firms to consume inherited expertise while destroying the pathways that produce future expertise.
Where AI eliminates junior work, the sector must show how new workers become competent.
That may require supervised AI-assisted practice, simulation, apprenticeships, certification, rotations, mentorship, public training centers, or protected entry-level roles.
The goal is not to preserve old routines.
The goal is to prevent a competence cliff.
The earlier language of a “human dividend” is too broad if treated as a promise.
A dividend is justified only if AI produces large, durable, politically collectible rents. That may happen in some sectors. It may not happen everywhere. Competition may push gains to consumers through lower prices. Open models may erode rents. Some deployments may improve quality without producing a large taxable surplus.
So the fiscal rule should be conditional.
If AI produces broad competition and lower prices, the public gain may flow mainly through consumers.
If AI produces concentrated rents, those rents should be taxed.
If AI substitutes for labor, transition support comes before universal dividends.
Priority should be: transition support for displaced workers first; formation, wage insurance, and public services next; universal dividends only if large, durable rents remain.
Transition support should come first, but it cannot be unlimited. If support is open-ended, it can consume the entire fiscal base and leave nothing for formation, wage insurance, public services, or broader social guarantees.
So transition policy should distinguish temporary displacement, which time-limited support can bridge, from durable exclusion, which requires longer-term guarantees.
The goal is not to compensate every loss forever. The goal is to prevent displacement from becoming social abandonment.
This is less dramatic than a manifesto. It is more honest.
All economic systems create dependency.
Wage labor creates dependency on employers.
Debt creates dependency on creditors.
Platforms create dependency on infrastructure owners.
Markets create dependency on purchasing power.
State support creates dependency on public institutions.
The political question is not whether dependency exists. It is which dependencies preserve dignity, reciprocity, contribution, exit, and voice.
This is why state-mediated security cannot be designed as passive compensation for people deemed economically unnecessary. That would be politically fragile and morally corrosive.
A stronger settlement must preserve contribution where possible: through shorter workweeks, public employment, care work, civic service, local infrastructure, training, creative work, and portable benefits that allow people to refuse exploitation.
But the hardest case must be faced directly. If AI reduces aggregate labor demand enough that not everyone’s labor is needed in the market, then dignity cannot remain conditional on market usefulness.
In that world, some income must be unconditional.
That is dependency, but not all dependency is domination. A person with a secure public floor, exit rights, political voice, and access to public services may be less dominated than a person forced to accept any employer’s terms to survive.
The aim is not to eliminate dependency.
The aim is to make dependency compatible with freedom.
Redistribution is not automatically democratic.
If AI firms become powerful enough, they will influence the state that taxes, regulates, procures, and redistributes their gains. Public institutions can be captured too.
So anti-capture policy cannot rely only on formal rules. It needs countervailing power.
That means resourced unions, funded public-interest litigators, independent regulators, professional bodies, and civic organizations with the standing to contest capture.
Guardrails do not maintain themselves. They are maintained by organized actors with resources, standing, and incentives to fight.
Without that, anti-capture institutions become decorative.
This is larger than AI governance. AI exposes a broader problem: many democracies have weakened the institutions that make capture contestable.
Plural AI infrastructure means no society should depend on one model provider, one cloud, one chip supply chain, one data platform, or one closed ecosystem.
This requires instruments, not slogans: procurement rules that preserve alternatives, interoperability and portability standards, audit and exit rights in contracts, competition enforcement, and public capacity in compute and data.
Open models are not automatically democratic. Closed models are not automatically illegitimate.
The test is whether the infrastructure increases contestability, resilience, competition, local capacity, and public oversight.
If intelligence becomes privately gated infrastructure, every sector becomes dependent on a few tollbooths.
Plural infrastructure is how that dependency is resisted.
There will be no single grand AI settlement.
AI will be governed through sectors: employment, healthcare, finance, education, logistics, public administration, defense, media, insurance, and law.
That is realistic. It is also dangerous, because sector-by-sector governance often becomes sector-by-sector capture.
So fragmentation needs a coordination layer.
Not one super-regulator, but a learning system: shared evidence, public failure reporting, model laws that travel between jurisdictions, common audit standards, and forums where sector regulators compare results.
Some of these tools can use existing institutions: courts, regulators, attorneys general, procurement offices, professional boards.
Other tools require new capacity: AI observatories, public failure databases, model-law networks, technical audit standards, litigation support networks, and cross-sector regulatory forums.
Coherent fragmentation is not automatic. It is an institution-building project.
The task is not to eliminate fragmentation.
The task is to make fragmentation cumulative rather than chaotic.
Mechanisms do not act. Institutions and people act.
The likely agents of AI governance are not one mass coalition. They are many smaller actors:
state attorneys general, labor unions in exposed sectors, public-interest lawyers, municipal procurement officers, insurance regulators, medical boards, teachers’ unions, writers’ and actors’ guilds, nursing associations, employment agencies, civil-rights litigators, competition authorities, courts, auditors, standards bodies, and sector regulators.
This doctrine is not a call for one revolution.
It is a map for many institutional fights.
Each fight will be narrower than the whole problem. Together they determine whether AI becomes contestable public infrastructure or unchallengeable private architecture.
Democracy is not magic.
Democracies can protect incumbents, exaggerate risks, underreact to harms, and make bad technology choices. Technocratic governance can sometimes be faster and more coherent.
But AI will reshape work, rights, public services, knowledge, and power. These are not merely technical questions. They are questions about how people live under institutions.
The cost of democracy is slowness, conflict, and imperfection.
The cost of excluding democracy is rule by systems people cannot contest.
This is the deepest tension in the doctrine: democratic governance may move too slowly, while AI deployment moves quickly. There is no clean theoretical solution.
The practical answer is to accelerate democratic capacity rather than bypass democracy entirely: more technical capacity inside public institutions, faster rulemaking, temporary safeguards where harm may be irreversible, and procurement used as immediate leverage.
This is difficult. It may fail. But the alternative is worse: private architecture settling public questions before the public has acted.
Democracy must become faster without becoming decorative.
The problem is not institutionalization itself.
Some things should become institutional fact: appeal rights, audit rights, liability rules, interoperability standards, formation duties, procurement safeguards, and competition norms.
The danger is allowing private AI architecture to settle the political question before public rules exist.
Once workflows are redesigned, data locked in, vendors entrenched, junior roles erased, and public agencies made dependent on proprietary systems, reversal becomes difficult.
So the goal is not to keep AI from embedding.
The goal is to embed public rules first.
AI governance will happen anyway — through courts, procurement rules, labor disputes, liability rules, professional standards, antitrust cases, tax fights, public scandals, and sector regulation.
The only question is whether these scattered fights become a coherent system of contestability, or whether private architecture settles the transition first.
When AI changes power, make that change visible, contestable, negotiable, and taxable where it produces concentrated rents.
The doctrine does not solve every implementation problem. It names the fights that must be made institutional before AI dependency hardens beyond democratic reach.
The purpose is not to stop intelligence from becoming artificial.
The purpose is to stop artificial intelligence from becoming unaccountable power.
Human beings are not obsolete.
Human beings are the purpose.
Institutional toolkits
The doctrine is written for institutions already being forced to govern AI: regulators, attorneys general, courts, procurement officials, unions, professional associations, public-interest lawyers, and policy organizations.
Use the doctrine as a bargaining framework.
Use the doctrine as an enforcement map.
Use the doctrine as a vendor checklist.
Use cases
In the workplace: Does the system change monitoring, discipline, workload, promotion, or pay? Workers need notice, limits, and standing.
In procurement: Does the vendor create lock-in? Contracts should require audit rights, portability, exit clauses, and accountable officers.
In healthcare: Does the system improve outcomes while preserving appeal, clinical responsibility, and patient trust?
In finance and education: Does the system affect access to opportunity? It should be explainable, reviewable, and contestable.
In logistics: Does the tool redesign staffing, routing, workload, or exception handling? The Tier 2 workplace-transforming rules apply.
In public administration: Does a public agency lose internal capacity or become dependent on one provider? Plural infrastructure and exit rights matter.
Human beings are not obsolete. Human beings are the purpose.