Takeover Tracker

Want vs. Get: Where Workers Actually Want AI

The hype framing says workers fear AI automation. The data says the opposite: most workers want AI to take routine tasks off their plate. Shao et al. surveyed 1,500 workers across 104 occupations, collecting ratings for 844 tasks on two dimensions: how much workers want AI to handle each task (rated by the workers themselves) and whether AI actually can (assessed by AI experts). The four-zone grid below is what emerged.

Key finding: workers gave a positive automation-desire rating to 46.1% of surveyed tasks (the “Green Light” and “R&D Opportunity” zones together). Yet only 1.26% of Claude.ai usage today lands on the ten occupations with the highest automation-desire scores, and 41% of Y Combinator-funded AI agent investments concentrate in the “Red Light” and “Low Priority” quadrants. The AI economy is pointed at the wrong tasks.

  • 46.1%: tasks where workers want AI (positive automation-desire rating)
  • 1.26%: share of Claude.ai usage on the top-10 highest-desire occupations
  • 41%: YC AI agent investment in the Red Light and Low Priority zones
  • 844: tasks rated, across 104 occupations by 1,500 workers

The desire × capability landscape

844 tasks · 104 occupations · 1,500 workers
Source: Shao et al. (2026), WORKBank database, arXiv:2506.06576 · AI Takeover Tracker (www.aitakeovertracker.com)

[Quadrant chart: worker desire for automation on the horizontal axis, AI capability (low to high) on the vertical axis]
High desire · Low capability

R&D Opportunity

Workers want AI help, but today’s models can’t reliably deliver. The frontier where new capability unlocks genuine worker-welfare gains.

Where AI research pays off. Solving these tasks is both wanted and valuable — the opposite of the hype-driven investment pattern.

Example tasks
  • Debugging complex distributed systems
  • Drafting defensible legal arguments end-to-end
  • Planning multi-step fieldwork around changing conditions
High desire · High capability

Automation “Green Light”

Workers want AI to handle these tasks, and current models already can. Prime candidates for deployment that create both productivity gains and worker welfare.

Low-friction wins. Rolling AI out here frees human time for higher-value work rather than triggering resistance.

Example tasks
  • Transcribing and summarizing meeting notes
  • Drafting boilerplate emails and status updates
  • Categorizing and tagging routine records
Low desire · Low capability

Low Priority

Workers don’t want automation and AI can’t provide it anyway. Not worth building for — yet capital still flows here alongside the Red Light zone.

Safe for now. These are the tasks least disrupted by near-term AI deployment.

Example tasks
  • In-person mentoring and team coaching
  • Navigating organizational politics
  • Creative ideation with deep context
Low desire · High capability

Automation “Red Light”

AI can do the task, but workers don’t want it to. Still, 41% of Y Combinator-funded AI agent investments concentrate here or in the Low Priority zone.

The zone where displacement resistance is highest. Deploying here invites pushback, quality backlash, and regulatory risk.

Example tasks
  • Interpreting patient emotional cues
  • Making hiring and performance-review judgments
  • Representing clients in sensitive, high-stakes matters

Why this matters for AI job risk

Our Capability Coverage Index scores AI's ability to perform occupational tasks. The Shao et al. framework adds the second axis Takeover Tracker hasn't measured directly: whether workers in those occupations actually want that help. A task sitting in the “Green Light” zone faces different labor-market pressure than one sitting in the “Red Light” zone — even if our CCI scores them identically on capability.

The practical takeaway: the tasks you want AI to take are usually not the tasks your employer will pay AI to take. Worker-welfare-aligned automation (Green Light) is a rounding error in today's AI usage patterns. Investment capital is chasing capability-feasibility first, worker-demand second. That's why the labor-market impact in our Age Gap insight is showing up as displacement rather than augmentation for early-career workers — the automation that arrives first is often the Red Light kind.

The framework also reframes “AI risk.” A high-capability, high-desire task is technically automatable but low-stress for the worker — it's the part of the job they wanted to hand off anyway. A high-capability, low-desire task is the one that actually displaces identity, meaning, and livelihood. Task-level AI risk needs both axes to tell the full story.
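The two-axis framing can be sketched as a simple classifier. This is an illustrative sketch only: the 0–1 score range and the 0.5 midpoint are assumptions for the example, not the paper's actual scales (WORKBank uses its own rating instruments, including the Human Agency Scale).

```python
def zone(desire: float, capability: float, midpoint: float = 0.5) -> str:
    """Map a task's normalized desire/capability scores (assumed 0-1)
    to one of the four quadrants in the desire x capability grid.
    Thresholds are illustrative, not the paper's methodology."""
    wants_ai = desire >= midpoint      # workers want AI to take the task
    ai_can = capability >= midpoint    # current models can handle it
    if wants_ai and ai_can:
        return "Green Light"           # deploy: wanted and feasible
    if wants_ai and not ai_can:
        return "R&D Opportunity"       # capability gap worth closing
    if not wants_ai and ai_can:
        return "Red Light"             # feasible but unwanted: expect resistance
    return "Low Priority"              # neither wanted nor feasible

print(zone(0.8, 0.9))  # -> Green Light
print(zone(0.9, 0.2))  # -> R&D Opportunity
```

The point of the sketch is that capability alone (the vertical axis) never determines the zone: the same high-capability score lands in Green Light or Red Light depending entirely on the desire axis.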

Sources

  • Shao et al. (2026), “Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce,” arXiv:2506.06576. Introduces the WORKBank database — 1,500 domain workers paired with AI-expert capability assessments across 844 O*NET tasks in 104 occupations — and defines a Human Agency Scale (H1–H5) for each task.
  • The 1.26% Claude.ai usage figure is measured on conversations from December 2024 – January 2025, comparing the top-10 highest-automation-desire occupations to the overall usage distribution. The 41% startup-investment figure is the share of Y Combinator-funded AI agent company-task mappings that fall into the Red Light and Low Priority quadrants combined, per the paper's analysis of YC cohort data.
  • Example tasks in each quadrant above are illustrative, drawn from the O*NET task categories emphasized in the paper's worked examples; they are not a verbatim list of tasks from the paper's published tables.
  • Our Capability Coverage Index uses its own task-level scoring grounded in O*NET tasks and Anthropic's observed Claude usage data. See our full methodology for details.