The Agent Gap: Why the Biggest AI Opportunity Isn't in Engineering
Anthropic just mapped millions of agent interactions. The biggest finding isn't what agents can do; it's where they aren't.
Anthropic just published a study on millions of AI agent interactions. The finding that stopped me wasn’t about what agents can do. It was about where they are.
50% of all AI agents are deployed for software engineering.
Not 50% of tech companies. Not 50% of Silicon Valley. Half of all agents, everywhere. We built agents to write code, review code, test code, deploy code. And then we built more agents to write more code.
Meanwhile, doctors still fill out forms by hand. Lawyers still review contracts line by line. Supply chain managers still dig through thousands of emails looking for a rejected quotation worth six figures.
Here’s the thing: the biggest opportunity in AI isn’t making better coding agents. It’s everything else.
Why Engineering Got There First
It makes sense when you think about it. Code is structured. Code is reversible. If an agent writes bad code, you delete it and try again. If an agent files the wrong legal brief, someone’s in real trouble.
Anthropic’s data confirms this: only 0.8% of agent actions in software engineering are irreversible. It’s the safest playground imaginable. Low stakes, high iteration speed, fast feedback loops.
And engineers build tools for themselves first. Always have. The cobbler’s children finally have shoes.
But safe doesn’t mean only.
The Agent Gap
Here’s what the research actually reveals when you look past the engineering cluster: healthcare, finance, legal, supply chain, cybersecurity all show up in the data. Barely. Tiny clusters. Sparse activity.
Not because agents can’t work there. Because nobody’s built the bridges yet.
Think about what engineering agents actually do: read structured data, find patterns, take action, check results. That’s not a coding pattern. That’s a work pattern.
A healthcare scheduling agent reads patient data, finds open slots, books appointments, confirms with patients. A legal research agent reads case law, surfaces relevant precedents, flags conflicts. A supply chain agent reads thousands of procurement emails, spots rejected items worth re-quoting, surfaces revenue that was invisible before.
Same pattern. Different domain. The gap isn’t capability. It’s deployment.
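That read → find → act → check loop can be sketched as a domain-agnostic skeleton. Everything below is illustrative, not any real agent framework's API; the `AgentLoop` class and the toy procurement records are invented for this example.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical skeleton of the read -> find -> act -> check pattern.
# Every name here is an assumption, not a real agent framework API.

@dataclass
class AgentLoop:
    read: Callable[[], Iterable]      # pull structured records (tickets, emails, slots)
    find: Callable[[Iterable], list]  # surface the items that match a pattern
    act: Callable[[dict], dict]       # take one action per item
    check: Callable[[dict], bool]     # verify the result before moving on

    def run(self) -> list:
        results = []
        for item in self.find(self.read()):
            outcome = self.act(item)
            if self.check(outcome):
                results.append(outcome)
        return results

# Toy "supply chain" pass: the same loop, pointed at procurement data.
records = [{"id": 1, "status": "rejected"}, {"id": 2, "status": "approved"}]
loop = AgentLoop(
    read=lambda: records,
    find=lambda rs: [r for r in rs if r["status"] == "rejected"],
    act=lambda r: {**r, "requoted": True},
    check=lambda r: r.get("requoted", False),
)
print(loop.run())  # the rejected item, flagged for re-quoting
```

Swap the four callables and the same skeleton reads patient data and books slots, or reads case law and flags conflicts. The loop doesn't care about the domain; only the adapters do.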
The Guardrails Are Already Proven
This is the part most people miss. The safety infrastructure for agents in high-stakes environments isn’t hypothetical. It’s already been proven, in engineering, at scale.
Anthropic’s numbers:
- 80% of agent tool calls include safeguards like restricted permissions
- 73% have human-in-the-loop involvement
- Agents initiate clarification requests more often than humans interrupt them, especially on complex tasks
The research calls autonomy “co-constructed.” It emerges from three things: the model’s behavior, the user’s oversight strategy, and the product’s design. It’s not a dial you crank to “fully autonomous.” It’s a relationship you calibrate.
That’s exactly what high-stakes industries need. Not AI that replaces doctors or lawyers. Agents that work within guardrails, ask when they’re unsure, and keep humans in control of the decisions that matter.
Graduated permissions. Monitoring. Intervention points. Engineers built this playbook over the last two years. It’s sitting right there, ready to transfer.
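That playbook fits in a few lines. The sketch below is a minimal illustration of graduated permissions with a human intervention point; the action tiers, function names, and return strings are all assumptions for the example, not Anthropic's design or any real product's API.

```python
# Hypothetical sketch of graduated permissions with a human-in-the-loop gate.
# Tiers and names are assumptions invented for this example.

REVERSIBLE = {"read", "draft", "flag"}       # safe to run autonomously
NEEDS_APPROVAL = {"send", "book", "file"}    # high stakes: a human decides

def execute(action: str, payload: str, approve) -> str:
    """Run reversible actions directly; route the rest through a human."""
    if action in REVERSIBLE:
        return f"done: {action} {payload}"
    if action in NEEDS_APPROVAL:
        if approve(action, payload):         # the intervention point
            return f"done: {action} {payload}"
        return f"held: {action} awaiting human review"
    # Unknown action: the agent asks rather than guesses.
    return f"clarify: is '{action}' permitted?"

print(execute("draft", "contract summary", approve=lambda a, p: True))
print(execute("file", "legal brief", approve=lambda a, p: False))
```

The point of the sketch is the shape, not the code: the autonomy dial lives in which set an action belongs to and who gets asked, which is exactly the "co-constructed" relationship the research describes.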
Where the Real Value Lives
The next wave of AI companies won’t come from building a better Copilot. That market is crowded and getting more crowded by the day.
The next wave comes from people who understand two things at once: how agents work, and how a non-engineering industry works.
An engineer who spent five years in healthcare and understands agents will build something a pure AI researcher or a pure hospital administrator never could. A supply chain expert who’s seen how coding agents operate will recognize the exact same patterns in procurement workflows.
The intersection is where the value is.
And here’s what most engineers don’t realize: you already have this. Years of domain experience from previous jobs, consulting gigs, family businesses, side projects. That experience felt like career history. Now it looks like a competitive advantage.
The Window
Anthropic’s study is a snapshot of early 2026. The agent gap is wide open right now: software engineering sits at 50%, everything else barely registers. But gaps close. They always do. The question is who closes them.
If you’re an engineer, you already understand agents better than 95% of the working world. You’ve used them. You’ve calibrated trust with them. You know where they excel and where they break.
The question isn’t whether agents will expand beyond engineering. They will. The question is whether you’ll be the one building the bridges.
What industry do you understand deeply that agents haven’t touched yet?
That gap between your domain knowledge and agent capability isn’t a coincidence. It’s your opportunity.
I’m putting together a deeper breakdown of agent opportunity across 10 industries, with specific use cases, readiness signals, and the engineering patterns that transfer to each. If you want the full Agent Gap Playbook when it’s ready, subscribe and I’ll send it your way.



