More AI Agents, More Human Work: The Unforeseen Paradox

Automation with AI agents doesn't free up cognitive time; it redistributes it. Box's CEO diagnoses the paradox with a clarity many executives have yet to reach.

Clara Montes · April 5, 2026 · 7 min read

A promise has reverberated through nearly every product presentation touting artificial intelligence since 2023: deploy enough autonomous agents, and your teams will finally focus on the work that matters. Automation will handle the rest. The problem with this promise is that it assumes cognitive work is a fixed volume that can be delegated away. It is not.

Aaron Levie, CEO of Box, articulates this with a clarity that few voices in the industry have dared to embrace: orchestrating multiple AI agents does not eliminate the human cognitive load; it merely transforms it. Where there was execution work, there is now oversight, coordination, and decision-making about systems operating at speeds the human brain cannot follow in real time. The net result is not less effort. It is different effort, and in many cases more demanding effort.

The Illusion of the Freed Manager

When an organization deploys one AI agent to manage document flows, another to analyze contracts, and a third to monitor regulatory compliance, the question the executive committee should ask is not how much time each agent saves individually. The right questions are who coordinates the three when their outputs contradict one another, who acts when one detects an anomaly the other two overlook, and by what human criteria it is decided which agent is right.

That is not a technical question. It is a governance question, and it falls on people.

The pattern described by Levie has a precise mechanism: as the number of agents grows, the complexity of their orchestration grows non-linearly. Two agents require a supervisory interface. Five agents need a protocol. Twenty agents require something resembling a parallel organizational structure, with its own hierarchies, escalation rules, and performance metrics. Someone needs to design that structure. Someone needs to maintain it. And when it fails, someone has to be accountable.
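
To make that concrete, here is a minimal sketch in Python of one such escalation rule. Everything in it is hypothetical (the agent names, the verdict vocabulary, the review path); it assumes no particular agent framework, only the principle that a contradiction among agents must land on a person.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentOutput:
    agent_id: str
    verdict: str        # e.g. "approve", "reject", "flag"
    confidence: float   # 0.0 to 1.0

def reconcile(outputs: list[AgentOutput],
              escalate: Callable[[list[AgentOutput]], str]) -> str:
    """Return a verdict only when all agents agree; otherwise escalate.

    `escalate` stands in for the human review path: the piece of the
    parallel structure that someone has to design, staff, and maintain.
    """
    verdicts = {o.verdict for o in outputs}
    if len(verdicts) == 1:
        return verdicts.pop()   # unanimous: stay on the automated path
    return escalate(outputs)    # contradiction: a person decides

# Three agents, one dissenter: the case leaves the automated path.
outputs = [
    AgentOutput("doc-flow", "approve", 0.92),
    AgentOutput("contracts", "approve", 0.88),
    AgentOutput("compliance", "flag", 0.71),
]
print(reconcile(outputs, escalate=lambda os: "sent to human review"))
```

The interesting line is the final branch. Unanimity is cheap to handle; the `escalate` callable is where the hierarchies, escalation rules, and performance metrics described above actually live.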

The companies that are painfully discovering this are precisely those that adopted agents under the logic of reducing headcount before understanding what real work they were eliminating and what new work they were creating. They purchased automation thinking they were buying simplicity. They obtained scale with built-in complexity.

What Got Automated Wasn’t the Problem

This is the part of the diagnosis most uncomfortable for product teams and digital transformation committees: most of the tasks that AI agents execute efficiently were not the tasks creating the costliest bottlenecks in the organization.

Agents excel at processing volume: sorting files, extracting structured data, drafting documents from known templates. These tasks are measurable, repeatable, and easy to evaluate. They are also, in many contexts, tasks that employees had already learned to execute quickly and with few errors. The work that truly consumes executive energy, the kind involving judgment under uncertainty, negotiation among parties with conflicting interests, or decisions without clear precedent, cannot be delegated to an agent. And yet it is precisely this work that multiplies when there are more agents to supervise.

The company that hires AI agents to free its top performers ends up, paradoxically, with those performers dedicated to monitoring machines instead of solving business problems. Displacement occurs, but in the direction opposite to what was promised.

This doesn’t mean that adopting agents is a strategic mistake. It means the measure of success has been poorly calibrated from the start. A company measuring the return on its agents in man-hours saved is measuring the wrong thing. The relevant metric is how much high-value cognitive work was unlocked for humans, not how much low-value work was absorbed by machines.

The Work That No One Was Hiring For

There is a pattern of organizational behavior that this situation reveals sharply. When companies adopt AI agents, the work they say they want to eliminate is operational and repetitive. The work they actually need someone to do, and which no one articulates clearly until the system fails, is the work of keeping distributed decisions consistent in real time.

That work has no name on any organizational chart. It is not budgeted as a specific function. And yet, when a chain of agents makes a hundred micro-decisions per hour on behalf of the company, someone must ensure that those decisions are coherent with one another, that they do not contradict business policy, that they do not expose the organization to regulatory risk, and that, when the system makes an error, the error has not propagated a hundred times before anyone detects it.
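
A rough sketch of what that unnamed job looks like, assuming nothing more than a log of agent decisions (all identifiers hypothetical): group decisions by the thing they apply to and surface the subjects on which agents diverged.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    agent_id: str
    subject: str   # what the decision applies to, e.g. an account
    action: str    # e.g. "approve", "reject"

def find_contradictions(log: list[Decision]) -> dict[str, set[str]]:
    """Surface subjects on which the agents took divergent actions.

    Nobody is hired to run this loop; without it, a single bad
    decision replicates silently across the chain.
    """
    by_subject: dict[str, set[str]] = {}
    for d in log:
        by_subject.setdefault(d.subject, set()).add(d.action)
    return {s: a for s, a in by_subject.items() if len(a) > 1}

log = [
    Decision("contracts", "acct-17", "approve"),
    Decision("compliance", "acct-17", "reject"),  # contradicts the approval
    Decision("doc-flow", "acct-22", "approve"),
]
print(find_contradictions(log))  # {'acct-17': {'approve', 'reject'}}
```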

The organizations managing this most soundly are not the ones that deployed the most agents first. They are the ones that invested time in mapping which decisions could be autonomous and which required human intervention before automating, not after. The distinction sounds obvious written down. In practice, under the pressure of adoption cycles and public commitments to digital transformation, it is systematically postponed.
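
One way to make that mapping explicit, sketched here with hypothetical decision types and no real policy engine behind it, is a decision-rights table consulted before any agent acts, with unmapped decisions defaulting to the most conservative mode.

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"          # agent decides; humans audit afterwards
    HUMAN_APPROVAL = "human_approval"  # agent proposes; a person signs off
    HUMAN_ONLY = "human_only"          # never delegated

# Hypothetical decision-rights map, written before any agent is deployed.
DECISION_RIGHTS: dict[str, Mode] = {
    "classify_document":      Mode.AUTONOMOUS,
    "extract_contract_terms": Mode.AUTONOMOUS,
    "approve_payment":        Mode.HUMAN_APPROVAL,
    "accept_regulatory_risk": Mode.HUMAN_ONLY,
}

def route(decision_type: str) -> Mode:
    # Anything never mapped defaults to humans: the mapping exercise,
    # not this function, is where the real work lives.
    return DECISION_RIGHTS.get(decision_type, Mode.HUMAN_ONLY)

print(route("classify_document"))   # Mode.AUTONOMOUS
print(route("terminate_contract"))  # Mode.HUMAN_ONLY (never mapped)
```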

Levie is not arguing against AI agents. He is pointing out that the promise of cognitive liberation assumes a model of work that does not match how organizations with real responsibilities actually operate. Cognitive work does not disappear when execution is automated; it migrates up the decision-making chain, where the consequences of getting it wrong are greater and the time available to correct them is shorter.

The Real Work That Companies Are Hiring For

The success or failure of AI agent strategies over the next two years will not depend on how many agents a company can deploy or on the technical sophistication of its architecture. It will depend on whether executive teams understand in time that what their organizations need is not task automation, but the ability to make coherent decisions at higher speed and with less internal friction.

This is the deeper need behind the massive adoption of agents. It is not operational efficiency. It is decision velocity with control. And that problem cannot be solved by any agent alone. It requires an organizational architecture that knows what to delegate, what to retain, and who is accountable when the autonomous system takes the wrong path.

Companies that hire agents to solve the first problem—efficiency—and neglect the second—governance of distributed decisions—will discover that they have scaled their capacity to make mistakes before scaling their ability to correct them.
