93% of AI Budget Goes to Technology, Not to People

Companies are building high-power engines while forgetting the driver. Data from Deloitte, Wharton, and Harvard uncover a critical oversight in AI investments.

Sofía Valenzuela · March 30, 2026 · 7 min read

There’s a kind of error in structural engineering that doesn’t show up on blueprints until the building is already standing. It’s called deferred load failure: the structure can support the initial weight, looks solid, but has ignored a critical variable that only manifests under real pressure. Companies investing in artificial intelligence (AI) today are making exactly that mistake, and the numbers document it with unsettling clarity.

According to data compiled by Deloitte, Wharton, and Harvard—gathered by Fortune—organizations allocate 93% of their AI budgets to technology and only the remaining 7% to the human factor: training, role redesign, change management, and adoption capacity. This is not just a quirk of corporate culture. It’s a capital allocation decision that is already generating negative returns on multiple fronts.

The dominant narrative around AI risk revolves around apocalyptic automation, mass job losses, or uncontrollable superintelligence. That story sells headlines but distracts from the failure already occurring inside organizations: it's not that AI is replacing people; it's that companies are deploying systems their own teams don't know how to, can't, or won't use.

When the Engine Outruns the Chassis

A Formula 1 engine mounted on a sedan chassis doesn’t produce a faster vehicle. It produces an unmanageable one. That’s the architecture that most companies are building when they deploy AI tools without redesigning the human processes around them.

The 93/7 imbalance is not just a poor budget decision. It reveals a fundamentally incorrect business hypothesis: the idea that technological adoption is automatic once the tool is installed. Any systems engineer knows that integration between components is invariably the point of greatest friction—not the component itself. The interface between the new part and the pre-existing system is where projects collapse.

Organizations are buying the most expensive component—licenses, infrastructure, models, security layers—and underfunding the critical interface: the person who must operate that component productively within a real workflow. The observable result is predictable: high-capacity tools with low adoption rates, pilot projects that don’t scale, and executives reporting frustration with the return on their AI investments without being able to diagnose exactly why.

This is not a technology crisis. It’s a systems integration crisis.

The Deferred Load No One Budgeted For

There’s a financial mechanism behind this imbalance that deserves to be audited coldly. When a company allocates an AI budget, the technological costs are visible, quantifiable, and easy to justify to a board: a contract with a vendor has a concrete number. Team training, process redesign, and organizational change management, on the other hand, produce deferred value and are difficult to attribute directly to a line on the balance sheet. CFOs approve what they can measure in the short term.

This budgetary logic creates a cost architecture with a clear structural failure: fixed expenses on technology accumulate from day one, while operational benefits—which depend on human teams adopting and operating the systems—come much later, if they come at all. The building consumes energy before anyone moves in, and no one has trained the tenants to use the heating.

The direct consequence is unit economics that deteriorate before they improve. The cost per unit of installed capacity rises because effective utilization is low. And when utilization is low, pressure falls on technology teams to justify the investment, which typically produces a counterproductive response: more tools, more layers of software, more technology spending. The cycle feeds on itself without ever addressing the right variable.

What the data from Deloitte, Wharton, and Harvard is pointing out is not a philosophical critique of technological capitalism. It is an audit of operational viability: the current investment model in AI has a structural bottleneck in the human component, and that bottleneck does not disappear with more technological investment.

The Component That Does Generate Measurable Returns

Organizations that are seeing concrete returns from their AI implementations share an architectural characteristic that the average market overlooks: they have treated human role redesign as a product investment, not as a human resources expense.

This has a precise operational implication. Investing in the human factor within an AI implementation does not mean offering an eight-hour course on how to use a new interface. It means redesigning the complete workflow—what decisions the machine makes, which are validated by the human, and which remain exclusively in the hands of the person—and then building the team’s capacity to operate within that redesigned flow. It is an exercise in organizational architecture, not training.

Companies that have executed that sequence correctly report something that others cannot show: AI amplifies the operator's productivity instead of creating a parallel layer of work, managing the tool on top of managing the original task. The difference between the two scenarios is not in the algorithm. It's whether someone redesigned the complete system before installing the new component.

This pattern also has a relevant commercial reading for companies that sell AI solutions to other organizations. The customer segment that generates the highest retention and least friction in adoption is not the one that purchased the most expensive license. It’s the one that contracted, in addition to the technology, structural support to integrate it. Software companies that have understood this have reconfigured their offerings: the product is not just the model; it’s the model plus the adoption process. This reconfiguration allows them to charge more, reduce churn rates, and generate recurring revenue from services that they previously gave away as support.

The Missing Piece in Every Boardroom Blueprint

The underlying diagnostic problem is that most organizations are evaluating their AI investments with the wrong indicators. They measure implementation speed, the number of deployed tools, or the share of users with access to the system. None of those metrics captures the variable that determines whether the investment generates value: the effective adoption rate with a measurable impact on productivity per unit.

A company that has deployed AI in 80% of its teams but records active productive use by only 20% of employees does not have a strategic asset. It has underutilized infrastructure with full fixed costs. The 93/7 budget ratio is the source of that outcome, not a coincidence.
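The arithmetic behind that contrast is easy to make concrete. A minimal sketch, using the article's 80%/20% figures but with an assumed budget and headcount (the dollar amount and employee count below are illustrative, not from the Deloitte, Wharton, or Harvard data):

```python
# Illustrative sketch of the utilization math: fixed technology costs
# divided by deployed seats vs. by people who productively use the tools.
# Budget and headcount are hypothetical; only the 80%/20% shares come
# from the article's example.

annual_ai_spend = 1_000_000   # fixed technology cost, dollars/year
employees = 1_000
deployed_share = 0.80          # teams with access to the tools
active_share = 0.20            # employees using them productively

cost_per_deployed_seat = annual_ai_spend / (employees * deployed_share)
cost_per_productive_user = annual_ai_spend / (employees * active_share)

print(f"Cost per deployed seat:   ${cost_per_deployed_seat:,.0f}")
print(f"Cost per productive user: ${cost_per_productive_user:,.0f}")
```

With these assumed numbers, the nominal cost per deployed seat is $1,250, but the effective cost per productive user is $5,000: the same fixed spend, spread over one quarter of the people, quadruples the real unit cost. That gap, not the license price, is what the 93/7 allocation produces.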

Organizations that want to correct that deferred load do not need a new AI strategy. They need to review the blueprints of the one they already have and find where they overlooked the operator. Business models do not collapse due to a deficit of ideas or a shortage of available technology; they collapse when the pieces of the system are not designed to work together and generate measurable value at every point of contact in the process.
