The recent warning from Chris Hyams, former CEO of Indeed, carries an uncomfortable truth: the risk of artificial intelligence (AI) does not arise from the technology itself but from the people steering it. This is no minor rhetorical twist. In 2025, Hyams emphasized two ideas that coexist in tension: AI does not perform "complete jobs," but it can competently execute a substantial portion of the skills in most roles; and the central challenge is implementing it responsibly, because its impact on employment, housing, education, health, and justice could exacerbate existing inequalities.
Now the focus shifts from "how" to "who." That reframing diagnoses a problem of leadership and organizational design: in many companies, AI is being integrated as if it were just another software package, when in reality it is a lever that alters criteria, incentives, and controls. If the governance system is weak, AI merely accelerates bad, opaque, or shortsighted decisions. If it is robust, AI becomes a tool for higher productivity, better service, and less friction.
When the Problem Is the Driver, the Brake Isn't Technical
Hyams understands the labor market from the center of the board. Indeed operates precisely at the intersection where AI can create value or cause harm quickly: matching people with opportunities. In May 2025, he presented a finding that is as useful as it is easily misread: “there is not a single job” where AI can perform “all required skills,” but in about “two-thirds” of jobs, “50% or more” of those skills are tasks that current generative AI can do “reasonably well, or very well.” That statement has operational teeth: companies are not facing binary replacement but a significant slice of tasks that can change hands.

A rushed C-level executive turns that slice into a cost-cutting mandate. A serious C-level leader turns it into a redesign of work. The difference lies in governance: who defines which tasks are automated, by what criteria, within what limits, with what auditing, and who is held accountable when the system makes a mistake.
In January 2025, at Davos, Hyams also sketched the macroeconomic backdrop pushing for acceleration: “we are at the beginning of a race” between a shrinking workforce and the productivity gains AI could deliver. He even anticipated a temporal compression: “30 years of change” could be squeezed into “three or four years.” When that is the pace, the greatest risk is not a hallucinating model; it is an organization taking shortcuts because its incentive structure rewards speed over control.
The warning for 2026 fits a pattern I often observe in transformations: principles and committees are announced, yet daily execution is captured by quarterly urgencies. At that point, "accountability" becomes a document while product and operations teams push deployments. Technology does not determine that trade-off; internal power structures do.
AI as Portfolio Tension, Not an IT Project
In large companies, AI adoption typically enters through two doors. The first is efficiency: automating support, generating content, assisting developers, and powering internal analytics. The second is product: new customer features, better recommendations, improved matching, and less friction. In both cases, the classic error is to manage it as an IT project with a delivery date and a standard financial KPI.

Hyams’ insights on skills suggest something different: AI cuts across the “current revenue engine” and “operational efficiency” at the same time. Done well, it opens space for the “incubation” and “transformation” of capabilities. Done badly, it merely cuts costs in the short term and degrades the system in the long term.
That’s why his shift in emphasis matters. When a leader states that the risk lies with those driving the technology, he is pointing out that the typical failure happens not in the lab but in the field: systems are deployed without clear ownership, without decision traceability, and without a practical mechanism to halt them when they cause harm. In sectors like employment, "harm" is not abstract: a poorly calibrated filter can exclude qualified profiles, perpetuate historical biases, or create opacity that is difficult to audit.
Hyams has already addressed this point emphatically by describing responsible AI as