When AI Answers 911, Biases Also Dial In

Motorola Solutions just acquired a conversational AI startup for emergency dispatch centers. The unasked question is who designed those agents and whose voices are missing.

Isabel Ríos · April 10, 2026 · 7 min

The Financial Logic Behind the Acquisition Is Flawless. The Social Architecture, Less So.

On April 9, 2026, Motorola Solutions announced the acquisition of HyperYou, Inc., a startup specializing in conversational AI agents for emergency dispatch centers, known in the industry as PSAPs (Public Safety Answering Points). The move is no whim: dispatch centers in the United States operate, on average, at 75% of their staffing capacity, and more than two-thirds of the calls they receive are not real emergencies. That volume of non-critical calls overwhelms human operators, delays urgent responses, and turns every shift into a marathon of institutional fatigue.

Hyper's operational thesis is straightforward: deploy autonomous agents to handle non-critical calls, detect when a situation escalates (such as a broken-down car turning into a multi-car accident) and transfer the case to a human specialist in real time. The system also offers real-time translation. Motorola, for its part, plans to integrate the capability into its Command Center portfolio and its Assist AI suite, with human oversight controls explicitly built into the design.
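To make that handoff pattern concrete, here is a minimal sketch of the routing logic such an agent could follow: handle the call autonomously until either the assessed severity crosses an escalation threshold or the model's confidence drops, then transfer to a human dispatcher. Every name, threshold, and keyword below is a hypothetical illustration, not Hyper's or Motorola's actual implementation.

```python
# Minimal sketch of the routing pattern described above: an autonomous agent
# handles a call until it detects escalation or low confidence, then hands off
# to a human dispatcher. All names and thresholds are hypothetical.
from dataclasses import dataclass

ESCALATION_THRESHOLD = 0.7   # assumed severity above which a human takes over
CONFIDENCE_FLOOR = 0.5       # assumed confidence below which the agent defers

@dataclass
class CallAssessment:
    severity: float    # 0.0 (routine) to 1.0 (life-threatening)
    confidence: float  # model's confidence in its own assessment

def assess_utterance(utterance: str) -> CallAssessment:
    """Stand-in for per-utterance triage; a real system would call a trained
    classifier here, not a keyword count."""
    keywords = ("injured", "accident", "fire", "not breathing", "weapon")
    hits = sum(k in utterance.lower() for k in keywords)
    return CallAssessment(severity=min(1.0, 0.3 * hits),
                          confidence=0.9 if hits else 0.6)

def route_call(utterances: list[str]) -> str:
    """Return who ends up handling the call: 'agent' or 'human'."""
    for utterance in utterances:
        a = assess_utterance(utterance)
        # Escalate on severity (a breakdown turning into a pile-up) or on doubt.
        if a.severity >= ESCALATION_THRESHOLD or a.confidence < CONFIDENCE_FLOOR:
            return "human"
    return "agent"

if __name__ == "__main__":
    print(route_call(["my car broke down on the shoulder"]))  # stays with the agent
    print(route_call(["my car broke down",
                      "there was an accident, someone is injured and not breathing"]))  # human
```

The interesting design question is not the classifier itself but the two thresholds: where they sit, and who decides where they sit, determines which callers ever reach a human.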

From a purely financial perspective, the argument is solid. Motorola's software and analytics segment historically generates higher margins than its hardware business. Automating the non-critical load of PSAPs without proportionally increasing the public payroll is exactly the type of value proposition that wins multi-year government contracts. Hyper's CEO and co-founder, Ben Sanders, put it bluntly: "When someone calls for help, there cannot be a delay." The government technology market has been searching for precisely that kind of product for years.

So far, conventional analysis. Now comes the aspect that press releases systematically overlook.

An AI Agent for Emergencies Inherits the Biases of Its Trainers

Conversational AI systems do not emerge from a vacuum. They learn from historical data, which reflects human decisions made in specific contexts, by specific people, with specific blind spots. When the product in question decides whether a 911 call is an emergency, and whether to transfer it to a human operator, the margins of error are not abstract. They have physical consequences.

The literature on algorithmic bias in emergency services is not speculative. Predictive policing, automated triage, and resource dispatch systems have shown documented patterns of under-attending to communities less represented in training data. Calls from areas with a higher proportion of non-native English speakers, from historically underserved neighborhoods, or from people with communication difficulties tend to be misclassified by models trained on corpora that do not adequately represent them.

Motorola's press release mentions real-time translation as one of Hyper's capabilities. It is a notable advance. But translating language does not resolve the underlying issue: a model trained primarily on emergency calls from urban, English-speaking, middle-class areas will perform poorly on different communication patterns, even once the text is in English. Diversity of training data is not a matter of language diversity; it is a matter of diversity of contexts, of the ways people narrate a crisis, of the cultural codes with which someone describes needing help.

Motorola emphasizes that its Assist Agents operate within parameters defined by each agency, with human oversight built in. That is an important governance control, but it is insufficient if the agencies defining those parameters also lack diversity in their own decision-making bodies. Human control exercised by a homogeneous team over a system designed by another homogeneous team does not multiply perspective; it merely duplicates it.

The Invisible Cost of Design Teams Lacking Peripheral Input

Failures at scale in technology products rarely arise from technical errors. They come from unchallenged assumptions that no one in the design room had an incentive to question, because everyone shared the same mental map of the problem.

This is the structural fragility I am interested in auditing in the acquisition of Hyper. Not the quality of the team, which looks competent judging by the results it delivered before the acquisition, but the architecture of the networks from which it built its product. A startup's social capital is not just its network of investors or pilot customers. It is the breadth of perspectives circulating through its design iterations. A dense but homogeneous network, a group of people who know each other, think alike, and validate one another's assumptions, produces a product that covers only part of the problem it claims to solve.

In this case, the problem it claims to solve is universal: anyone, under any condition, in any community can call 911. The user base is radically heterogeneous. According to all available indicators in the public safety AI sector in the United States, the design teams are not.

Industry data are consistent: public safety tech companies with more diverse product teams see higher adoption rates in jurisdictions with diverse populations, not for ideological reasons but because their models fail less often at the edges of the problem. Networks that incorporate peripheral nodes, such as rural PSAP operators, bilingual dispatchers, and emergency coordinators in indigenous communities, produce more robust training sets and more complete product specifications. That translates directly into fewer misclassified calls and contracts that are harder for competitors to take away.

Homogeneity in AI design teams is not an ethics issue to be managed by the communications department. It is an engineering defect that gets paid for in the market.

Motorola Solutions now has the capacity to correct this before the product faces its first documented failure in an underrepresented community. It also has the incentives: contracts with local governments in the United States are subject to increasingly stringent public-service equity regulations. An AI dispatch system that shows patterns of differential under-attending will not survive a federal audit, no matter how accurate it is on average.
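What such an audit looks for is easy to show with numbers. The sketch below uses synthetic data and hypothetical group labels to compute the false-negative rate, meaning real emergencies the model classifies as non-critical, separately for each community: an aggregate miss rate that looks acceptable can hide a rate several times worse for the group the training data underrepresents.

```python
# Sketch of a disaggregated equity check: overall accuracy can look fine while
# the false-negative rate (real emergencies classified as non-critical, i.e.
# under-attending) is far worse for one community. All counts and group labels
# below are invented for illustration only.
from collections import defaultdict

# Each record: (community, was_real_emergency, model_flagged_as_emergency)
calls = (
    [("urban_english", True, True)] * 171      # 180 real emergencies, 9 missed
  + [("urban_english", True, False)] * 9
  + [("rural_non_native", True, True)] * 12    # 20 real emergencies, 8 missed
  + [("rural_non_native", True, False)] * 8
)

def false_negative_rates(records):
    """Share of real emergencies the model failed to flag, per community."""
    missed, total = defaultdict(int), defaultdict(int)
    for group, is_emergency, flagged in records:
        if is_emergency:
            total[group] += 1
            if not flagged:
                missed[group] += 1
    return {g: missed[g] / total[g] for g in total}

emergencies = [(e, f) for _, e, f in calls if e]
overall = sum(1 for e, f in emergencies if not f) / len(emergencies)
print(f"overall miss rate: {overall:.1%}")                    # 8.5%: tolerable "on average"
for group, rate in false_negative_rates(calls).items():
    print(f"{group}: {rate:.1%} of real emergencies missed")  # 5.0% vs 40.0%
```

An auditor reading those two lines of output does not care that the blended figure is in single digits; the eight-fold gap between groups is the finding.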

The Mandate That Motorola Has Yet to Issue

Integrating Hyper into Motorola's portfolio represents a long-term bet on automating the public first response line. The technical infrastructure appears solid. The business model makes sense. The risk is not in the technology; it is in the composition of the networks that will decide how this system is trained, calibrated, and audited in the coming years.

The scale of deployment that Motorola has in mind—PSAPs across the entire United States—means that failures will not be local. They will be systemic, replicated from jurisdiction to jurisdiction until some incident makes them irreversibly visible. This is not a catastrophic scenario; it is the documented history of every AI system that has been deployed at scale without diversity in the design team.

The smart strategic move is not to add an ethics committee after launch. It is to incorporate now, in the process of integrating Hyper, the people who operate on the periphery of the system: dispatchers working in PSAPs with chronic staffing shortages, emergency coordinators in rural or immigrant communities, operators handling calls under degraded communication conditions. Not as symbolic consultants. As nodes with real power over design decisions.

C-level leaders who sign contracts with Motorola Solutions in the coming months should demand transparency about the composition of those teams before deploying the system. And Motorola's leaders should look around the table at their next board meeting: if everyone deciding how this system is built comes from the same kind of background, the same kind of institution, and the same kind of network, they inevitably share the same blind spots about whom the system serves and whom it fails. That makes them first in line for responsibility when the failures arrive.
