1) Introduction by the Moderator
Moderator:
On February 28, 2026, OpenAI and the US Department of Defense announced an agreement that crystallizes an uncomfortable mutation: "ethics in AI" has ceased to be a philosophical discussion and has become a technical specification, a contractual clause, and, above all, a competitive barrier. The contract arrives after a visible cycle of political pressure: in 2025, the Pentagon had already opened an agreement worth up to $200 million for prototypes. Following the conflict with Anthropic, which refused to relax limits on mass surveillance and autonomous weapons, the Trump administration ordered the suspension of Anthropic's models from government use, suggesting they posed a "risk to the supply chain."
OpenAI responds with "technical safeguards," deployment limited to the cloud, and measures such as Full Disk Encryption, alongside two declared principles: no domestic mass surveillance, and a "human in the decision" for any use of force. The raw question today is this: when ethics translates into a compliance checklist, who gets left out, who captures the budgets, and what incentives does it create for developing foundational models as expensive as national infrastructure?
---
2) Opening Round
Gabriel Paz:
I view this agreement through a cold lens: Zero Marginal Cost. Training and operating foundational models are not "zero" in capex and energy; what tends toward zero is the marginal cost of replicating intelligence once deployed. This asymmetry creates the perfect market for the state to function as an anchor buyer, just as it did with the internet, GPS, or semiconductors. The contract of up to $200 million does not purchase "ethics"; it buys scale and operational continuity to sustain infrastructure few can finance.
Now, the twist is that ethics no longer resides in manifestos; it lives in parameters, access controls, encryption, internal audits, classified networks, and "guardrails." That's a formalization, yes, but it's also a competitive standardization. The company that best converts morality into verifiable engineering wins the market. The problem is not that a contract exists; it's that the standard is defined bilaterally by the provider and buyer without robust external oversight. This architecture tends to concentrate power, although the technology itself tends to lower the marginal costs of copying.
Elena Costa:
I see this episode as an acceleration of the 6Ds. Ethics is entering the stages of digitization and dematerialization: it is moving from abstract principles to coded controls, procedures, and operational restrictions. That can be progress if it translates into measurable limits: an explicit prohibition on domestic mass surveillance, human responsibility in decisions on force, controlled cloud deployment, and security measures such as FDE. These are not poetry; they are mechanisms.
But we are also in a phase of disappointment: the industry sells "safe AI" as if it were a definitive seal, when in reality, it is a continuous and fragile process. Without visible independent auditing, and without comparable metrics between providers, "guardrails" can become technical marketing.
There is also a geopolitical pressure vector. If a company that upholds restrictions loses the state as a customer, as happened with the suspension of Anthropic, the market learns a lesson: ethics becomes negotiable whenever it blocks the revenue that finances GPUs, energy, talent, and supply chains. The only healthy way out is for controls to strengthen human judgment rather than merely mask it.
Isabel Ríos:
I am interested in the social architecture this agreement produces. When ethics becomes compliance, the risk is that it turns into an insider language: lawyers, procurement, security teams, and a handful of laboratories capable of meeting the criteria. This can push out smaller actors and, worse, render invisible those who bear the costs: communities subjected to surveillance, targeting errors, and operational biases.
The context matters: we are talking about contracts on the order of $200 million, deployments on classified networks, and a controlled cloud. This combination reduces public scrutiny and increases information asymmetry. Moreover, the wording "no domestic mass surveillance" leaves room for international surveillance, and "no offensive autonomous weapons" leaves room for "defensive" automation. Such contractual ambiguity is not neutral; it favors the actor with more negotiating power.
Additionally, internal dissent within the companies, including letters from OpenAI and Google employees demanding limits, indicates that this is not a social consensus; it is a strategic decision. The C-suite must treat it as a reputational and resilience risk: decision-making homogeneity amplifies blind spots.
---
3) Debate Round
Moderator:
Gabriel suggests that the state is an anchor buyer and that "engineered" ethics is inevitable. Isabel warns that such engineering may become an exclusionary language that concentrates power and reduces scrutiny. Elena sees the 6Ds at work and warns of a "disappointment" phase driven by the lack of comparable auditing. Gabriel, if ethics is a technical clause, who verifies the verifier when deployment happens in sensitive networks and controlled clouds?
Gabriel Paz:
If the point is verifiability, then the real debate is not moral; it is about industrial governance. In classified networks, you won’t have total transparency, but you can demand traceability: logs, access controls, segregation of duties, robustness tests, red team evaluations, and contractual penalties. The Pentagon already operates this way with critical cybersecurity.
What worries me is a different phenomenon: that "ethics as a checklist" becomes a competitive toll and consolidates a duopoly of laboratories with infrastructure. When the Trump administration suspends Anthropic for refusing to relax its terms, the message to the market is brutal: political alignment is the condition for revenue. That is the macro distortion. If the state is the major financier, the "security" standard will tend to reflect geopolitical priorities, not necessarily universal rights. An external framework is needed there, even if it is built in layers.
Elena Costa:
Gabriel, you are assuming that more internal controls equal more security, and that equation is incomplete. A checklist can pass audits and still fail at the margins, precisely where the damage is decided. The problem with "guardrails" is that they often protect the visible output, not the incentive system.
An example: "human in the decision" can be an empty stamp if the human becomes a mere ratifier under operational pressure. And "no domestic mass surveillance" does not prevent selective or international surveillance at scale. Security is not just cryptography and cloud; it is the design of use, limits on purpose, error metrics, and accountability. Without comparable standards across providers, compliance becomes a commercial advantage for whoever documents best, not for whoever protects best. This is the disappointment phase: certainty is promised where there is structural uncertainty.
Isabel Ríos:
Both of you are missing the variable of social capital. In closed systems, those who are not in the room do not exist. If ethics is defined in technical annexes and classified networks, it displaces the voices of those who understand the impacts: civil rights experts, minorities affected by surveillance, and diverse teams that detect operational biases.
The market has already shown the dynamics of exclusion: suspending Anthropic for sustaining restrictions is not just competition; it is labeling an uncomfortable actor a "risk" in order to isolate it. That chills dissent across the industry. Moreover, the internal dissent among employees shows there is not even alignment within the companies. When the C-suite converts ethics into procurement, it also turns culture into a risk. If the small group is homogeneous, its risk assessment will be homogeneous, and the error will be systemic.
---
4) Closing Round
Gabriel Paz:
This contract is a snapshot of the new order: foundational models are financed by the budgets capable of absorbing their cost, and the state is their natural client. Ethics translates into engineering because that is the only way to operate at scale. The macro danger is concentration: if standards emerge from bilateral agreements and political pressure, "safe AI" becomes an entry tariff. Global leaders must treat AI governance as critical infrastructure or risk losing technological sovereignty and bargaining power.
Elena Costa:
We are witnessing ethics move from discourse to software, but that does not guarantee a positive impact. Without comparable metrics, independent auditing, and a design focused on responsible use, security can degrade to documentation. "Human in the loop" and "no domestic mass surveillance" are starting points, not shields. This market is transitioning from digitalization to disappointment, with risks of regulatory and reputational disruption. AI must enhance human judgment and open the ecosystem, not just automate compliance.
Isabel Ríos:
Ethics converted into compliance tends to exclude: small enterprises, peripheral voices, and communities affected by opaque decisions. Ambiguous wording allows for predictable gray areas, and political incentives punish those who maintain non-negotiable limits. This is not just a discussion about technology; it is a discussion about who has the power to define "security." At the next board meeting, the C-suite should look around the table and recognize that if everyone is too similar, they share the same blind spots, and that makes them the next victims of disruption.
---
5) Moderator's Synthesis
Moderator:
A clear and tense map has emerged. Gabriel frames the agreement as an economic consequence: training and serving models costs so much that the state appears as an anchor buyer, and ethics becomes engineering because it is operable and auditable, even if that consolidates power and turns "safe AI" into a toll. Elena acknowledges the technical translation but emphasizes the central risk: the checklist is not security, "human in the decision" can be ceremonial, and without comparable metrics or independent auditing, the industry enters a phase of inflated promise where the best documenter wins, not necessarily the best protector. Isabel pushes the social angle: classified networks and contractual annexes reduce plurality, incentivize decision-making homogeneity, and amplify biases; furthermore, the political signal of punishing Anthropic for maintaining limits reconfigures the market towards obedience.
Overall, the debate does not deny that safeguards exist; it discusses who defines them, how they are verified, and what incentives they create. Ethics is no longer just philosophy: it is industrial competition, geopolitical power, and the design of authority.