Barcelona and Singapore appeared in the same press release, but the relevant factor was not geography. It was the strategic signal. On March 2, 2026, SynaXG announced benchmarks of a fully software-defined radio access network running on NVIDIA AI Aerial, executing 5G in Frequency Range 1 (FR1), 5G in Frequency Range 2 (FR2), and AI workloads simultaneously, with real-time GPU orchestration guided by policies. In terms of industrial promise, this aims to bridge the gap between two worlds that have historically viewed each other with skepticism: deterministic telecommunications and elastic computing.
The numbers they chose to publish are a statement of intent: on a single NVIDIA GH200 platform, the system operated 20 5G NR cells of 100 MHz, exceeding 36 Gbps of aggregated throughput, with sub-10 ms latency and support for up to 1,200 connected devices per cell. They also report an operator-grade virtualized RAN implementation in FR2 running concurrently with FR1 and AI workloads on shared GPUs, achieving end-to-end latencies as low as 5 ms. And to alleviate the operational anxiety of any telecom operator, they emphasize a detail that isn’t minor marketing: continuous operation 24x7 under sustained load.
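The published figures invite a quick back-of-the-envelope check. The sketch below assumes throughput is spread evenly across cells and that "aggregated" means the sum over all 20 carriers; neither assumption is stated in the release.

```python
# Back-of-the-envelope check of the published FR1 figures.
# Assumptions (not in the release): throughput is spread evenly
# across cells, and "aggregated" is the sum over all 20 cells.

cells = 20
bandwidth_mhz = 100          # per-cell NR carrier bandwidth
aggregate_gbps = 36          # published aggregate throughput
devices_per_cell = 1200      # published connected-device figure

per_cell_gbps = aggregate_gbps / cells
# Implied aggregate spectral efficiency per carrier, in bit/s/Hz
spectral_efficiency = per_cell_gbps * 1e9 / (bandwidth_mhz * 1e6)

print(f"Per-cell throughput: {per_cell_gbps:.1f} Gbps")
print(f"Implied spectral efficiency: {spectral_efficiency:.0f} bit/s/Hz")
print(f"Total devices across cells: {cells * devices_per_cell:,}")
```

Under those assumptions, each 100 MHz carrier sustains about 1.8 Gbps, an aggregate spectral efficiency on the order of 18 bit/s/Hz, which is plausible only when summed across multiple MIMO layers and users.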
This announcement is framed by a chorus of alliances and demonstrations at Mobile World Congress 2026: integration with Eridan’s FR1 radios, marketing collaboration with LITEON, and an integration role for Supermicro. SynaXG also positions itself within the work of the AI-RAN Alliance, contributing to AI-native architectures. The quick read is “another demo.” The executive read is different: a RAN that behaves like reconfigurable software, sharing GPU resources with AI, begins to erode the value of proprietary hardware.
The Test Is Not FR1 or FR2: It’s Coexistence Without Degradation
The most interesting part of the announcement is not that 5G runs on GPUs. That was already the technical horizon. The difference is the simultaneity: FR1, FR2, and AI workloads on the same computational substrate, without key performance indicators collapsing. RAN is a system obsessed with predictability, and for good reason: a millisecond here is not a detail; it’s user experience, radio planning, and effective capacity.

SynaXG claims it achieves operator-grade performance in FR1 on a single GH200, while also executing virtualized FR2 alongside AI. In business language, this means eliminating the old conflict of dedicated infrastructure: one cluster for RAN, another for inference, another for analytics, all with usage peaks and valleys. The real-time, policy-guided GPU orchestration suggests a mechanism for reallocating compute cycles on demand, something that in mobile networks has always been the dream and almost never the reality.
There’s a crucial nuance: the press release mentions maintaining throughput, latency, and stability. Stability is the word that separates “brilliant demo” from “operational revenue-generating service.” The explicit mention of operation 24x7 under sustained load indicates that the conversation is no longer solely about technological viability but about operational reliability. The transition from FR1 to FR2 also weighs in: FR2 increases radio complexity, planning, and latency requirements, and the fact that they present it as the “first concurrent operator-grade virtualization” alongside AI is a bet to break the prejudice that millimeter waves and virtualization cannot coexist.
At the same time, NVIDIA frames the achievement as evidence that a software-defined architecture can offer cloud-like agility without sacrificing operator metrics, including performance per watt. That is the economic nerve: if performance per watt holds, the argument stops being futuristic and becomes budgetary.
The Economics Behind the Record: From Rigid Assets to Multi-Use Infrastructure
When a telecom operator buys traditional RAN, they purchase capacity in the form of specialized hardware. It’s a rigid, amortizable asset, difficult to repurpose for other uses. The move toward a software-defined RAN on accelerated infrastructure hints at something uncomfortable for the status quo: it turns a historically single-purpose expense into a multi-purpose platform.

With the published data, SynaXG attempts to demonstrate that a single GH200 can centralize CU and DU and activate 20 carriers while also running AI. If that translates into real deployments, a new math emerges: the same computational investment can cover mobile demand and edge-inference demand. There are no savings figures in the announcement, and none should be fabricated, but the direction is clear: infrastructure consolidation and better utilization.
This also changes the type of risk. The classic risk of a network is overprovisioning and paying for idle capacity, or underprovisioning and degrading service. Computational elasticity, if truly deterministic in latency, mitigates that risk because it allows resources to be reallocated according to traffic patterns and AI loads. The word “policies” matters: it’s not anarchic elasticity; it’s resource reallocation governed by operational rules.
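None of the orchestration internals are public. As a purely illustrative sketch of what "resource reallocation governed by operational rules" can look like, the following treats RAN latency as a hard constraint and AI as the best-effort tenant; every name and threshold here is invented for the example.

```python
from dataclasses import dataclass

# Illustrative only: the real AI Aerial orchestration mechanism is not
# public. This sketches the *shape* of policy-governed reallocation:
# RAN latency targets are hard constraints, AI gets the remainder.

@dataclass
class Policy:
    ran_latency_budget_ms: float   # hard real-time bound for the RAN
    ran_min_gpu_share: float       # GPU share the RAN may never drop below
    ai_max_gpu_share: float        # ceiling for best-effort AI workloads

def reallocate(policy: Policy, observed_ran_latency_ms: float,
               current_ai_share: float, step: float = 0.05) -> float:
    """Return the new AI share of GPU compute under the policy."""
    if observed_ran_latency_ms > policy.ran_latency_budget_ms:
        # RAN KPI at risk: shrink the AI share immediately.
        new_share = max(0.0, current_ai_share - step)
    else:
        # Headroom available: let AI grow, within the policy ceiling.
        new_share = min(policy.ai_max_gpu_share, current_ai_share + step)
    # Never violate the RAN floor, whatever the AI demand.
    return min(new_share, 1.0 - policy.ran_min_gpu_share)

policy = Policy(ran_latency_budget_ms=10.0, ran_min_gpu_share=0.6,
                ai_max_gpu_share=0.4)
# Latency breach: the AI share backs off on the next control tick.
share = reallocate(policy, observed_ran_latency_ms=12.3, current_ai_share=0.4)
```

The asymmetry is the point: the policy lets AI expand only into headroom, while any breach of the RAN latency budget triggers immediate contraction. That is the difference between anarchic elasticity and elasticity an operator can sign off on.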
At the same time, a dependency emerges: if the RAN runs on a specific accelerated platform, the provider of that platform gains negotiating power. The announcement speaks of “write once, run anywhere” on CUDA platforms like GH200 and DGX Spark. Portability within the same technological family is useful, but it does not equate to total independence. For C-level executives, the point is not to moralize about dependency but to manage it: contracts, contingency paths, and an architecture designed to avoid lock-in.
The New Power Map: Less Hardware Monopoly, More Software Control
For decades, the RAN was territory where hardware ruled and software obeyed. This announcement pushes against that order: value shifts toward the software-defined L1/L2/L3 stack and the ability to orchestrate GPU resources precisely. SynaXG positions itself as full-stack and ready for commercial deployment, while NVIDIA remains the enabling platform with AI Aerial.

This type of change typically destroys monopolies slowly at first and abruptly later. Slowly at first, because it coexists with installed systems and because operators migrate not out of enthusiasm but for guarantees. Abruptly later, when operational evidence accumulates and finance teams discover that the marginal cost of new functions falls: network optimization, real-time analytics, local inference for industrial use cases, all running where the network once “lived.”
The partner layer also tells a story: Eridan appears as a radio integrator with its “Ultra-Clean Signal” platform; LITEON as a marketing partner focused on low-latency analytics and inference; Supermicro for integration. It’s the typical anatomy of a reordering industry: hardware modularizes, computing standardizes, and differentiation shifts toward software and operation.
The organizational risk for traditional incumbents is not that the technology doesn’t work. It’s that it works well enough to reconfigure purchasing decisions. Once an operator believes they can run RAN and AI on the same foundation, the procurement process shifts from buying “boxes” to buying computational capacity, software licenses, and operational support. That’s where margins change and winners emerge.
Augmented Intelligence at the Edge: Efficiency with Operational Insight
This announcement is marketed as a convergence of RAN and AI, but the human and operational impact depends on how the AI is leveraged. Running inference alongside the network can enhance planning, anomaly detection, energy optimization, and low-latency industrial experiences. It can also devolve into blind automation if the only goal is cost-cutting without redesigning processes and responsibilities.

The positive signal is that the announcement insists on determinism, policies, and sustained operation. That suggests an approach closer to “operational assistance” than to “autopilot.” In networks, the AI that adds value is the one that reduces the time between observation and action with traceability: why a GPU was reassigned, which KPI was prioritized, what limits were respected. The edge is not just a place to run models; it’s a place to make decisions with immediate consequences.
In business terms, the most powerful case is the enterprise edge: factories, ports, logistics, industrial security. There, having 5G connectivity and low-latency analytics in the same spot reduces integration friction and simplifies service level agreements. The promise of “no trade-offs” on network performance while running AI becomes a commercial enabler because it reduces the customer’s argument that analytics will degrade connectivity.
At the same time, this model demands a new discipline: model governance, secure updates, regression testing on latency and stability, and teams that understand both radio and GPU. The real scarcity lies not in hardware, but in hybrid talent and the capability to operate these systems without improvisation.
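Regression testing on latency is the part of that discipline that can be automated early. A minimal sketch of a tail-latency gate follows; the 10 ms and 5 ms budgets echo the published targets, while the percentile choice, function name, and sample data are invented for illustration.

```python
# Hypothetical latency regression gate. The budgets echo the published
# targets (sub-10 ms FR1, 5 ms end-to-end FR2); the percentile choice
# and sample data are invented for illustration.

def latency_gate(samples_ms: list[float], p99_budget_ms: float) -> bool:
    """Pass only if the 99th-percentile latency stays inside budget."""
    ordered = sorted(samples_ms)
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    return p99 <= p99_budget_ms

# Example: per-packet latencies collected during a soak test.
soak = [4.2, 4.8, 5.1, 6.0, 7.3, 8.9, 9.4] * 100 + [9.8]
assert latency_gate(soak, p99_budget_ms=10.0)      # within the FR1 budget
assert not latency_gate(soak, p99_budget_ms=5.0)   # misses the FR2 target
```

Gating on a tail percentile rather than the mean is deliberate: in a deterministic RAN, the outliers are the regression, and a soak-test gate like this is what turns "24x7 under sustained load" from a marketing line into a release criterion.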
The Market Direction Is Clear: Software That Demonetizes Rigidity
SynaXG claims it is ready for commercial deployment after demonstrating concurrent FR1, FR2, and AI workloads on NVIDIA AI Aerial infrastructure, with throughput and latency metrics that meet operator requirements and 24x7 stability. This pushes the RAN market into a phase where specialized hardware starts losing its premium and the differential repositions onto software, orchestration, and operation.

In terms of exponential dynamics, this category enters the mature digitalization of RAN and moves toward disruption of the proprietary hardware model, with initial signs of demonetization of functions that previously required dedicated equipment. Technology must consolidate connectivity and intelligence to empower human judgment in operation and to democratize access to advanced networks without reliance on rigid infrastructure.