One of the core visions of 5G is edge computing, sometimes referred to as Mobile Edge Computing (MEC). The idea is to enable Mobile Network Operators (MNOs) to deploy computing resources closer to the base stations, achieving ultra-low latency and allowing them to charge premium fees for premium services.
Five years on, MNOs have still not deployed edge computing at any significant scale. A handful of operators have instead partnered with hyperscale cloud providers, sending data to those providers' cloud facilities rather than standing up computing resources of their own.
The current mainstream model remains that telecom networks transmit data to “peering points” (commonly known as Internet Exchange Points), where multiple Internet Service Providers and networks connect to exchange traffic. From there, data is sent to massive data centers for processing, and responses are returned via the same path.
Why has the concept of edge computing failed to gain traction?
Edge computing promises to reduce latency by shortening the distance required for data processing. However, the speed of fiber optic data transmission is approximately 200km/ms. A data center located more than 100km away will only add 1ms to the response time.
Today's 5G networks typically exhibit latency of 30 to 40 ms, with private networks achieving around 10 ms at best. Data processing itself usually takes several milliseconds, especially for complex tasks such as video compression. Against those numbers, shaving 1 ms off the response time by moving compute closer to the device is insignificant.
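The arithmetic above can be made concrete with a short sketch. The figures used here (200 km/ms fiber propagation, 30 ms air-interface latency, 5 ms processing) are the illustrative numbers from the text, not measurements:

```python
# Sanity check on the latency argument: how much does data-center distance
# actually contribute to end-to-end response time? (Illustrative figures only.)

FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber travels at roughly 2/3 of c

def round_trip_fiber_delay_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber for a given one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

def total_response_time_ms(network_latency_ms: float,
                           processing_ms: float,
                           distance_km: float) -> float:
    """Crude end-to-end estimate: air interface + processing + fiber haul."""
    return network_latency_ms + processing_ms + round_trip_fiber_delay_ms(distance_km)

# Compute at the base station vs. a data center 100 km away,
# assuming 30 ms of 5G air-interface latency and 5 ms of processing.
edge = total_response_time_ms(network_latency_ms=30, processing_ms=5, distance_km=0)
regional = total_response_time_ms(network_latency_ms=30, processing_ms=5, distance_km=100)
print(edge, regional)  # 35.0 36.0
```

The fiber leg contributes 1 ms out of roughly 35, which is the article's point: distance to the data center is a rounding error next to air-interface and processing delays.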
Moreover, in most regions a large city, and with it a hyperscale data center, already sits within 1 ms. These large facilities offer better economies of scale, and hyperscalers are far more competitive than MNOs at selling computing services. Today's "edge" solutions therefore largely follow the traditional model: mobile network operators haul traffic to peering points, and processing happens in hyperscale data centers.
Private 5G networks are an exception: they route traffic into the IT systems of the private network's owner. While this technically meets the definition of edge computing, functionally it is just another form of peering, connected to a local IT network rather than the internet.
Recently, discussion of MEC has quieted down as MNOs have realized that edge computing is neither a service they can viably deploy nor an attractive revenue opportunity. In fact, MNOs are moving in the opposite direction, consolidating baseband processing from multiple base stations into centralized units rather than pushing computing resources out to the network edge.
The Future of Edge Computing: Can 6G Bring a Breakthrough?
Currently, it is unclear what exactly 6G will look like. Mobile operators advocate "pure software" upgrades to reduce operating costs, while equipment vendors promote a faster, lower-latency "super 5G". Concepts such as sensing and AI-native capabilities are also under discussion, but whether new spectrum will be allocated remains uncertain.
For edge computing to gain momentum, several conditions need to be met:
- New applications requiring latency below 5ms
- Willingness to pay a premium for such ultra-low latency services
- Additional spectrum allocation to support low-latency air interfaces
- Sufficiently widespread 6G deployment to enable edge computing across the region
Currently, all of these seem unlikely to be realized.
Most proposed 6G applications merely restate promises that 5G made and has yet to fulfill. Consumers and businesses show little appetite for paying a premium on 5G service fees, and acquiring additional spectrum for 6G is becoming increasingly difficult. Indeed, in some regions mid-band 5G (3.5 GHz) has reached only about 20% deployment, which suggests 6G coverage will be even more limited.

Will AI and Sensing Technologies Change the Game?
Sensing and artificial intelligence are the two new applications most frequently discussed in the context of 6G. Market demand for sensing applications remains unclear, and implementing sensing in 6G would likely require high-frequency spectrum that is poorly suited to communication needs.
Artificial intelligence applications typically require fast response times, which might appear to justify edge computing. However, most AI workloads either run directly on mobile devices (e.g., AI assistants, visual processing) or demand high-performance processing best done in large data centers. Few AI applications require millisecond-level responses, and there is no strong market demand to pay extra for the additional 1 ms that edge computing would save.
Some industry figures suggest that MNOs sell their "idle" computing resources for AI workloads, leveraging unused capacity in their baseband processing, but this idea is flawed. Hyperscale data centers also have spare capacity during off-peak hours, leaving no gap for MNO compute to fill. Moreover, dynamically reallocating computing power between AI workloads and radio network functions is highly complex.
Therefore, transferring AI workloads to MNOs adds little value and is unlikely to succeed.
The Edge Remains in the Cloud
The telecom industry has long tried to compete with hyperscalers by offering value-added services, usually with little success. MNOs have struggled to grow in areas dominated by hyperscalers and OTT service providers, and edge computing is no exception.
In recent years, it has become evident that the true "edge" remains concentrated in hyperscale data centers in major urban hubs, not at the network edge.
William Webb

William Webb is an IEEE Fellow, a director of the Motability Foundation, and the former head of the UK Communications Authority (Ofcom).
(Editor: Franklin)
