1 Overview
Edge computing refers to the migration of IT resources such as computing and storage from traditional cloud data centers to the user side, shortening the physical distance between users and those resources. This lowers data-interaction latency and saves backbone traffic, providing users with low-latency and highly stable IT services. Edge computing relies on edge data centers to deliver these capabilities. Edge data centers and traditional cloud data centers address different application needs and complement each other: scenarios that require low latency or bandwidth savings can adopt edge solutions, while traditional workloads that are neither latency-sensitive nor bandwidth-hungry can continue to run in traditional cloud data centers.
2 Demand Analysis
2.1 Demand for 5G
From the late 1990s to the early 2000s, rising fixed-line bandwidth and other technological advances gave rise to portal websites and microblogs, and data volumes began to grow. With the rollout of 3G and 4G, social apps such as WeChat and short-video apps such as Douyin and Kuaishou became popular, marking the rapid development of the mobile internet. During this phase, China’s data centers, especially cloud data centers, grew substantially, with projects launched nationwide.
In June 2019, the Ministry of Industry and Information Technology officially issued 5G commercial licenses. As of early February 2020, the three major operators had put approximately 156,000 5G base stations into service across the country. On March 6, China Mobile announced its centralized procurement of 5G Phase II wireless network main equipment for 2020, formally opening bidding for a total of 232,143 base stations across 28 provinces, autonomous regions, and municipalities. Recently, Wang Zhiqin, deputy director of the China Academy of Information and Communications Technology, predicted in an article titled “Accelerating 5G Network Construction to Ignite a New Engine for Digital Transformation” that “by the end of the year, more than 550,000 5G base stations will be opened nationwide.”
These 550,000 5G base stations, a number that will keep growing, will generate astronomical amounts of data. Relying solely on centralized storage and computing would place significant pressure on cloud data centers, especially for the many Internet of Things (IoT) scenarios in which data must be processed close to its source. It is foreseeable that data computation and storage in the 5G era will combine cloud data centers with edge data centers, with edge data centers becoming the ideal infrastructure for latency-sensitive business.
2.2 Demand for IoT
The IoT industry continues to debate where data analysis and processing are best performed: at the edge, on the devices themselves, at local gateways, or in centralized clouds. Some IoT scenarios without strict latency requirements can be aggregated to the cloud. Most current IoT scenarios, however, involve commercial, industrial, and transportation applications that are latency-sensitive and data-intensive, making them unsuitable for traditional cloud environments; examples include large mobile machinery and factory assembly lines, autonomous vehicles, aircraft maintenance, disaster-recovery systems, and alerting technicians for preventive maintenance. A former CEO of Intel noted in a speech that, with hundreds of sensors installed on a vehicle, roughly 40 TB of data is generated for every 8 hours of driving, equivalent to the total data generated by 3,000 people over the same period. Given the need for low-latency processing, if such a large volume of data had to be sent back to a cloud data center for processing and the results returned to the vehicle, the impact on driving safety would be unpredictable.
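To put the cited figure in context, a rough back-of-envelope calculation (a sketch that simply takes the 40 TB per 8 hours figure above at face value; the decimal-terabyte convention is our assumption) shows the sustained uplink a single vehicle would need if all of that data were shipped to a remote cloud:

```python
# Back-of-envelope check of the cited figure (assumes 40 TB per 8 driving hours).
TB = 1e12                            # decimal terabyte, in bytes
data_per_shift_bytes = 40 * TB       # data produced over one 8-hour drive
shift_seconds = 8 * 3600

sustained_rate_gbps = data_per_shift_bytes * 8 / shift_seconds / 1e9
print(f"Sustained uplink needed: {sustained_rate_gbps:.1f} Gbit/s per vehicle")
# ~11.1 Gbit/s per vehicle, sustained -- far more than can realistically be
# backhauled to a distant cloud, which is why processing near the source matters.
```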
Even the most conservative predictions indicate that more than 100 billion devices will be connected to the internet within 20 years, including mobile devices, wearables, home appliances, medical devices, industrial sensors, surveillance cameras, vehicles, and even clothing. The data they create and share will bring a new information revolution to work and daily life. People will be able to use information from the IoT to deepen their understanding of the world and of their own lives and to make better decisions, while connected devices will automate many tasks that currently require human labor, such as monitoring, management, and maintenance. The IoT connects a vast number of devices, raising the level of digitization and automation across regions. With the deployment and development of 5G, how it will integrate with the IoT has become a hot topic. Regardless of which technology is adopted, however, the need to process the massive amounts of IoT-generated data close to where it is produced is an established trend, and it creates enormous demand for edge data centers.
2.3 Demand for CDN
Third-party CDN providers operate dedicated networks composed of thousands of specialized edge servers and storage systems that cache frequently requested content. According to a research report released by Cisco, CDNs are expected to carry more than 72% of internet traffic by 2022, up significantly from 56% in 2017. Today, much CDN traffic is carried on regional core networks; as metropolitan network capacity grows faster than core network capacity, CDNs are expected to handle more traffic closer to end users. By 2022, an estimated one-third of that traffic will be handled at the edge, up from 27% in 2017.
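As an illustration of the caching idea behind a CDN edge node, the following minimal sketch (purely hypothetical, not any particular CDN provider's implementation) shows an LRU-style edge cache that serves repeat requests locally and falls back to the origin only on a miss:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU-style edge cache sketch: serve hot content locally,
    fetch from the origin only on a miss (illustrative, not a real CDN)."""

    def __init__(self, capacity: int, fetch_from_origin):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin  # callable: url -> content
        self._store = OrderedDict()

    def get(self, url: str):
        if url in self._store:
            self._store.move_to_end(url)        # mark as recently used
            return self._store[url]             # edge hit: no backbone traffic
        content = self.fetch_from_origin(url)   # edge miss: go back to origin
        self._store[url] = content
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict least recently used item
        return content

# Usage: repeat requests for the same URL are served from the edge node.
cache = EdgeCache(capacity=2, fetch_from_origin=lambda url: f"<body of {url}>")
cache.get("https://example.com/video/1")  # miss -> fetched from origin
cache.get("https://example.com/video/1")  # hit  -> served at the edge
```

Every hit served from the local store is traffic that never touches the regional core network, which is the shift the Cisco figures above describe.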
At present, CDN providers generally do not own their own data centers, so they will increasingly rely on edge data centers to supply the growing space and power their expanding business requires. Moreover, the pursuit of broader access and better customer experience will push edge data center deployments into more second-, third-, and fourth-tier cities, presenting a significant opportunity for the future data center market outside first-tier cities.
2.4 Summary
The network has driven the development of applications, and applications have enabled more scenarios for the network. Whether it is 5G, IoT, or CDN, they all bring massive amounts of data that require low-latency processing at the edge. Undoubtedly, in the infrastructure sector, with the explosive growth of various demands, edge data centers will be pushed to the forefront.
3 Core Technology Analysis
3.1 Edge Hardware
Edge hardware primarily refers to infrastructure such as general-purpose edge servers, network equipment, and cooling systems. Edge computing services are deployed closer to users, so the available space is smaller than in traditional data centers, and the condition of the server rooms is generally poorer. The scale of the deployed services also expands and contracts dynamically with user demand. This places new requirements on edge hardware, including but not limited to: high-density computing and storage, ease of operation and maintenance in confined spaces, higher reliability (stable operation in harsh environments), and self-cooling capabilities.
Within the Open Data Center Committee (ODCC), the three major operators have launched the OTII project, which can be seen as a first attempt at edge servers. The conventional approach had been to adapt edge server rooms to existing standard server equipment, but meeting the load-bearing, power-distribution, humidity, and temperature requirements of those servers is costly. The OTII project therefore takes the opposite approach: developing edge-specific server equipment that adapts to existing server rooms. OTII servers have already seen limited deployment within China Telecom.
3.2 Edge-Cloud Collaboration
Edge data centers provide users with nearby edge computing services, but edge nodes are not data islands. Depending on the business, edge data centers must interact with remote cloud data centers so that edge and cloud together deliver better service to customers.
Edge-cloud collaboration delivers value in three main ways. First, it enables load balancing and intelligent scheduling across multiple edge data centers: based on the resource utilization, traffic volume, and health status of each edge data center, scheduling can be performed more globally and efficiently. Second, it provides global business integration to serve applications that move rapidly across regions. In vehicle networking, for example, knowing only the traffic conditions within one edge data center's coverage area is far from sufficient; long-distance driving requires traffic information along the entire route in order to plan a better path, so the cloud data center must aggregate information from multiple edge nodes and perform centralized, intelligent planning. Third, it supports AI and big data services in the back end: both require large volumes of data for computation, and the valuable information extracted at the edge must be synchronized to cloud data centers for deeper model training.
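As a minimal sketch of the first point, the snippet below (the field names and weights are illustrative assumptions, not a standardized interface) shows how a cloud-side scheduler might rank edge data centers by resource utilization, traffic load, and health status before dispatching a new workload:

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    cpu_utilization: float   # 0.0 - 1.0
    traffic_load: float      # 0.0 - 1.0, share of link capacity in use
    healthy: bool

def pick_edge_site(sites, w_cpu=0.6, w_traffic=0.4):
    """Illustrative cloud-side scheduler: choose the healthy edge data center
    with the most headroom, weighted across CPU and network load."""
    candidates = [s for s in sites if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy edge site; fall back to the central cloud")
    return min(candidates,
               key=lambda s: w_cpu * s.cpu_utilization + w_traffic * s.traffic_load)

sites = [
    EdgeSite("edge-beijing-01", cpu_utilization=0.82, traffic_load=0.70, healthy=True),
    EdgeSite("edge-tianjin-01", cpu_utilization=0.35, traffic_load=0.40, healthy=True),
    EdgeSite("edge-hebei-01",   cpu_utilization=0.10, traffic_load=0.20, healthy=False),
]
print(pick_edge_site(sites).name)   # -> edge-tianjin-01
```

In practice the cloud layer would collect these metrics continuously from each edge site and apply far richer policies, but the division of labor is the same: edges report state, the cloud schedules globally.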
3.3 Edge-Edge Collaboration
In addition to exchanging data with central nodes, data interaction between edge nodes is also necessary. Edge-edge collaboration helps users transition smoothly between different edges. For example, users consuming edge computing services while moving at high speed (in vehicles or on high-speed trains) need their status information synchronized quickly between edge data centers so that switching from one edge data center to another does not disrupt their service.
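A minimal sketch of such a handover, with entirely hypothetical names: when a moving user leaves one edge data center's coverage, its session state is pushed directly to the next edge node so the service resumes without a detour through the central cloud.

```python
class EdgeNode:
    """Illustrative edge data center node holding per-user session state."""

    def __init__(self, name: str):
        self.name = name
        self.sessions = {}          # user_id -> session state dict

    def start_session(self, user_id: str, state: dict):
        self.sessions[user_id] = state

    def hand_over(self, user_id: str, target: "EdgeNode"):
        """Push the user's session state directly to the next edge node
        (edge-edge path) instead of detouring through the central cloud."""
        state = self.sessions.pop(user_id)
        target.sessions[user_id] = state
        return state

# A user on a high-speed train moves from one edge node's coverage to the next.
edge_a = EdgeNode("edge-station-A")
edge_b = EdgeNode("edge-station-B")
edge_a.start_session("user-42", {"video_position_s": 1312, "bitrate": "1080p"})
edge_a.hand_over("user-42", edge_b)
print(edge_b.sessions["user-42"])   # playback continues where it left off
```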
The network architecture between edge data centers will be a key technical question. In the traditional CDN access model, for instance, CDN edge POPs connect to the metropolitan area network, while CDN backbone POPs, provincial CDN content centers, and provincial CDN management centers connect to the provincial-capital IDC node, which in turn connects to the backbone network. Edge-edge collaboration requires shorter-path connections between CDN nodes, which places higher demands on future network architecture design.
4 Outlook
“Edge” is a relative concept: the internet relative to traditional telecom networks, cloud computing relative to the internet, and private clouds relative to public clouds can all be viewed as a kind of edge. In the absence of a precise definition of edge computing, the infrastructure can develop in many directions. What is certain is that, as demand for racks grows explosively, the data center industry will need innovative technologies such as OTII and liquid cooling to improve energy efficiency and reduce energy consumption, along with lossless networking technologies to increase channel capacity both within data centers and across data center interconnects (DCI), supporting more edge-edge collaboration scenarios in the future. The future edge market will exceed the trillion scale, becoming an emerging market that rivals cloud computing and opening vast market space and new development opportunities for the entire data center industry.
This article is excerpted and updated from “Research on the Demand Analysis and Core Technologies of Edge Data Centers.” The original text was included in the “2019 National Edge Computing Academic Seminar Proceedings.”
Authors’ Profiles
Guo Liang, Senior Engineer, Deputy Director of the Data Center Research Department at the Cloud Computing and Big Data Research Institute of the China Academy of Information and Communications Technology, and Head of the New Technology and Testing Working Group of the Open Data Center Committee (ODCC). He mainly engages in technical research in the field of data centers, including networks, servers, etc.
Contact: [email protected]
He Baohong, PhD, Senior Engineer, Director of the Cloud Computing and Big Data Research Institute of the China Academy of Information and Communications Technology. Chairman of the Internet and Applications Working Committee (TC1) of the China Communications Standards Association and Honorary Chairman of the Open Data Center Committee (ODCC). He has over 20 years of research experience in internet-related technologies, standards, regulations, and strategies, with rich industry experience and two monographs, “The Gene of the Internet” and “Wind Direction.”
Contact: [email protected]
Reviewed by | Chen Li, Shan Shan
Edited by | Ling Xiao
