Edge computing can empower the supply side of public data openness and the government application side, while also playing important roles in the process of openness, such as agent, scenario manager, and cutter. The introduction of this technology aligns with the underlying logic of the long-term development of data utilization. However, introducing edge computing also presents certain technical risks, governance issues, and security concerns. It is therefore necessary to optimize the application of edge computing technology in the construction of public data operation platforms from the two directions of technical standards and legal norms. In terms of technical standards, efforts should be made to explore a comprehensive, precise, and authoritative standard system, to introduce specialized artificial intelligence technologies to compensate for technical deficiencies, and to clarify the discourse boundaries of edge technology itself. In terms of legal norms, emphasis should be placed on the organization and regulation of infrastructure, on constructing a coordinated mechanism for data protection, and on fully leveraging its governance effectiveness in the construction of public data development mechanisms.
1. Introduction of the Problem

Edge computing helps accelerate digital transformation and raise the level of national governance. Edge computing refers to a new computing model that executes computation at the edge of the network: in edge computing, downlink data represents cloud services, uplink data represents Internet of Things services, and the “edge” refers to any computing and network resources between the data source and the cloud computing center. Edge computing and cloud computing are not opposing forces but complementary ones. Cloud computing, as a centralized platform for large-scale data processing and storage, is suited to large-scale data and complex computational tasks; edge computing, in contrast, pushes computing and data processing capabilities closer to the source of data generation to meet real-time, low-latency demands. It plays a unique and profound role in public data development, providing multiple values: improving data processing efficiency, enhancing data security and privacy protection, promoting data sharing and collaboration, optimizing resource allocation, and driving the development of intelligent services. Deploying computing resources at the network edge effectively shortens the chain from data generation to processing. Performing preliminary processing near the data source reduces the large-scale transmission of raw data and thus lowers the risk of data leakage. Edge computing also supports distributed architectures, facilitating data sharing and collaboration among public sectors under unified standards and technical specifications.

As research on public data development advances, some scholars have pointed out that “public data refers to various data resources generated and collected by public institutions in the performance of their duties.” Public data development faces mismatches between supply and demand, conflicts between public and private purposes, and disputes between profit and non-profit motives. With the settling of policies such as the authorized operation of public data, however, research has gradually converged on the consensus that development should focus on value realization, serving high-quality economic development and the improvement of people’s livelihoods. On this basis, how to promote the realization of public data value through “law + technology” in the context of developing new productive forces has become a core issue. As digital transformation deepens, public sectors face an ever-larger volume of data, and the advantage of introducing edge computing into public data development is evident: it can process large-scale data effectively enough to meet public data processing needs. Integrating “cloud computing + edge computing” to achieve “cloud-edge-end integration” matters greatly for application scenarios with high real-time requirements, such as intelligent traffic management and rapid response to emergencies; this bears directly on maintaining public safety and improving service efficiency, and it also relieves network burdens. For urban infrastructure filled with sensors and IoT devices, even large datasets can be efficiently filtered and compressed before transmission.
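The filter-then-compress pattern just described can be sketched minimally. The reading format, the significance threshold, and the deviation rule below are illustrative assumptions, not a prescribed design:

```python
import gzip
import json
from typing import Optional

# Illustrative threshold; a real deployment would tune this per sensor type.
SIGNIFICANCE_THRESHOLD = 5.0

def is_significant(reading: dict, last_kept: Optional[dict]) -> bool:
    """Keep a reading only if it deviates enough from the last kept value."""
    if last_kept is None:
        return True
    return abs(reading["value"] - last_kept["value"]) >= SIGNIFICANCE_THRESHOLD

def filter_and_compress(readings: list) -> bytes:
    """Edge-side preprocessing: drop redundant readings, then gzip the rest,
    so only a compact payload crosses the network to the cloud center."""
    kept, last = [], None
    for r in readings:
        if is_significant(r, last):
            kept.append(r)
            last = r
    return gzip.compress(json.dumps(kept).encode("utf-8"))

# 1,000 near-identical readings shrink to a handful of bytes before upload.
readings = [{"sensor": "cam-01", "value": 100.0 + (i % 3) * 0.1} for i in range(1000)]
payload = filter_and_compress(readings)
print(f"{len(readings)} readings -> {len(payload)} compressed bytes")
```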
Especially in sensitive areas such as healthcare and personal privacy, performing preliminary processing locally and then anonymizing or aggregating the data keeps the original data safe even if a leak occurs during transmission (a minimal sketch of this pattern appears at the end of this section). For critical data related to national security, local processing better aligns with security policy requirements and strengthens data security barriers. Taking smart city projects as an example, multiple public sectors can participate in data collection and analysis through their respective edge nodes, avoiding system-wide paralysis caused by single points of failure and thereby enhancing system robustness and collaborative efficiency. However, the introduction of edge computing technology also presents potential risks and challenges. Edge computing employs distributed, decentralized methods of data storage and computation, involving multiple governance entities in management and application. This decentralization of data power can easily lead to responsibility shirking and conflicts of interest among governance entities. In introducing edge computing into public data development, it is therefore essential to give full consideration to data governance, clarify responsibilities, and establish cooperation mechanisms to ensure the effective application of the technology and the security and reliability of data resources.
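Returning to the sensitive-data scenario at the start of this section, the following is a minimal illustration of local anonymization and aggregation before any upload. The field names, the salt, and the suppression threshold are illustrative assumptions rather than a mandated scheme:

```python
import hashlib
from collections import defaultdict

def anonymize(record: dict, salt: str = "edge-node-salt") -> dict:
    """Strip direct identifiers at the edge; keep only a salted one-way hash
    so records can be linked downstream without exposing identities."""
    pseudo = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    return {"pseudo_id": pseudo,
            "district": record["district"],
            "diagnosis": record["diagnosis"]}

def aggregate(records: list, min_group_size: int = 5) -> dict:
    """Report only per-district counts, suppressing groups too small to be
    safe (a k-anonymity-style rule; the threshold of 5 is illustrative)."""
    counts = defaultdict(int)
    for r in records:
        counts[(r["district"], r["diagnosis"])] += 1
    return {k: v for k, v in counts.items() if v >= min_group_size}

raw = [{"patient_id": f"P{i:04d}", "district": "West", "diagnosis": "flu"}
       for i in range(20)]
safe = aggregate([anonymize(r) for r in raw])
print(safe)  # only aggregated, de-identified statistics ever leave the node
```

Even if such a payload were intercepted in transit, it would reveal only coarse counts rather than identifiable records.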
2. Mechanisms and Effects of Edge Computing in Public Data Development

The process of public data development is flexible and variable, relying on cloud computing infrastructure to advance the degree of data openness and sharing. As the process deepens, however, platform construction keeps pushing the optimization of data processing workflows, the improvement of processing speed, and the mastery of real-time and accuracy requirements, thereby driving the country to promote technological innovation. Edge computing, as a new data processing method, can keep public data development platforms running smoothly in terms of technical requirements and efficiency gains, meeting public service and governance needs. In practice, the Beijing smart transportation project, which deployed thousands of edge computing units, has significantly improved traffic signal optimization efficiency and shortened accident response times. Based on federated learning, edge AI models can support cross-departmental collaborative data analysis without exposing private data, providing decision support for emergency command systems and showcasing the distinctive advantages of edge computing in securing data while promoting sharing (a minimal sketch of this pattern appears at the end of this passage). In data governance, the integration of edge computing with blockchain technology has built a multi-level data rights confirmation system, strongly guaranteeing data security and trustworthiness. Shanghai’s intelligent computing infrastructure combines “cloud-edge-end + intelligence” industries, using trusted execution environment (TEE) technology to process sensitive data within edge devices without uploading it to the cloud, aligning with international data compliance requirements and enhancing the compliance and security of data governance. The “cloud-edge-end” three-level collaborative system built on edge computing endows public data centers with elastic expansion capabilities. Edge computing can also promote the intelligent upgrading of national governance. The Hangzhou City Brain integrates numerous edge nodes to build real-time data models covering population heat, economic activity, and more, markedly improving the accuracy of public service resource allocation. A digital twin system based on edge computing can simulate urban operational states at minute-level granularity, providing a scientific basis for urban management decisions and further enhancing the intelligence and precision of digital governance.
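The federated learning pattern mentioned above, in which departments share model updates rather than raw records, can be sketched as follows. The linear model, the gradient step, and the synthetic department datasets are illustrative assumptions; real deployments of the kind described would add secure aggregation and access control:

```python
import numpy as np

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step on a department's private data; the raw data never leaves its node."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w: np.ndarray, datasets: list) -> np.ndarray:
    """Each edge node trains locally; only the model weights are averaged centrally."""
    return np.mean([local_update(w, X, y) for X, y in datasets], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):  # three departments, each holding its own private data
    X = rng.normal(size=(100, 2))
    datasets.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, datasets)
print(w)  # approaches [2.0, -1.0] without ever pooling the raw records
```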
1. Empowering the Supply Side of Public Data Development

With the development of the socialist market economy, and especially the rise of the digital economy, traditional economic operating mechanisms and people’s lifestyles have changed, and the digital expansion of the boundaries of social life has directly created widespread demand for public data.

First is the demand for data exchange. At the national level, China has issued multiple documents requiring stronger planning of data resources, better management of data resources, and broader data resource applications. Data openness has been formally incorporated into the institutional arrangements for government transparency, becoming part of the responsibilities of the adjusted national leading group on government transparency, and corresponding management systems for sharing government information resources have been introduced. Given the vigorous development of the market economy and the growing complexity of social management, many social issues (such as cross-regional allocation of education resources or cross-province crime governance) do not fall solely within the jurisdiction of a single public sector but within that of several. When a public sector handles a social affair, if it cannot communicate and coordinate with other public sectors in a timely manner and exchange relevant data, its understanding of the issue will be one-sided, making it impossible to take the most effective response measures.

Second is the demand for data sharing. Beyond data exchange, public sectors should further broaden the scope and depth of data sharing. One dimension of sharing involves data resources among public sectors. China is vast, and data barriers may exist between public sectors in different regions, with differences in governance models exacerbating the formation of data silos. Sharing among public sectors can break down these barriers, achieve data integration and application, and smoothly realize cross-departmental and cross-regional collaborative governance.

Finally, there is the demand for co-governance. National governance cannot succeed through the isolated efforts of individual departments; it requires cross-departmental, multi-entity collaborative governance. Different public sectors and institutions typically bear their own responsibilities and tasks. To improve service efficiency and quality, ensure the consistency and accuracy of information, and achieve seamless digital governance, mechanisms for cross-departmental collaboration and data sharing must be established.

However, a gap between supply and demand commonly arises in public data development, stemming primarily from insufficient information, uncontrollable value, and uncontrollable costs; the advantages of edge computing can help overcome it.

First, the issue of “insufficient information” is reflected mainly in insufficient data supply. Although many regions have actively built public data development platforms, data related to core business operations and data urgently needed by the public often remain only narrowly open, producing “formal openness” alongside substantive information deficiency. The data that has been opened, moreover, often suffers from problems of accuracy, completeness, and update frequency that undermine users’ trust and willingness to use it.
Edge computing, as a cutting-edge technological paradigm, offers a new perspective and path for addressing insufficient information in public data development and utilization: by deploying computing capability at the network edge, near the source of data generation, it enables rapid collection, preliminary processing, and immediate response. In improving data supply efficiency, its proximity to the data accelerates collection and reduces transmission delays. In smart cities, for example, edge nodes can preprocess the massive video streams generated by cameras, extracting key events for upload to the cloud, enhancing the real-time value of data while relieving network burdens (see the sketch at the end of this passage).

Second, regarding the issue of “uncontrollable value,” the phenomenon of “goods not matching the specifications” is common: public data may be incomplete, inaccurate, or disorganized in format, stumbling blocks that severely hinder its effective utilization and in-depth mining. The situation of “content not matching the topic” is even more challenging: public data, as a “by-product” of administrative processes, is not originally designed for specific application scenarios, and in applicability, granularity, and timeliness it often deviates significantly from the needs of data users, making it difficult to meet diverse market demands. The deep integration of edge computing with artificial intelligence can filter out high-value data more accurately and effectively eliminate erroneous data, improving data purity and accuracy and thereby enhancing data quality and its fitness for specific scenarios. Edge computing connects public sectors with the data market, helping public sectors identify and confirm the potential value of data in specific fields and scenarios. Because of information barriers, many data supply departments lack a clear understanding of the commercial value of the data assets they hold. Introducing edge computing equips public sectors with a “digging shovel” for unearthing these hidden values and feeding actual market demands back into the early planning stages of information projects, guiding departments to pay more attention to the specificity and practicality of data collection when building information systems and to add the necessary data collection projects.
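The video preprocessing pattern mentioned above, uploading key events instead of raw streams, can be reduced to a toy sketch. Real systems would run detection models on encoded frames; the frame-differencing rule and thresholds here are illustrative assumptions:

```python
def extract_key_events(frames: list, diff_threshold: int = 30,
                       changed_ratio: float = 0.2) -> list:
    """Edge-side screening of a video stream: report only the frame indices
    where enough pixels changed, instead of streaming every raw frame."""
    events = []
    for i in range(1, len(frames)):
        changed = sum(abs(a - b) > diff_threshold
                      for a, b in zip(frames[i], frames[i - 1]))
        if changed / len(frames[i]) > changed_ratio:
            events.append(i)
    return events

# Toy 8-pixel grayscale frames; frame 2 simulates an object entering the scene.
frames = [[10] * 8, [11] * 8, [200, 200, 200, 11, 11, 11, 11, 11], [11] * 8]
print(extract_key_events(frames))  # -> [2, 3]: only these would be uploaded
```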
Third, regarding the issue of “uncontrollable costs”: for data utilization entities, the supply-demand matching process for public data often lacks transparency and is fraught with uncertainty, driving up transaction costs and making data acquisition increasingly difficult. Data management departments face enormous two-way matching pressure. On one hand, facing a large number of dispersed data utilization entities, they struggle to provide personalized, differentiated services; on the other hand, cooperating with data supply departments demands significant effort to convert external demands into actionable internal task lists, exceeding the departments’ actual capacity and potentially encroaching on their administrative functions. By processing data at the network edge, edge computing raises transmission speed, reduces latency, and quickens the responses of both supply and demand sides; it relieves the workload of cloud centers and enables data utilization entities to find the data they need quickly and complete supply-demand matching. More importantly, with its data processing and analysis capabilities, edge computing can help data management departments identify potential data utilization entities more accurately and feed real market demands into decision-making, thereby promoting the effective allocation and rational utilization of public resources.
2. Empowering the Government Application Side of Public Data Development

In response to the growing number of high-concurrency scenarios, edge computing helps in two main ways.

First, it relieves the pressure of centralized local government processing of public data while improving administrative efficiency. “As an emerging computing paradigm, edge computing shifts computing resources from cloud centers to servers at the network edge, providing computational support for connected terminal devices.” Under this mode, public data is processed directly at the terminals of the relevant departments within the same city rather than relying on a specific public department for centralized computation. Social issues arising from economic and social development keep multiplying, and the public’s demand for services from public sectors keeps strengthening, so many government tasks are handled in high-concurrency states, creating significant pressure. This can easily reduce administrative efficiency, leaving the public waiting long for slow problem resolution, and the accuracy of administrative work may decline, causing public dissatisfaction. Applying edge computing in public data development platforms can effectively absorb high-concurrency government scenarios, helping most citizens solve problems promptly through categorized handling at the terminals, without waiting for the cloud to process and return feedback.

Second, cloud-edge collaborative operation combines complementary strengths for processing high-concurrency government activities after public data development. The disadvantage of edge computing alone is insecure information storage, while purely centralized cloud computing increases the complexity of government data development and utilization, reducing administrative efficiency and failing to respond promptly to the issues of greatest public concern. Collaboration between cloud and edge computing is therefore the more reasonable option for high-concurrency government work. With edge computing, local citizens’ personal information is stored at the information terminal, eliminating cumbersome paper forms at administrative service halls and saving residents’ time and staff effort; for some simple administrative applications, residents can even apply directly from home. With cloud computing, the most prominent issues of greatest public concern are processed collectively, ensuring efficient responses to urgent social problems and avoiding the waste of administrative resources on routine matters.
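A minimal sketch of this cloud-edge division of labor: categorized, simple requests are resolved at the edge terminal, while complex matters are queued for the cloud center. The request categories and handlers are hypothetical placeholders:

```python
from dataclasses import dataclass, field

# Hypothetical categories an edge terminal can resolve without the cloud.
EDGE_HANDLED = {"certificate_copy", "address_update", "status_query"}

@dataclass
class CloudEdgeRouter:
    cloud_queue: list = field(default_factory=list)

    def handle(self, request: dict) -> str:
        if request["category"] in EDGE_HANDLED:
            # Resolved from data cached at the edge node: no round trip and
            # no waiting on the cloud center under high concurrency.
            return f"edge: {request['category']} processed for {request['citizen']}"
        # Complex or cross-departmental matters are batched for the cloud.
        self.cloud_queue.append(request)
        return f"cloud: {request['category']} queued (position {len(self.cloud_queue)})"

router = CloudEdgeRouter()
print(router.handle({"citizen": "A", "category": "status_query"}))
print(router.handle({"citizen": "B", "category": "cross_province_case"}))
```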
3. Realizing Multi-Dimensional Responsibilities in Public Data Development

Edge computing plays multiple important roles in public data development: agent, scenario manager, and cutter. As an agent, multi-access edge computing (MEC) technology stands at the front of the data processing flow, performing preliminary screening and efficient processing of raw data from sensors and IoT devices, relieving network bandwidth pressure and reducing transmission latency. Edge computing can quickly identify and respond to abnormal events in video streams, enabling instant alerts without transmitting the data to the cloud for processing. This localized processing capability strengthens personal privacy protection, filtering or anonymizing sensitive information at the source and uploading only the necessary processing results, thereby improving the overall system’s security and privacy.

Edge computing also acts as a scenario manager, dynamically adjusting operational strategies to specific environmental conditions and application requirements with a high degree of intelligence and flexibility. In environmental pollution monitoring, for instance, it automatically adjusts sampling frequencies or triggers early warning mechanisms based on real-time air quality sensor data and meteorological conditions, improving monitoring efficiency and optimizing resource allocation (see the sketch below).

Additionally, edge computing serves as a cutter, breaking down large-scale computing tasks originally centralized in the cloud into smaller tasks dispatched to edge nodes near the data source, enhancing overall system performance and robustness; even if individual nodes fail, the system as a whole keeps operating. This distributed computing model offers more flexible service deployment and expansion, allowing different regions to tailor solutions to their own characteristics without relying on a unified central control system, thus optimizing the provision of public services.
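The scenario-manager role described above can be illustrated with a minimal sketch of adaptive sampling for air quality monitoring. The AQI thresholds and intervals are illustrative assumptions, not values from any standard:

```python
# Illustrative thresholds; a real deployment would follow the applicable standard.
ALERT_THRESHOLD = 200
ELEVATED_THRESHOLD = 100

def next_sampling_interval(aqi: float):
    """Scenario-manager behavior: sample more often as air quality worsens,
    and trigger an early warning above the alert threshold."""
    if aqi >= ALERT_THRESHOLD:
        return 30, True      # alert: sample every 30 s and raise a warning
    if aqi >= ELEVATED_THRESHOLD:
        return 120, False    # elevated: tighten sampling to every 2 min
    return 600, False        # normal: one reading every 10 min suffices

for reading in (42, 135, 230):
    interval, alert = next_sampling_interval(reading)
    print(f"AQI={reading}: sample every {interval}s, alert={alert}")
```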
Notably, multi-access edge computing (MEC) technology has gradually become an indispensable part of smart city infrastructure: deploying computing resources at the network edge enables low-latency, high-bandwidth service delivery, strongly supporting emerging applications such as smart streetlights and autonomous vehicles and improving residents’ quality of life. The widespread application of edge computing, however, also brings new challenges for data security and user privacy protection, so establishing and improving the relevant legal and regulatory frameworks and strengthening the formulation and implementation of technical standards become crucial to its healthy and sustainable development. Edge computing offers improved data processing, storage, and service quality, making it well suited to future data infrastructure; mobile cloud computing faces challenges of high latency and low energy efficiency that edge computing solutions can address. The National Data Bureau and other departments in China have issued opinions on promoting the development and utilization of enterprise data resources, explicitly calling for the promotion of cloud computing, edge computing, big data analysis, and other platform services and emphasizing differentiated data security and compliance management measures based on the sensitivity levels of data and the scenarios of data processing, providing macro guidance for the application of edge computing. At the implementation level, however, the relevant details still need refinement and improvement for the policy to take full effect.

Meanwhile, the “Overall Layout Plan for Digital China Construction” emphasizes the aggregation and utilization of public data, making clear that public data generated by party and government agencies and enterprises in performing their duties or providing public services should be aggregated, shared, and openly developed, providing ample space and opportunity for applying edge computing in public data development. Globally, edge computing is widely combined with data to solve practical problems. Navantia, a leading high-tech shipbuilding company, applies edge computing to digital remote assistance, data-driven real-time processing of 3D scans, and augmented reality computing in manufacturing. APM Terminals, one of the world’s largest providers of port, maritime, and land terminal public services, integrates edge computing with open public data to achieve cloud-edge-end integration for the geographic and virtual positioning of fixed objects. IE University in Spain combines public education data with edge computing to deliver secure and reliable immersive virtual courses, allowing students to learn via streaming on their own devices.

The pace of adoption, however, varies across countries and regions. Developed countries with mature infrastructure and advanced technology are leading the transformation, leveraging edge computing to drive innovation and efficiency across fields. Public sectors worldwide are promoting edge computing through supportive policies and regulations, facilitating its integration across domains, fostering innovation, and ensuring compliance with security and privacy standards. The EU’s General Data Protection Regulation (GDPR) pursues data protection through privacy by design, while in the United States the Federal Trade Commission (FTC) has issued a series of rules touching on cloud and edge computing. Although some industry rules have been formulated, the definition of personal data in the United States still varies from state to state, and the balance struck in privacy protection often depends on the interests of service providers, so further legal development is needed to address the new challenges edge computing poses.

4. Multi-Dimensional Risks of Edge Computing in Public Data Development
The application of any technology is a double-edged sword, and the application of edge computing technology in public data development platforms also brings new technical risks. Only by fully understanding the characteristics of edge technology itself and considering the governance challenges in the construction of public data development platforms can we prevent potential issues at the outset of application. Public data development concerns data security and privacy protection issues, necessitating a thorough understanding of application risks and an objective discussion of the application of new technologies.
1. Technical Risks of Insufficient Performance in Edge Computing Applications

First, massive data volumes are difficult to select and optimize. With the rapid growth of IoT devices, the volume of data generated in edge computing environments is exploding, yet not all of it needs to be processed or uploaded to the cloud. Effectively filtering out valuable data and optimizing its processing has therefore become an important challenge. Because edge devices have limited computing power and storage space, they may struggle to execute the complex algorithms needed to identify which data is most informative. Even when preliminary filtering is achieved, redundant information may persist in subsequent transmission, wasting bandwidth.

Second, there are data security risks. Although edge computing can reduce the risk of data leakage to some extent by minimizing transmission to the cloud, its distributed architecture also introduces new security threats. The widespread and physically diverse locations of edge nodes increase the likelihood of unauthorized access. Because edge devices typically have limited computing power, implementing high-intensity security protocols may consume excessive resources and degrade performance. If a given edge node is attacked, the confidentiality, integrity, and availability of the entire system may be compromised.
Finally, the construction costs of data infrastructure are significant. Supporting large-scale edge computing applications involves substantial investment costs. In addition to purchasing hardware, expenses related to network connectivity, power supply, and long-term maintenance must also be considered. Particularly in remote areas or harsh environments, deploying edge nodes may further increase these costs. Due to the lack of standardized design specifications, compatibility issues may arise between solutions provided by different vendors, increasing initial integration difficulties and potentially raising costs for future upgrades and maintenance.
2. The Impact of Decentralization on Centralized Governance

The impact of edge computing on centralized governance stems from its distinctive computational logic. Although edge computing and cloud computing are both big data processing technologies of the digital age, their computational logics differ in essence. Cloud computing emphasizes centralized, unified processing: data from various fields must first be aggregated into a centralized database, analyzed with big data techniques, and then fed back and applied to each field. Edge computing, by contrast, pools resources geographically or topologically close to users, its core idea being that computation should occur near the source of data generation and near the user. If cloud computing is likened to a brain that collects data and directs the body’s responses, edge computing resembles the body’s reflexes, responding to danger locally without involving the brain. From the essence and methodological characteristics of edge computing, its impact on the differentiation of political power manifests in two aspects.

On one hand, all digital information technologies led by capital or enterprises inevitably face the “algorithmic black box” problem, which raises technical barriers for public sectors and the public in understanding data content and its computational logic. Edge computing, like cloud computing and other data processing methods, is fundamentally a digital computation technology created by humans: it reconstructs the real world digitally and, through algorithmic computation over the relevant data, issues guidance on how to act. In traditional society, all things exist in the same time and space, and society’s operating rules are reflected through various norms. In that setting, public officials and ordinary citizens experience no “black box” in the making and operation of rules.
At the same time, making rules clear and understandable to the public is an inherent requirement of rule-based governance. In the digital world, however, owing to the professional nature of algorithmic technology and the opacity of its computation, people can intuitively feel the normative guidance that algorithms provide but can hardly glimpse the internal logic behind it. In China’s current digital transformation, public sectors and enterprises form its “two wings”: a community of shared destiny in promoting transformation, empowering each other through their respective advantages. Given the professional character of digital information technology, public sectors, although playing a leading and guiding role in digital transformation, still rely in practice on the professional capabilities and technological advantages of information technology enterprises. Information technology enterprises are thus the direct handlers of big data content and the direct creators of data computational logic, and the cognitive barrier posed by the algorithmic black box applies to ordinary citizens and to the entities of social and national governance alike. The traditional binary “state-society” structure is beginning to shift toward a triadic “state-enterprise-society” model of co-construction and co-governance, in which public sectors’ grasp and judgment of social information depend on enterprises’ technological advantages, while enterprises’ command of social data and the construction of their algorithmic logic determine whether public sectors can understand social developments in a timely and accurate manner.
On the other hand, the decentralized data processing characteristics of edge computing foster multi-entity resource management, which to some extent fragments centralized governance mechanisms. By design, edge computing emerged to address the inefficiencies of centralized processing methods such as cloud computing in the Internet of Things context, achieving an efficient big data processing pattern in which cloud and edge computing complement each other. Methodologically, its computation relies on resource storage and computation at the network edge, geographically and topologically closer to users: edge computing utilizes users’ edge resources and processes data nearby. This characteristic also means that its digital resources are distributed along transmission paths and managed or controlled by different entities. The decentralized processing model, with its emphasis on “nearby solutions,” weakens the government’s unified supervision and regulation of data. Meanwhile, network edge devices provided and maintained by commercial organizations can make data surface according to those organizations’ intentions along “preset paths,” so that manipulations of edge computing’s decentralized, dispersed logic are more internalized and obscure than manipulations of cloud computing’s operational logic. Given the profit orientation of enterprises, without reasonable regulation and balancing of data power it becomes ever harder to ensure that enterprises strike a balance between their own interests and social interests when constructing algorithmic logic.
3. Data Loss and Leakage Raise Concerns About Technical Security

The methodological characteristics of edge computing can also lead to data loss and leakage, arising primarily from technical disparities and the absence of standards. Methodologically, edge computing’s computational logic differs from cloud computing’s core processing model: it emphasizes the computation and integration of user edge resources, and its computing devices sit closer to the user side. This distributed structure places its transmission paths at higher risk of compromise, creating new security and data leakage issues. From the perspective of big data analysis, obtaining more valuable information requires more time to collect a larger volume of data so as to ensure the authenticity and reliability of analysis. Yet the value of big data analysis is often time-sensitive, just as big data lets us monitor the varying traffic volumes on the same road segment at different times. Although edge computing can process and analyze data during aggregation, it requires more computing nodes, whose resources are currently limited; data analysis frameworks also vary in computational capability, with many suited only to resource-constrained nodes, and existing data models may not adapt to the edge computing resource environment. If big data analysis cannot be conducted in real time, analysis that is premature or delayed inevitably loses valuable information: a trade-off between information value and timeliness that edge computing must navigate.

From the perspective of government regulation of big data technology, disparities in technological development among enterprises and the lack of regulatory measures can likewise cause data loss. Cloud computing, the earlier of the two technologies, developed rapidly and found practical application, with cloud service forms emerging in governance, healthcare, education, and other areas and winning favor across fields; corresponding security measures for cloud data have also been established. For example, the Central Cyberspace Affairs Commission issued the “Opinions on Strengthening Network Security Management of Cloud Computing Services by Party and Government Departments” in 2014, emphasizing the necessity of, and requirements for, network security management of cloud computing services, and the “Cloud Computing Service Security Assessment Measures” released in 2019 aim to raise the security and controllability of cloud computing for party and government agencies and operators. Given the current state of China’s digital information technology, the participation of enterprises and capital in technological development is inevitable, and any innovation by capital or technology enterprises must put the interests of the nation and its people first while observing relevant legal norms. Because edge computing typically runs on comparatively weak computing systems, it cannot layer multiple types of security measures the way conventional computing systems can, and some edge computing systems are applied directly to system control. Moreover, edge computing involves data processing and storage, and its application in data sharing touches on important issues of privacy and security.
Current regulatory systems often cannot fully adapt to the development needs of edge computing. The security challenges and demands associated with the application of edge computing stem from both the technology itself and regulatory gaps. To address these security challenges and the characteristics of security demands, comprehensive and multi-dimensional security solutions must be proposed, providing an end-to-end, fully covered network security operation protection system to truly reflect the value of edge computing.
Therefore, in a context of uneven technological levels and absent unified national standards, edge computing may fail to serve as a technical supplement to cloud computing, yielding inaccurate data analysis, and may also pose data leakage risks. The vast digital resources carried by big data platforms are highly attractive to potential attackers, and security is the greatest vulnerability of intelligence work in the big data era: without security guarantees, big data is worse than no big data at all. If relevant data is stolen, it poses a significant threat to the data security of individuals, society, and the nation.
5. Governance Solutions for Edge Computing in Public Data Development
Undeniably, using edge computing for data processing can significantly improve the efficiency of public data development, mitigate the problems of traditional centralized information processing, and effectively protect sensitive information. It must equally be recognized that while edge computing brings a “data dividend,” its risks cannot be overlooked. To address the management anomalies in the construction of public data development platforms, it is necessary to start from three aspects, namely legal systems, technical management, and core values, to effectively constrain technology users and avert the risks of technological capital monopolizing and eroding political power, of uneven technological levels causing data loss and leakage, and of technological ethics interfering with and undermining political ethics.
1. Integrating Artificial Intelligence to Compensate for Technical Deficiencies

The deep integration of artificial intelligence and edge computing can effectively compensate for edge computing’s inherent deficiencies, expanding its application potential and value. In implementation, specialized artificial intelligence algorithm models should be designed to ensure the sustainability of technological development and maximize social benefit. Artificial intelligence can play a crucial role in resource optimization and intelligent scheduling. Constrained by computing power and storage space, edge devices struggle to handle complex tasks independently; machine learning, and especially reinforcement learning, enables edge devices to allocate and manage task resources intelligently, dynamically adjusting the priority of computational tasks so that critical tasks are processed in time while unnecessary tasks are kept from occupying limited resources. Specialized artificial intelligence can also predict future resource demands from historical data, allowing proactive planning that avoids service interruptions due to resource shortages and thereby enhances system stability and reliability.
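As a minimal stand-in for the learned predictors discussed above, the following sketch forecasts an edge node’s resource demand from historical load with an exponential moving average; the smoothing factor, capacity figure, and load series are illustrative assumptions:

```python
def forecast_demand(history: list, alpha: float = 0.5) -> float:
    """Exponentially weighted moving average over past load readings:
    a lightweight proxy for the learned demand predictors discussed above."""
    estimate = history[0]
    for load in history[1:]:
        estimate = alpha * load + (1 - alpha) * estimate
    return estimate

NODE_CAPACITY = 100.0  # illustrative capacity units for one edge node

hourly_load = [40, 45, 52, 61, 70, 78, 85, 91]  # steadily rising demand
predicted = forecast_demand(hourly_load)
if predicted > 0.8 * NODE_CAPACITY:
    # Proactive planning: shed or defer work before the node saturates.
    print(f"predicted load {predicted:.1f}: offload low-priority tasks early")
else:
    print(f"predicted load {predicted:.1f}: within capacity")
```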
Public data development involves a large amount of sensitive information. While edge computing can reduce data transmission risks, its distributed nature increases the difficulty of security management. Deep learning and anomaly detection models offer new solutions for data encryption and network security: more advanced encryption algorithms and behavior analysis systems can monitor network activity in real time, promptly detecting and effectively preventing potential security threats and ensuring data security during transmission and processing. One of the value goals of public data is to provide real-time data services that meet the immediate information needs of the public and enterprises. Faced with massive heterogeneous data sources, traditional edge computing processing models appear inadequate; introducing intelligent technologies, particularly natural language processing and image recognition, enables edge nodes to complete complex data analysis tasks directly and shorten response times. As the number of edge nodes increases, operational costs and management difficulty also rise. Algorithm-driven automated operation and maintenance, such as predictive maintenance that forecasts equipment failures from historical data and takes preventive measures in advance, together with self-repair mechanisms that automatically attempt repairs when equipment malfunctions, can reduce manual intervention, lower operational costs, and improve system maintainability and availability.
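As a minimal stand-in for the behavior-analysis systems mentioned above, here is a z-score rule flagging abnormal traffic at an edge node; a deployed system would use learned models, and the figures below are illustrative:

```python
import statistics

def detect_anomalies(traffic_mb: list, z_threshold: float = 2.5) -> list:
    """Flag time slots whose traffic deviates from the node's baseline by
    more than z_threshold standard deviations: a lightweight proxy for the
    learned behavior-analysis models discussed above."""
    mean = statistics.fmean(traffic_mb)
    stdev = statistics.pstdev(traffic_mb)
    return [i for i, v in enumerate(traffic_mb)
            if stdev > 0 and abs(v - mean) / stdev > z_threshold]

# Steady baseline with one suspicious burst (e.g., possible exfiltration).
traffic = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 98.7, 10.0, 10.4, 9.7]
print(detect_anomalies(traffic))  # -> [6]
```

The same rule applied to equipment telemetry rather than traffic would serve the predictive-maintenance scenario mentioned above.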
2. Suppressing Data Power to Promote Governance Effectiveness

The technological background of big data and artificial intelligence has given rise to a new form of power, data power, on which edge computing also relies. On the surface, data power is characterized by its strong technical nature, which can significantly assist public sectors in governance: first, it can identify social governance needs more accurately; second, it can formulate and propose more scientific policy solutions; third, it can supervise and evaluate administrative execution and public sector performance more comprehensively and effectively. Peeling back the layers, however, reveals the capital beneath: data power also carries the risk of monopolization and hegemony from the standpoint of capital. Once capital gains absolute technological advantage, it may control society and reshape the relationship between capital and the state. The legal system exists to guide social relationships and their operation; this guidance works through the establishment of norms, is ultimately backed by state coercive power, and is preemptive, with legal relationships and powers defined by legal institutions. The most fundamental way to balance data power is therefore the systematic construction of legal frameworks, gradually adjusting the scope of the various powers involved in edge computing to achieve overall institutional balance. At present, China’s legal rules bearing on the governance of edge computing risks are incomplete; the state must intervene formally to coordinate the balance between data power and political power and so avert the risk of technological capital monopolizing and eroding political power.

First, data collection and usage must be regulated. Looking abroad, legislation on data power has developed for nearly 50 years, with decentralized or specialized legislation addressing information privacy protection, data security protection, and platform behavior norms, building institutional rules for data openness and governance. For edge computing, the dual attributes of data dividends and monopolistic tendency call for a legal regulatory system built through a combination of “general legislation + specialized legislation.” It should be recognized that capital-driven monopolization of data power arises for two main reasons. On one hand, technological breakthroughs require substantial capital investment; whoever controls capital can, to some extent, command technological progress. On the other hand, current laws, regulations, and policies remain incomplete, failing to guide capital and thus leaving data security without corresponding guarantees. It is therefore necessary to formulate and issue a “Data Development Promotion Law” clarifying and regulating the basic rights of data subjects, the scope and procedures of data collection, and the allocation of responsibility for adverse consequences, ensuring that data across society is used legally and reasonably, not merely as a tool for capital profit, and that its significant value is realized with technological support.
Second, the level of data openness and sharing must be raised, both among public sectors and between public and private entities. Data usage now permeates every aspect of society. While public sectors hold vast amounts of data, differing storage methods and formats across departments hinder effective integration; enterprises, by contrast, are willing to invest substantial human, financial, and material resources to capture the efficiency gains of data integration in pursuit of commercial interests. Today a few oligopolistic enterprises hold data volumes comparable to public sectors and also stand at the technological forefront. Changing the dominance of data power under capital monopoly therefore requires public sectors to open up data sharing step by step. To this end, it is necessary, first, to accelerate the construction of a national big data center, establish unified public data collection standards and storage formats, and achieve cross-departmental, cross-regional, and cross-level sharing of domestic data resources; and second, to emphasize the concept of “global integration” and strengthen cooperation with other countries, that is, to move toward a unified global system of data openness. In this process, the free flow of data should be promoted on the one hand, while on the other the security of each country’s data sovereignty must be safeguarded and rules for cross-border data flow established.
3. Standardizing Data Desensitization to Achieve Technical Security

With the spread of big data and artificial intelligence through social life, the intelligentization of social governance is also becoming prevalent. Against this background, not only commercial data but also government data keeps growing. This stock of data, generated by individuals and concerning social life, contains a large amount of private or sensitive data; moreover, as data circulates globally, a significant amount of secret or sensitive data also arises on the basis of national sovereignty. Because such “sensitive data” exists, each edge computing node needs to preprocess data according to unified standards, ensuring that only data conforming to those standards enters the storage and computing stages of edge nodes and thereby achieving the “desensitization” of political privacy. Data desensitization is precisely such a process: while retaining the business and technical value of the original data, it desensitizes and conceals sensitive information. Different methods are chosen for different usage needs, achieving varying degrees of desensitization. The technology can, to an extent, resolve the pain points of data sensitivity, enabling the normal use and circulation of data while protecting what is sensitive.

Standardized desensitization can be approached from the following two aspects. First, unify the scope of desensitization so that intercommunication and sharing are possible across fields. Opening public data to, collecting it from, and intercommunicating it with enterprises and international partners requires defining the scope of openness. China’s current legal provisions on information security are relatively scattered, focusing mainly on network security, data security, and personal information protection; on the whole they still address the protection obligations of public sectors and fall short on public data openness, leaving key issues such as data classification and data desensitization unaddressed. Moreover, lacking macro legal guidance, local public sectors may act inconsistently, without coordination, or even unlawfully when implementing public data development. It is therefore necessary to determine the scope of sensitive public data. Specifically: (1) establish a unified data openness directory that clearly defines the subjects of openness, such as the names, addresses, and contact information of government departments and public institutions, as well as the types of open data and the industries they belong to; (2) clarify the rules of openness, ensuring that the subjects of sensitive data are informed and have consented, limiting the purposes for which sensitive data may be opened, and regulating the procedures for opening it; and (3) establish unified data desensitization standards and requirements, so that information is fully utilized while sensitive content is protected from leakage. Throughout this process, concepts and terminology must be made concrete and uniform.
The mixing of concepts and the ambiguity of boundaries are common issues in practice, particularly in public data development, leading to unclear applicability of legal norms, where data that should be open is not opened, and data that should not be open lacks corresponding protection. This may also lead to different regional standards, resulting in the same type of data being inaccessible in one place while easily obtainable in another.
Second, establish universal standards for appropriate protection throughout the data lifecycle. At present, the security focus of information departments lies mainly in protecting information systems against external threats and securing terminals. Given that massive amounts of data are exchanged at every moment, it must be recognized that sensitive data faces leakage risks at every stage of its lifecycle, and the importance of whole-lifecycle protection of sensitive data is increasingly prominent. Implementing data desensitization should adhere to the following basic standards. First, desensitization algorithms must be irreversible: since the purpose of desensitization is to prevent the leakage of sensitive data, only irreversibility prevents data users from inferring and reconstructing the original data from the non-sensitive data. Second, desensitized data must still retain its data value. The fundamental purpose of desensitization is to leverage data’s value in application; desensitization is a means, not an end. If protection were the only concern, the strongest measure would be to destroy the data’s content and structure, but that would also nullify its value. Desensitized data must conceal sensitive content while remaining normally usable: technically, the original data format should be retained; operationally, its business analysis value should be preserved. Third, references to desensitized data should maintain integrity. Original data may carry relationships such as primary and foreign keys; during desensitization, associated data must undergo the same desensitization operations, otherwise sensitive information can still be recovered through associated records, rendering desensitization meaningless.
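A minimal sketch tying the three standards together: a keyed one-way hash is irreversible without the key; the output keeps the original field’s shape so the data remains usable; and the mapping is deterministic, so primary and foreign keys across tables stay consistent. The secret key, ID format, and table layout are illustrative assumptions:

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-custodian"  # illustrative; never shipped with the data

def desensitize_id(citizen_id: str) -> str:
    """Irreversible: an HMAC-SHA256 digest cannot be inverted without the key.
    Deterministic: the same input always yields the same token, preserving
    primary/foreign key relationships across desensitized tables."""
    digest = hmac.new(SECRET_KEY, citizen_id.encode(), hashlib.sha256).hexdigest()
    # Keep the original length and non-sensitive prefix so downstream systems
    # that validate the field's format keep working (value kept, content hidden).
    return citizen_id[:2] + digest[: len(citizen_id) - 2].upper()

persons = [{"id": "ID12345678", "district": "North"}]
visits = [{"person_id": "ID12345678", "service": "registration"}]

masked_persons = [{**p, "id": desensitize_id(p["id"])} for p in persons]
masked_visits = [{**v, "person_id": desensitize_id(v["person_id"])} for v in visits]
# Referential integrity survives: the join still works on the masked keys.
assert masked_persons[0]["id"] == masked_visits[0]["person_id"]
print(masked_persons[0]["id"])
```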
4. Emphasizing the Value of Rule of Law to Stabilize Social Relationships

In the rapidly advancing field of computer science, research on scientific ethics lags behind. As a new technology, edge computing must pay special attention to the risk of technological ethics interfering with and undermining political ethics. The Fourth Plenary Session of the 19th Central Committee of the Communist Party of China proposed: “Building a socialist rule of law system with Chinese characteristics and establishing a socialist rule of law state is an inherent requirement for upholding and developing socialism with Chinese characteristics.” With “rule of law” as the core value guiding the development of edge computing platforms, public sectors can make full use of the “data dividend” while rigorously guarding against interference with political ethics.

Specifically, first, the universality of data development requires the guidance of the rule of law. Data openness is not merely a governance issue for a single enterprise, industry, or public institution; data management and application are bound up with the vast, globalized social organization in which people now live, and so require legal regulation. For enterprises, collecting and using large amounts of commercial data brings commercial benefit but also significant legal risk. For public sectors, data from different regions and departments must be interconnected, and in the course of government reform conflicts may arise between public power and private rights. For global society, while the convenience of free data interchange grows, the basic sovereignty of each country must also be safeguarded. Second, the technical nature of data development requires the guidance of the rule of law. Data development rests on rapidly advancing computer technology, and where its ethics remain under-researched, new technologies will otherwise expand without boundaries, making legal limits necessary. Algorithms, as the underlying logic of data development, should be guided by the rule of law in a positive direction; only then can problems such as algorithmic discrimination and algorithmic monopoly be regulated at the source, promoting the healthy development of the whole platform and industry. Third, the complexity of data development requires the guidance of the rule of law. Unlike traditional social governance, data development faces a larger social scale, richer social demands, and faster social change, all of which likewise call for legal guidance.

Conclusion

From 2018 to 2020, edge computing was listed by Gartner, the world’s leading IT research and advisory firm, among its “Top 10 Strategic Technology Trends.” With the explosive growth of data communication, storage, and processing at the edge, public data development must rethink how to collect, store, and share data more efficiently. Introducing edge computing into the construction of public data development platforms is an inevitable move as data volumes grow, and an important measure for adapting to the wave of digitalization, improving administrative efficiency, and promoting economic growth.
However, the introduction of edge technology into the construction of public data development platforms may indeed trigger the differentiation of centralized governance mechanisms, data security and technological crises, and the weakening of political ethics. Therefore, it is necessary to implement legal regulations on the introduction of edge computing into public data development platforms from aspects such as balancing data power, standardizing data desensitization, and guiding values. In the future construction of public data development platforms, in addition to continuously improving the application of existing technologies such as cloud computing and blockchain, it is also essential to fully leverage the value of edge computing in promoting the multiplier effect of data elements.