In the course of digital development, edge intelligence has become a key force driving transformation across industries. As a technology that fuses edge computing and artificial intelligence, combining the advantages of near-end processing and intelligent analysis, edge intelligence not only enables low-latency, real-time decision-making but also reduces bandwidth requirements and strengthens privacy. It can interact with the physical world and is widely used in autonomous driving, smart manufacturing, smart cities, telemedicine, video surveillance, and many other fields. This article examines the current state of edge intelligence, its technological breakthroughs, and its implementation bottlenecks, and discusses how it can better empower the digital and intelligent transformation of industries.
Why is Edge Intelligence receiving so much attention?
According to research by Market.US, the global edge intelligence market is expected to grow from 19.1 billion USD in 2023 to over 140 billion USD by 2032, a compound annual growth rate (CAGR) close to 26%. Meanwhile, Precedence Research estimates that the edge computing market may reach 3.61 trillion USD by 2032, a CAGR of 30.4%.
Although the numerical gap reflects how differently research institutions define the "edge" and draw the line between general-purpose and AI computing, the shared trend is clear: the decentralization of computing power and scenario-based deployment have become irreversible, with both forecasts projecting compound annual growth rates in the 26-30% range.
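As a quick sanity check on forecasts like these, the standard CAGR formula links a start value, an end value, and a time horizon. The sketch below uses the Market.US base figure and reported growth rate; the helper function names are illustrative, not from either research firm:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value: float, rate: float, years: int) -> float:
    """Project a start value forward at a constant annual growth rate."""
    return start_value * (1 + rate) ** years

# Market.US base figure: 19.1 (billions USD) in 2023, reported CAGR ~26%.
# Back-computing the implied 2032 size checks the numbers for consistency.
implied_2032 = project(19.1, 0.26, 2032 - 2023)
print(f"Implied 2032 market size at 26% CAGR: {implied_2032:.0f}B USD")
```

Running the projection shows the implied 2032 figure lands in the low hundreds of billions, consistent with the ~26% CAGR the report cites.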
“After years of innovative practice and thorough validation of high-performance, large-scale models in the cloud, we are now witnessing the transition of precise, efficient, and rapid AI models from the cloud to edge devices such as smartphones and cars,” noted Dennis Laudick, Vice President of Product Management at Imagination. He pointed out that the reasons enterprises and users choose to process AI tasks at the edge rather than in the cloud are varied—from protecting sensitive information such as biometric features, location data, and financial data, to ensuring reliable, immediate responses even without network connectivity.

Dennis Laudick, Vice President of Product Management at Imagination
At the same time, edge hardware is also breaking through performance bottlenecks, achieving the required AI computing power under strict constraints of device size and power consumption (most edge devices rely on battery power). According to predictions by Counterpoint Research, by 2028, 54% of mobile edge devices will have AI processing capabilities.
Silicon Labs believes that edge intelligence, as a fusion of edge computing and artificial intelligence, can perform data processing and intelligent decision-making directly on edge devices. In recent years it has become an important means for industries to pursue efficient intelligent operation, reduce data-processing latency, cut bandwidth consumption, and strengthen privacy protection, bringing new opportunities to industrial, commercial, medical, and home applications.
In particular, edge intelligence can process and analyze data at the network edge, responding to business needs in real time and providing strong technical support for industry digital transformation. In industrial intelligent manufacturing, for example, it helps enterprises monitor equipment status and predict faults in real time, improving production efficiency and product quality. In latency-sensitive applications such as healthcare, shifting data-processing tasks from the cloud to edge devices closer to the data source significantly reduces transmission latency and improves processing efficiency.
Mu Tao, Head of Marketing and Sales for Asia Pacific at XMOS, said that edge intelligence plays two roles. Functionally, it lets devices process data locally, reducing transmission latency and leakage risk and delivering low-latency, fast-responding, privacy-preserving edge AI processing. It also serves as an intelligent data interface for large computing centers or cloud networks, using AI to organize and preprocess sensor data and converting data formats after AI processing to ensure reliable operation.
Mu Tao, Head of Marketing and Sales for Asia Pacific at XMOS
Therefore, unlike the powerful GPUs and NPUs in well-resourced, costly computing centers, edge intelligence systems face strict limits on power consumption, cost, and board-level chip area, making efficient processors or SoC products the first key factor in the success of edge intelligence.
Significant Differences in Implementation Popularity
Despite edge intelligence undergoing a qualitative change from “single-point breakthroughs” to “system reconstruction,” it is undeniable that there are still significant differences in the application popularity of edge intelligence across different countries and industries.
Dr. Qiu Xiaoxin, founder and chairman of Aixin Yuanzhi, further explained using assisted driving as an example. She pointed out that internationally, Europe and the United States lead in basic algorithm innovation and chip architecture, such as Tesla achieving edge training and deployment closed loops through self-developed chips. In contrast, Chinese automotive companies have advantages in scenario implementation and engineering innovation, quickly iterating mass production solutions through scenario closed loops. The deeper impact is that edge intelligence is reconstructing the industry value chain—manufacturing’s “detection-maintenance-optimization” full-process closed loop, and the paradigm shift in urban governance from “post-response” to “predictive intervention” essentially transforms data into productive factors through intelligence.

Dr. Qiu Xiaoxin, founder and chairman of Aixin Yuanzhi
Overall, China is now transitioning from technology validation to large-scale application, and edge intelligence can better digitalize and intelligentize the physical world, improving industry efficiency and automating the delivery of results. For example, assisted driving has already become standard equipment; edge intelligence can analyze real-time data at intersections, optimize traffic-light timing, and reduce congestion. In production, it can analyze equipment operating data in real time for predictive maintenance and early fault warnings, reducing downtime, and it can detect defects during production, shortening quality-inspection cycles and reducing error rates.
Dr. Qiu Xiaoxin believes that the significant differences in the speed of edge intelligence adoption across different industries are due to varying core driving forces in terms of scenario demand rigidity, technological economy, and data sensitivity. “Frontier applications such as smart cities, assisted driving, home data centers, and embodied intelligence are clearly more suitable for the popularization of edge intelligence.”
For instance, household and industrial robots often require rapid responses, the latter sometimes at millisecond level. Edge intelligence processes data directly on the device, avoiding cloud round-trip delays; factory production data (such as process parameters) is sensitive, and edge computing reduces the risk of external transmission; and high-frequency sensor data (such as vibration and temperature) can be processed locally, with always-on proactive perception and analysis cutting bandwidth, cloud storage, and cloud service costs.
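The bandwidth saving from processing high-frequency sensor data locally can be illustrated with a minimal sketch (the sensor trace, window size, and z-score threshold below are illustrative assumptions, not from any vendor): the device screens vibration samples on-device and transmits only compact anomaly events instead of the raw stream.

```python
import statistics

def detect_events(samples, window=50, z_thresh=3.0):
    """Screen a high-frequency sensor stream locally and return only
    anomaly events (index, value) instead of the raw samples."""
    events = []
    for i in range(window, len(samples)):
        ref = samples[i - window:i]               # recent history
        mean = statistics.fmean(ref)
        stdev = statistics.pstdev(ref)
        if stdev > 0 and abs(samples[i] - mean) / stdev > z_thresh:
            events.append((i, samples[i]))        # report only the outlier
    return events

# Simulated vibration trace: steady baseline with one spike at index 120
trace = [1.0 + 0.01 * (i % 5) for i in range(200)]
trace[120] = 5.0
events = detect_events(trace)
print(events)   # only the spike is reported, not 200 raw samples
```

Instead of streaming 200 samples to the cloud, the device uploads a single event record, which is the kind of bandwidth and cloud-cost reduction described above.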
Dr. Qiu Xiaoxin cited the home data center as an example, which can serve as a “nerve center” integrating edge + embodied intelligence, coordinating service robots, smart appliances, and other terminals. For instance, the refrigerator’s edge AI identifies ingredient stock → the embodied robot autonomously places orders; the home monitoring edge system triggers an alarm → the monitoring robot goes to check, etc.
Conversely, despite the low latency, privacy protection, and offline availability that edge intelligence offers, some industries are slower to adopt it because of business characteristics, technical limitations, or economics, and cloud computing still dominates there. In finance, for example, banks and insurers must archive data for long periods under regulatory scrutiny, and anti-fraud and risk-assessment tasks rely on models with hundreds of billions of parameters; the storage and computing power available at the edge are not yet sufficient for such workloads.
“But one thing must be clear: edge computing is not equivalent to cheap cloud computing,” Dr. Qiu Xiaoxin pointed out. “The edge side requires a completely new computing architecture that is ‘born at the edge and designed specifically for the edge.’” This trend is fully reflected in Aixin Yuanzhi’s technology strategy, with its self-developed Aixin Tongyuan NPU architecture defined as a processor specifically designed for the AI era that “natively supports edge intelligence.”
“The development speed of edge AI is rapid, especially in the past year, with significantly increased attention, but it is still in the market’s nascent stage,” said Peng Zunian, Director of Consumer, Computing, and Communications Business at Infineon Technologies Greater China. He believes that edge AI demonstrates capabilities in improving real-time response, saving power, and reducing dependence on networks, greatly enhancing the user experience of many products.

Peng Zunian, Director of Consumer, Computing, and Communications Business at Infineon Technologies Greater China
From a development trend perspective, edge AI is still relatively dependent on the development experience of large models, mainly miniaturizing large models through techniques such as pruning and distillation to run on edge master control chips. Therefore, applications such as visual and voice recognition have a faster acceptance rate, while those niche scenarios without existing models to reference require longer technical accumulation and development cycles. In the future, AI models will certainly be based on embedded scenarios, emphasizing efficiency and real-time performance, so model training for embedded scenarios is worth focusing on.
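The distillation mentioned above trains a small student model to match a large teacher's softened output distribution. A minimal, framework-free sketch of the classic knowledge-distillation loss follows; the logits and temperature are illustrative values, not from any named model:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; a higher T softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between the softened teacher and student outputs,
    the objective minimized when distilling a large model into a small one."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [6.0, 2.0, -1.0]   # large cloud model's logits (illustrative)
student = [5.0, 2.5, -0.5]   # small edge model's logits (illustrative)
print(f"distillation KL loss: {distillation_loss(teacher, student):.4f}")
```

The loss is zero when the student exactly matches the teacher and grows as the distributions diverge; in practice this term is combined with a standard cross-entropy loss on the hard labels.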
During the conversation, Dennis also mentioned the automotive industry. As one of the fields where edge intelligence technology is most widely applied, every car today is equipped with local AI systems at different levels—from simple camera images for distortion correction and stitching (for panoramic imaging functions) to more complex non-intervention autonomous driving technologies (achieved through environmental perception algorithms and path planning algorithms).
“The automotive industry can quickly embrace this technology not only because the AI application scenarios and business models in this field are clear (manufacturers can set higher premiums for vehicles equipped with more advanced ADAS features), but also because the large-capacity batteries in vehicles can provide sufficient power support for high-performance edge AI systems,” he said.
In contrast, markets such as smartphones are still exploring suitable business models for edge AI. Although some companies have achieved differentiated competition through AI image editing tools, the performance limitations of mobile devices require processing technologies to achieve higher energy efficiency; even running lightweight models must break through computing power bottlenecks.
Silicon Labs told "Electronic Engineering Magazine" that how widely edge intelligence is applied varies across industries, driven mainly by technical fit, urgency of demand, and maturity of infrastructure. In their view, edge AI/ML will be applied most broadly in commercial, industrial, and home settings, including sensor-data processing (for anomaly detection), predictive maintenance, audio pattern recognition (for example, improved glass-breakage detection), simple command-word recognition, and visual applications (using low-resolution cameras for on-site detection or headcounts).
Mu Tao’s views are largely consistent with those of the interviewees mentioned above. He believes that the development of edge intelligence requires not only the pull of AI technology innovation but also the active exploration of traditional industries to use AI technology as a driving force for transformation and upgrading after achieving digitalization and networking. Therefore, many industries that have already gained significant benefits from digitalization often become important areas driving a new round of edge intelligence development, such as the rapidly growing Internet of Things sector, the automotive industry moving towards “new four modernizations,” and the digital consumer electronics sector.
For example, smart home devices, smartphones, laptops, tablets, wearables, smart speakers, smart cameras, and smart TVs with AI capabilities are growing rapidly. They integrate AI accelerators such as GPUs and NPUs into the main SoC alongside the traditional CPU, enabling them to run various AI models at low power and delivering new user experiences and device productivity. They can also process private information locally, protecting user privacy without uploading sensitive data to the cloud.
In automotive and industrial applications, system developers are likewise collecting vast amounts of data through added sensors and environmental-perception methods, then using GPUs or NPUs with machine-learning and other AI models and algorithms to turn that data into high-value intelligent solutions: enabling intelligent driving, surfacing findings that humans cannot quickly discern and judge, and managing equipment intelligently.
“Large Models+ Edge Computing” Opens New Spaces
Dr. Qiu Xiaoxin lists hardware and algorithms as key focuses. On the one hand, hardware should focus on improving the efficiency of heterogeneous computing hardware in running AI algorithms and reducing power consumption, overall continuously lowering the landing costs of edge intelligence, allowing it to cover as many scenarios as possible, making AI more inclusive. On the other hand, software should focus on model compression technology, that is, how to better and more intelligently run the capabilities of cloud-side models in a smaller model and at a lower cost on the edge side.
Moreover, the emergence of the DeepSeek large model has brought edge intelligence dual opportunities in "capability decentralization" and "scenario adaptation." DeepSeek began as a cloud-computing benchmark, but like many foundational models it now offers lightweight versions suited to edge devices; the DeepSeek-VL2 series performs well enough to compete with existing open-source dense and mixture-of-experts (MoE) models.
Through knowledge distillation, model compression, and similar techniques, the general cognitive abilities of large models such as DeepSeek not only make AI more widely accessible but also equip the edge with more precise fine-grained perception and complex decision-making, stimulating strong demand for edge intelligence. DeepSeek's MoE architecture also offers a new way to reduce hardware costs at the edge.
At the same time, this evolution also confirms that the field of AI can still achieve significant breakthroughs through software innovation: when the developer community gains hardware support with flexible architecture and strong performance, they can customize new models for specific market or device needs at an astonishing speed. The transformation of DeepSeek is a vivid example of this “algorithm-hardware co-evolution.” It is not an exaggeration to say that the technical paradigm of “large models + edge computing” is opening up new value spaces.
“As demonstrated by breakthrough technologies like DeepSeek, the current stage of artificial intelligence has made significant leaps in accuracy, performance, and energy efficiency, mainly stemming from innovations in the software domain.” Dennis believes this principle applies equally to edge computing and cloud computing: to widely popularize advanced algorithms in the market and bring their value to global users, the most critical mission of the technology ecosystem is to ensure that edge devices possess sufficient flexibility and programmability to support cutting-edge model technologies.
This means that edge hardware cannot rely solely on highly specialized processors such as NPUs to meet all AI needs. While such processors run known algorithms efficiently and achieve high performance, they struggle to adapt to new model architectures. GPU technology, in contrast, lets edge devices keep benefiting from software innovation while giving hardware designers and software developers an ideal balance between performance and programmability, an advantage that will persist long-term.
However, Mu Tao believes that whatever the specific technical path, chips for graphics processing and AI training, chips for AI inference on edge devices, and SoCs equipped with GPUs or NPUs are currently the most common. For edge AI applications, though, sufficient computing power is not enough; media types, operating power consumption, real-time performance, and other system requirements must also be considered. Edge AI chips with micro-architectures tailored to edge AI, together with corresponding development tools, will therefore rise to prominence as edge intelligence develops rapidly.
For example, AI SoCs adopting “software-defined system design + micro-architecture AI acceleration” will be one of the key development directions in the future, as they can flexibly process various media information such as audio, images, and video across a wide range of scenarios through AI. In other words, platforms that are more flexible and can achieve specific performance through architectural innovation in edge AI chips will help the industry deploy AI functions in resource-constrained edge environments without being limited to relying on specific functional characteristics.
The development of edge intelligence relies on strong support from technologies such as the Internet of Things, 5G, artificial intelligence, and edge computing hardware. In future technology research and development directions, the first focus should be on the collaborative technology between edge intelligence and cloud computing. Edge intelligence is suitable for tasks that require rapid decision-making, low latency, or data privacy protection on devices. Cloud computing is more suitable for large-scale data storage, complex analysis, and model training, complementing edge intelligence, as higher-level processing that cannot be completed at the device level can be processed in the cloud and then sent back to edge devices.
Secondly, security and privacy protection technologies are becoming increasingly prominent as the application of edge intelligence continues to expand. Technologies such as data encryption transmission, device identity authentication, and privacy-preserving computing ensure the security of edge devices and data, making them important directions for future technology research and development.
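The edge/cloud division of labor described above can be sketched as a simple routing policy. The task attributes and thresholds below are illustrative assumptions, not from the article; real systems weigh many more factors:

```python
from dataclasses import dataclass

@dataclass
class Task:
    latency_budget_ms: float   # how quickly a response is needed
    privacy_sensitive: bool    # must the data stay on the device?
    compute_cost: float        # rough model-size/FLOPs score (arbitrary units)

def route(task: Task, edge_capacity: float = 10.0) -> str:
    """Send a task to the edge when it is privacy-sensitive, or when it is
    latency-critical and fits the device; otherwise fall back to the cloud."""
    if task.privacy_sensitive:
        return "edge"
    if task.latency_budget_ms < 50 and task.compute_cost <= edge_capacity:
        return "edge"
    return "cloud"

print(route(Task(latency_budget_ms=10, privacy_sensitive=False, compute_cost=2)))    # edge
print(route(Task(latency_budget_ms=500, privacy_sensitive=False, compute_cost=80)))  # cloud
print(route(Task(latency_budget_ms=500, privacy_sensitive=True, compute_cost=1)))    # edge
```

The third case captures the complementarity the article describes: heavy analysis and model training go to the cloud, while private or time-critical work stays on the device, with cloud results sent back to edge devices when needed.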
Edge Intelligence, Moving Towards Open Source and Openness
Mu Tao expressed support for this. In his view, open source and openness are indeed one of the important means to promote the future development of edge intelligence. Faced with the diverse edge intelligence scenarios and applications, it is impossible for a few application development companies to develop all commercial models and tools alone; a large number of open-source models and algorithms are also needed to support the rapid landing of edge intelligence.
“This can support numerous enterprises and developers to quickly and with low investment start projects and ‘stand on the shoulders of giants’ to develop edge intelligence applications without posing a threat to privacy—deploying AI models at the edge means only sharing the necessary metadata, rather than raw data that could be misused in the cloud,” Mu Tao said.
Dr. Qiu Xiaoxin pointed out that open source and openness are both the core path and an important boost for the development of edge intelligence, which aligns closely with Aixin Yuanzhi's mission of "inclusive AI." In practice, only by strengthening open-source foundational models can the capabilities of edge intelligence improve rapidly. At the same time, hardware-level security isolation lets enterprises, families, and individuals keep their data loops closed within local hardware, effectively eliminating potential privacy leaks; edge intelligence is thus inherently strong at privacy protection.
Dennis added that the field of artificial intelligence will always present a pattern of coexistence between open-source models and proprietary models. A key direction that the technology industry urgently needs to collaboratively promote is to establish open standards to achieve the portability of AI software across different devices and hardware. There are already many practical examples, such as the open-source library LiteRT for running AI models on edge devices, and the heterogeneous computing standard oneAPI launched by the UXL Foundation.
While software is the core element in creating quality intelligent experiences, its rapidly rising development costs have become a concern in multiple markets. Promoting open standards for the portability of AI workloads will become the cornerstone for reducing software development costs and driving the large-scale landing of edge intelligence.
In terms of privacy and data security, edge intelligence has inherent advantages. Since edge AI runs on local devices, sensitive content such as health data and financial information does not need to be uploaded to the cloud. This architecture effectively avoids the uncertainty risks of cloud-side AI in privacy protection, building a security barrier for user data.
Peng Zunian further refined the applicable areas of open source and openness. He emphasized that open source and openness typically refer to the models themselves, not the data—whether it is the data used for model training or the local data during user usage, both need to be protected. Therefore, Infineon integrated security hardware supporting high-level information security certification into the design of the PSOC Edge AI MCU from the outset to protect data from being infringed and stolen, significantly reducing the risk of user privacy data leakage and misuse.
Can You Have Your Cake and Eat It Too?
As the application scenarios of edge intelligence continue to enrich, people hope to achieve cross-scenario data sharing and integration to extract greater value on one hand, while wanting to reduce the costs of edge intelligence technology applications across various industries on the other. How to effectively address these two issues has become a key concern.
Dennis believes these development opportunities are not mutually exclusive and can be advanced in parallel. For applications such as advertising and insurance, data sharing can help third parties gain insight into user behavior and unlock commercial value. Sharing data with third parties is not mandatory, of course: enterprises that prioritize data security can deploy high-performance AI algorithms on private clouds, aggregating multi-terminal data within the enterprise, processing it on private servers, and ultimately giving decision-makers a unified data view.
Device-side data processing opens up another important market. In scenarios where internet connectivity is unstable, rapid responses are needed, or data must not leave the device (such as smartwatches), low-cost edge intelligence technology will promote the popularization of intelligent applications.
Of course, in some cases, the boundaries between cloud and edge may overlap—when AI tasks can be processed at both the edge and in the cloud, application developers need to weigh the user experience and cost-effectiveness brought by both. After all, cloud computing services are never free!
The rapid diffusion of artificial intelligence technology to the edge and endpoint is seen by Mu Tao as one of the very important trends for 2025 and beyond. “Decentralization of computing power” is just one aspect of this new development; at the edge or endpoint, there is a need for a balance between computing power, cost, and power consumption. More critically, embedded systems need to achieve deep integration of models, components, and scenarios while maintaining high flexibility, allowing more developers familiar with specific scenarios to meet market demands through optimized solutions.
“The core of these two aspects is to balance the relationship between data value density and deployment economy. Cross-scenario integration can make intelligent perception more sufficient, enhance data value density, and improve the ROI of edge intelligence deployment investments. Even if the individual cost of edge intelligence rises, the overall TCO will still decline. Thus, the two are not contradictory,” Dr. Qiu Xiaoxin further stated. “In the future, the industry standards for edge computing will no longer be dominated by peak computing power but will be defined by scenario performance indicators. Enterprises without deep scenario understanding will face challenges.”
Achieving cross-scenario data sharing and integration relies on wireless connectivity technologies. As a leading company in this field, Silicon Labs provides more stable and efficient connectivity solutions for edge devices, reducing energy consumption of edge devices, extending battery life, and lowering deployment and maintenance costs. For instance, in the medical field, through artificial intelligence software, data can be aggregated from static data sets such as medical information systems (such as electronic medical records) and then combined with real-time data to display physical conditions, achieving personalized medical care.
From “Scenario Adaptation” to “Capability Reconstruction”
Dr. Qiu Xiaoxin predicts that the next 3 to 5 years will see the large-scale landing of edge intelligence, a qualitative change from "scenario adaptation" to "capability reconstruction." As heterogeneous computing paradigms mature, the "dynamic perception-decision" loop closes at the edge, and privacy-computing-native architectures evolve, edge computing will achieve disruptive large-scale deployment in smart cities, intelligent driving, smart manufacturing, and smart homes. "In fact, the development of edge intelligence will completely reconstruct the underlying logic of human-computer interaction, industrial efficiency, and social governance."
It is foreseeable that edge intelligence will gradually break away from the positioning of being “subsidiary to cloud computing” and become the infrastructure for reconstructing the operating rules of the physical world. When urban pipeline networks, agricultural machinery, energy devices, and other entities gain autonomous evolution capabilities through edge nodes, the productivity revolution of human society will enter a new era driven by “environmental intelligence.”
“Deploying AI models on small systems like MCU is indeed a win-win approach. MCU requires few system resources and has strong real-time capabilities. As the NPU at the MCU level matures, local inference capabilities continue to improve, supporting AI computing power on small systems,” Peng Zunian believes. In the next 3 to 5 years, “edge AI is a fast lane for double-digit growth,” and more and more niche markets will emerge, especially in the field of AI combined with low power consumption, which is a “very promising direction.”
Dennis firmly believes that edge hardware system design is about to enter an exciting new stage. The future of edge AI requires hardware that is efficient, high-performance, and flexible enough to adapt to future workloads. Edge systems that previously required graphics processing capability may no longer need additional NPU chips, as more powerful yet smaller GPUs can meet all their needs.
Mu Tao believes that almost all markets once dominated by traditional digital chips are now moving toward intelligentization, so intelligence, and the demand it creates for storage, communication, and networking chips, is the biggest growth driver in the global semiconductor market. Even in very traditional fields such as audio technology, adopting edge intelligence can deliver considerable market results.
Silicon Labs is currently focused on developing low-power, small-footprint products with built-in machine-learning capabilities that process data directly on the device. In particular, machine learning can be used in embedded systems with sensors, microphones, and cameras to analyze time series, audio patterns (including voice commands), and images (for object detection and fingerprint reading).
The benefits of this approach are evident. First, edge intelligence reduces latency, as processing data locally enables faster decision-making and real-time actions, which is particularly important for tasks requiring real-time responses or safety-critical applications. Even in the event of network issues or management restrictions, edge intelligence can ensure that local processing systems operate normally.
Secondly, since sensitive data is retained on the device, only inference results/metadata are transmitted. This reduces the risk of intrusion and hacking, thereby enhancing security and privacy and protecting intellectual property. Finally, by offloading processing from the cloud, edge intelligence itself has lower power consumption and extends the battery life of devices.
Conclusion
As IoT technology continues to mature, more and more devices can connect to the network and generate data that needs to be processed and analyzed in real-time at the edge, driving the development of edge intelligence. At the same time, AI models and AI accelerators provide stronger computing power for edge intelligence, supporting complex AI algorithms and large-scale data processing, making the application scenarios of edge intelligence gradually expand from industrial fields to consumer fields including smart homes, wearable devices, and smart healthcare, with a very broad prospect.
(Editor: Franklin)