Edge AI is starting to scale
Edge AI has rapidly evolved from a laboratory technology into a core engine of industrial transformation. Gartner predicts that by 2026, 50% of global enterprises will deploy AI capabilities at the edge, while STL Partners estimates that by 2030 the edge computing market will exceed 445 billion USD, with a compound annual growth rate of up to 48%. This explosive growth is driven by the push toward decentralized algorithms and the need to process AI workloads locally.
On the algorithm side, the recent rise of generative large models has raised the ceiling of AI capabilities, expanding the limited set of scenarios addressable in the CNN era to nearly all scenarios. At the same time, models are being steadily miniaturized: model size is compressed while capability is preserved, raising knowledge density so that AI models can be deployed at the edge.
For example, the recently released Qwen3 offers capabilities partially comparable to those of Qwen2.5 72B. Meanwhile, the open-sourcing of algorithms such as Qwen and DeepSeek has significantly lowered the barrier to acquiring edge AI capabilities, accelerating large-scale deployment.
Different industries have an urgent need for localized AI processing. In finance and healthcare, data sovereignty and privacy protection require that sensitive data be processed locally; in industrial systems and assisted driving, real-time decision and control requirements cannot tolerate the latency of remote data transmission; from a cost perspective, continuously running large AI models in the cloud incurs significant token consumption, and the resulting service costs are hard for many enterprises to bear… Together, these factors push AI toward the edge.
Edge AI digitizes the physical world, reconstructing the data value chain
The core value of edge AI lies in digitizing the physical world: capturing dynamic information in real time through cameras, microphones, sensors, and other devices, and using AI algorithms to convert it into structured digital information that can drive further intelligence, forming a closed loop of perception, analysis, decision, and action. This process not only changes how data is processed but also reconstructs the entire data value chain.
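A minimal sketch of this closed loop is shown below; the functions are illustrative stand-ins (not any specific SDK) with simulated data in place of a real sensor and model:

```python
import random
import time

def read_sensor():
    """Perception: sample raw data from the physical world (simulated here)."""
    return {"temperature_c": 20 + random.gauss(0, 5)}

def infer(sample):
    """Analysis: a stand-in for an on-device AI model that structures raw data."""
    return {"overheat_risk": sample["temperature_c"] > 30}

def decide(result):
    """Decision: map structured output to a control decision."""
    return "throttle" if result["overheat_risk"] else "normal"

def actuate(action):
    """Action: feed the decision back into the physical system (printed here)."""
    print("control action:", action)

# Perception -> analysis -> decision -> action, entirely on the device:
for _ in range(5):
    actuate(decide(infer(read_sensor())))
    time.sleep(0.1)
```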
In industrial scenarios, edge AI enables efficient predictive maintenance and reduces operating costs: AI models analyze equipment telemetry (vibration, pressure, temperature, and so on) in real time and turn it into fault warnings before a breakdown occurs. Edge AI can also perform defect detection, significantly reducing false detection rates.
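As a hedged illustration of how such a warning might be produced locally, the sketch below flags anomalies in a vibration signal with a simple rolling z-score; a real deployment would typically use a trained model rather than this hand-rolled threshold:

```python
import random
from collections import deque
from statistics import mean, pstdev

def vibration_monitor(readings, window=50, z_threshold=3.0):
    """Yield a warning whenever a reading deviates strongly from the
    recent baseline (rolling z-score over the last `window` samples)."""
    history = deque(maxlen=window)
    for t, value in enumerate(readings):
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) > z_threshold * sigma:
                yield {"sample": t, "value": value, "warning": "possible fault"}
        history.append(value)

# Synthetic example: a stable signal with one injected spike.
signal = [random.gauss(1.0, 0.05) for _ in range(200)]
signal[150] = 3.0
for alert in vibration_monitor(signal):
    print(alert)
```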
In smart traffic scenarios, cameras combined with edge AI can measure traffic density, converting video streams into structured traffic data and dynamically adjusting signal timing to improve traffic flow.
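A hedged sketch of the timing step: given per-approach vehicle counts from a detector (the detector and signal-controller interfaces are assumed, not shown), split a fixed cycle's green time in proportion to demand:

```python
def split_green_time(vehicle_counts, cycle_s=90, min_green_s=10):
    """Allocate green time per approach proportionally to detected demand,
    while guaranteeing a minimum green phase for every approach."""
    approaches = list(vehicle_counts)
    flexible = max(cycle_s - min_green_s * len(approaches), 0)
    total = sum(vehicle_counts.values()) or 1  # avoid division by zero
    return {
        a: round(min_green_s + flexible * vehicle_counts[a] / total, 1)
        for a in approaches
    }

# Counts per approach, e.g. from an edge object detector on the intersection camera:
print(split_green_time({"north": 24, "south": 18, "east": 6, "west": 2}))
# -> heavier approaches get longer green within the same 90 s cycle
```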
In medical scenarios, edge AI can combine examination results with a patient's history to generate personalized treatment recommendations, significantly improving treatment efficiency in hospitals.
In smart grid scenarios, edge AI can analyze power load data in real time and dynamically adjust supply strategies to improve distribution efficiency; it can also analyze video streams from drone-based grid inspections, improving inspection efficiency.
In agricultural scenarios, edge AI can generate fertilization plans in real time from sensor data and video streams, reducing fertilizer usage, and can guide drones to harvest ripe fruit precisely, improving production efficiency.
Edge AI chips: the “new infrastructure” of the intelligent era
As the physical carrier of edge AI, chips must deliver high performance at low power. Performance determines the quality and speed of AI capabilities, while low power consumption determines how many edge scenarios the chip can serve. In a gasoline vehicle, for example, there is no elaborate liquid cooling and only passive cooling is available; the same is true outdoors, where complex cooling is impractical. These constraints create a hard requirement for low power: a chip with high power consumption simply cannot be used in such scenarios.
High-performance, low-power edge AI chips require targeted design. In practice this means dedicated AI processors that maximize the share of energy spent on actual computation and minimize the share spent moving and reshaping data. Treating whole operators as the instruction set coarsens instruction granularity, which opens up the underlying microarchitecture design space and allows a dataflow microarchitecture to be adopted.
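As a conceptual illustration only (not any vendor's actual ISA), the sketch below represents a small network as a list of operator-level "instructions"; each entry is a whole operator rather than individual multiply-accumulates, which is what leaves the microarchitecture free to stream tensors between fused stages on-chip:

```python
from dataclasses import dataclass, field

@dataclass
class OpInstr:
    """One coarse-grained instruction: a whole operator, not a scalar op."""
    op: str          # e.g. "conv2d", "relu", "pool"
    inputs: list
    attrs: dict = field(default_factory=dict)

# A tiny network expressed at operator granularity; a dataflow backend can
# schedule each entry as one unit and stream intermediate tensors between them.
program = [
    OpInstr("conv2d", ["input"], {"kernel": (3, 3), "out_channels": 16}),
    OpInstr("relu",   ["conv2d_0"]),
    OpInstr("pool",   ["relu_0"], {"kind": "max", "size": 2}),
]

for i, instr in enumerate(program):
    print(f"instr {i}: {instr.op} <- {instr.inputs} {instr.attrs}")
```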
Customers inevitably rely on the chip's toolchain during development. From an FPS/$ perspective, the maturity and usability of that toolchain matter greatly: a complete toolchain significantly reduces customers' R&D cost and time and accelerates the deployment of AI applications. Working with the open-source community and partners continuously enriches the toolchain's functionality and improves its usability and user-friendliness.
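A back-of-the-envelope view of FPS/$, with numbers chosen purely for illustration: engineering effort sits in the denominator alongside the chip price, which is why toolchain maturity shows up in the metric.

```python
def fps_per_dollar(fps, chip_price, dev_cost_total=0.0, units=1):
    """Frames per second delivered per dollar spent, amortizing one-off
    development cost (porting, quantization, tuning) over the deployed units."""
    cost_per_unit = chip_price + dev_cost_total / units
    return fps / cost_per_unit

# Illustrative only: same chip and throughput, but a mature toolchain
# cuts the porting effort by an assumed factor of ten.
print(fps_per_dollar(fps=60, chip_price=25, dev_cost_total=200_000, units=10_000))  # ~1.33
print(fps_per_dollar(fps=60, chip_price=25, dev_cost_total=20_000,  units=10_000))  # ~2.22
```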
The large-scale deployment of edge AI marks a new stage in the intelligent era. From algorithm innovation to industry applications, from chip design to toolchain refinement, every link works together to build the edge AI ecosystem. As the technology continues to advance, edge AI will play an important role in ever more fields and open up new possibilities for industry.
Author: Liu Jianwei, Co-founder and Vice President of Aixin Yuanzhi
(Editor: Franklin)