Abstract
Can AI chips be made with money alone? Money is necessary but not sufficient; only sustained success earns a name and a reputation. Why are there only a handful of CPU and GPU companies worldwide, yet the wave of artificial intelligence has produced so many AI chip companies in just a few years? Can AI chips really be made with money alone? This article examines the new challenges in edge AI chip design, whether traditional chip development models are being disrupted, how to seek new breakthroughs, and the future directions of edge AI chips.
Why are there only a handful of CPU and GPU companies worldwide, yet the wave of artificial intelligence has produced so many AI chip companies in just a few years? Can AI chips really be made with money alone?
This is a “friendship-ending” topic among chip engineers. Proponents argue that compared with high-end general-purpose processors, most AI chips are ASICs, which are much easier to develop. Opponents counter that AI chips must be designed around specific algorithms, face more constrained application scenarios, and demand a delicate balance among competing design goals. Edge AI chips aimed at fragmented applications, in particular, are not necessarily easier to build than CPUs or GPUs.
Hard to develop and even harder to deploy, consuming both time and brainpower: that is an honest picture of edge AI chip development.
01 New Challenges in Edge AI Chip Design

Image Source | Internet of Business
Edge computing is in fact a concept proposed in the early 1990s, originating in content delivery networks used to serve web and video content. At that time, it was a service delivered from edge servers located closer to users.
Looking back over the past thirty years, the types and volumes of data have changed dramatically. Initially, data was generated by humans (e.g., broadcast, television media), but now a large number of smart terminal users are generating data, along with machine-generated data and metadata (data describing data)… This data must be processed close to where it is generated. This has led to an increasingly diverse range of data types, far beyond what traditional CPUs or GPUs can handle.
Song Jiqiang, Vice President of Intel Research Institute and Director of Intel China Research Institute, stated that edge computing presents a huge innovation opportunity, driven by data and scenarios. It requires extensive vertical integration to optimize the entire system, including algorithms, application loads, and heterogeneous computing integration, rather than just focusing on a part of the underlying technology.
Yu Xin, President of Shiqing Technology, believes that the main challenge for edge AI chips is improving computational efficiency. The era of simply stacking computational power through ever more advanced process nodes is over; how fully applications and algorithms can exploit a chip’s computational power is now a decisive factor in the future development of AI chips.
Hua Baohong, Deputy General Manager of Lingxi Technology, mentioned three major technical challenges for AI chips:
First is the challenge to Moore’s Law: semiconductor lithography is nearing its physical limits. Pushing process nodes below 3 nm presents major technical hurdles, and balancing cost, market, and demand may become increasingly difficult. Alongside this come the bottlenecks of the von Neumann architecture: the mismatch between fast processors and limited data I/O bandwidth, and between fast processors and energy-inefficient memory, both of which severely limit overall efficiency.
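The compute-versus-bandwidth mismatch he describes can be made concrete with the well-known roofline model: a workload’s attainable performance is capped by either peak compute or memory bandwidth times arithmetic intensity, whichever is smaller. The sketch below uses made-up hardware numbers purely for illustration, not measurements of any real chip.

```python
# Roofline sketch of the von Neumann bottleneck: a kernel whose
# arithmetic intensity (FLOPs per byte moved) falls below the machine's
# "ridge point" is memory-bound no matter how fast the ALUs are.

def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Roofline model: performance = min(compute roof, bandwidth roof)."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical edge accelerator: 4 TFLOPS peak, 25 GB/s DRAM bandwidth.
peak, bw = 4000.0, 25.0
ridge = peak / bw  # intensity (FLOPs/byte) needed to reach peak compute

# A memory-bound layer (e.g. a large fully-connected layer, ~2 FLOPs/byte)
fc = attainable_gflops(peak, bw, 2.0)      # 50 GFLOPS: 1.25% of peak
# A compute-bound layer (e.g. a large convolution, ~400 FLOPs/byte)
conv = attainable_gflops(peak, bw, 400.0)  # capped at the 4000 GFLOPS roof

print(f"ridge point: {ridge:.0f} FLOPs/byte")
print(f"fully-connected layer: {fc:.0f} GFLOPS")
print(f"convolution layer: {conv:.0f} GFLOPS")
```

The gap between the two layers shows why feeding data to the processor, not raw compute, is often the binding constraint at the edge.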
Second is the need to balance high energy efficiency and high cost-effectiveness, ensuring that edge users enjoy quality computational services while maintaining reasonable pricing.
Finally, there is a need for adaptability to various complex scenarios in edge computing. How to handle dynamic and complex scenarios? How to solve generalization with limited samples in the absence of big data? How to face real-world data that cannot be exhaustively enumerated? These are all challenges currently faced by AI chips.
Rob Fisher, Director of Product Management for Imagination’s computing business, also emphasized that AI chips must be designed with sufficiently flexible architectures to adapt to new innovations, while achieving efficient acceleration compared to programmable devices. Achieving this balance requires enough fixed functionality to gain performance and efficiency, while also having sufficient programmability to keep pace with the rapid development of new networks and algorithms.
02 Impact on Traditional Chip Development Models
The complexity of edge scenarios, the sophistication of algorithms, and the fragmentation of tasks all surface at the hardware level: the underlying chip design must consider how software can define hardware, leaving ample headroom for future software iteration to adapt to diverse and complex application scenarios. At the software-stack level, the entire chain, including instruction sets, low-level compilers, high-level compilers, algorithms, and application-framework support, must be able to cover these needs. This means that existing chip development models, at both the hardware and software levels, are facing new disruption.
Hua Baohong from Lingxi Technology believes that “achieving the integration of fragmented industries” is crucial. Chip design must first consider edge computing scenarios and undergo customized optimization; once the chip is fabricated, all subsequent updates for various scenarios and complex tasks will be iterated through software. In other words, the demand for implementation is driven by a software-defined hardware model, which shields the differences in underlying hardware through decoupling technology, enabling the realization of fragmented applications.
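The decoupling idea described above can be sketched as a stable inference interface that applications target, with per-chip differences hidden behind interchangeable backends. All class and method names below are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of "software-defined hardware" decoupling: the application layer
# compiles against one contract; each chip supplies its own backend.
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Stable contract the application layer depends on."""
    @abstractmethod
    def load_model(self, model_bytes: bytes) -> None: ...
    @abstractmethod
    def run(self, inputs: list) -> list: ...

class NpuBackend(InferenceBackend):
    """Would drive a vendor NPU; here it just applies a placeholder kernel."""
    def load_model(self, model_bytes: bytes) -> None:
        self.loaded = True
    def run(self, inputs):
        return [2.0 * x for x in inputs]  # stand-in for real accelerated ops

class CpuFallback(InferenceBackend):
    """Reference path used when no accelerator is present."""
    def load_model(self, model_bytes: bytes) -> None:
        self.loaded = True
    def run(self, inputs):
        return [2.0 * x for x in inputs]

def select_backend(has_npu: bool) -> InferenceBackend:
    # Supporting a new chip means adding a backend; application code is
    # untouched, which is how fragmented scenarios get iterated in software.
    return NpuBackend() if has_npu else CpuFallback()
```

A new scenario or chip revision then ships as a software update behind the same interface, rather than a hardware respin.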
In fact, for the myriad of chip architectures, downstream manufacturers often cannot truly “efficiently utilize hardware resources.” Yu Xin from Shiqing Technology stated that this requires chip manufacturers to do more than simply provide a chip; they must offer a complete solution tailored to customer requirements. This presents a higher challenge for chip manufacturers’ algorithm and application development capabilities. “Of course, another approach is to make chips more ‘user-friendly’ for developers,” he added, “These are two directions of effort that are fundamentally not in conflict. After all, the fundamental starting point for chip manufacturers is still how to better cooperate with customers, providing value-added services centered around the chip as the core and foundation.”
03 How to Seek New Breakthroughs

Image Source | Arm Community
Yu Xin from Shiqing Technology believes that AI chip design can follow two major ideas: one is how to be more developer-friendly, allowing for optimized designs that ensure every transistor in the chip is truly utilized; the second is to achieve closer integration of algorithms and chips through dedicated architectures.
From the mainstream directions observed so far, in addition to smart security and intelligent vehicles, the evolution of overall intelligent living is driving the development of edge computing. “It is not necessarily about new scenarios and device forms; the intelligent upgrade of existing terminal devices is also a significant driving force. Of course, this is a process that requires time. The speed of this process depends on whether AI technology brings real convenience and whether it is feasible and practical in terms of cost. From this perspective, both quality and affordability are essential,” Yu Xin added.
Regarding the next breakthrough point, Hua Baohong from Lingxi Technology believes that “we should learn more from the human brain.” Relying solely on big data and high computational power may become a thing of the past; new breakthroughs demand new ideas, and brain-like computing is one of the keys. Carbon-based life evolved into the human brain, the most powerful general intelligence known; that silicon chips might likewise evolve into powerful machine intelligence suggests that developing brain-like computing, grounded in the fundamental principles of brain science, to support artificial general intelligence is entirely possible. “Brain-like computing could become a key to breaking through the traditional AI computing bottleneck and opening the door to general artificial intelligence, thus addressing the new challenges of the post-Moore era,” he stated. “Additionally, edge computing chips are primarily SoCs, whose building blocks can be self-developed or sourced as third-party IP. In this context, the capability to independently develop AI computing IP cores has become a core competitive advantage and breakthrough point for enterprises.”
Regarding the driving force behind AI chip applications, Rob Fisher from Imagination believes that natural language processing is a major direction. For example, recent development of speech-to-intent engines has built accurate systems within extremely low-power architectures by accelerating small neural networks. Additionally, autonomous driving systems are pushing chip designers to create extremely fast acceleration architectures that process large amounts of sensor data in real time.
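One common reason small networks fit such low-power budgets is 8-bit quantization, which cuts weight storage and memory traffic by 4x versus float32. The sketch below shows a generic symmetric affine quantization, as an illustration of the principle rather than any specific vendor’s scheme.

```python
# Generic int8 weight quantization sketch: floats are mapped to int8
# with one shared scale, trading a bounded rounding error for 4x less
# storage and memory bandwidth.

def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

w = [0.50, -0.25, 0.10, -1.00]
q, s = quantize_int8(w)
restored = dequantize(q, s)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= s for a, b in zip(w, restored))
```

The bounded per-weight error is why small, well-conditioned networks often tolerate int8 inference with little accuracy loss.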
04 Lack of Motivation for Edge Training
Currently, inference tasks dominate at the edge. If we are to further meet the requirements for low latency and adaptive responses, is it necessary to conduct a certain amount of training tasks at the edge? If training is to be conducted, what challenges need to be addressed?
Zhao Xiaowu, Vice President of Shanghai Xuehu Technology Co., Ltd., believes that the computational power required for training is much greater than that for inference, and there has not been much demand for training at the edge. However, if adaptive learning models can meet the requirements for accuracy, performance, power consumption, and cost of edge computing devices, we can expect to see more application demands in the future.
Yu Xin from Shiqing Technology also believes that there will not be such demand in the short term. The vast majority of training tasks are not very sensitive to latency or even cost, and there is no need to conduct these tasks at the edge. He emphasized that all work tasks still have boundaries and should be completed by leveraging strengths and avoiding weaknesses, which is determined by inherent logic, and there is no need to cross boundaries for the sake of crossing.
Hua Baohong from Lingxi Technology stated that in fields such as vehicle-road-cloud collaboration and autonomous driving, real-time and safety requirements necessitate training at the edge. In the future, demands for online learning, small-sample learning, and dynamic adaptive adjustments in complex scenarios will be addressed through training at the edge.
He believes that, on one hand, the edge still relies primarily on traditional chips like CPUs and GPUs, and the high energy consumption and cost of training tasks are inherent challenges; on the other hand, the nature of edge computing means that edge training faces obstacles such as insufficient data and the high cost of offline training time, which severely hinder its development. Significant progress in edge training depends on the support of new computing architectures, with implementation paths including online learning and small-sample learning.
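The online-learning path he mentions can be sketched in a few lines: instead of offline retraining on big data, the edge model is nudged by each new labeled sample as it arrives, keeping per-sample cost tiny. The one-weight linear model below is purely illustrative.

```python
# Minimal online-learning sketch: one SGD step per incoming sample,
# no offline pass and no stored dataset, as edge training would require.

def online_update(w, x, y, lr=0.1):
    """One SGD step on squared error for the prediction w * x."""
    pred = w * x
    grad = 2.0 * (pred - y) * x
    return w - lr * grad

# Stream of samples from the true relation y = 3x, seen one at a time.
w = 0.0
for x, y in [(1.0, 3.0), (2.0, 6.0), (1.0, 3.0), (2.0, 6.0)] * 10:
    w = online_update(w, x, y)

assert abs(w - 3.0) < 0.05  # the model adapted without any offline pass
```

The same shape of update, applied to a small network's final layers, is one way a device could adapt to its deployment environment with few samples.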
05 Direction: Towards a Stronger Intelligent Edge

Image Source | Metrology News
The intelligent edge is indeed a buzzword, and the future market space is vast. However, the industry’s current understanding and deployment of it are still exploratory. For example, telecom operators’ Mobile Edge Computing is referred to as the intelligent edge, while some business types in cloud vendors and CDN front-end can also be called intelligent edge. In the future, the digital transformation of various industries will require the deployment of intelligent edges, but it will also depend on specific application needs, such as latency, bandwidth, price, and energy consumption.
In terms of implementation, many intelligent edge capabilities are currently provided in the form of a box (edge computing box), which is very limited in capability, essentially a simple extension of MEC, representing a relatively weak intelligent edge.
Song Jiqiang from Intel stated that if edge computing is to serve fields such as intelligent manufacturing, ports, and mines in the future, it will need richer networking, computing, and storage functions, which means handling massive amounts of data under higher AI compute and real-time requirements. Weak edges are therefore certainly insufficient; there must be an evolution toward stronger edges.
He believes that service robots are a very typical application scenario for edge computing, as the ability to learn continuously is crucial for robots. For example, robots that will enter homes to provide care and companionship in the future need to possess long-term, progressively improving service capabilities, which requires learning gradually from specific scenarios and constructing an understanding of those scenarios. By further analyzing the interconnections within the scenarios, robots can form their own memories, enabling subsequent capability enhancement.
If this learning process relies solely on the robot itself, the hardware requirements would be too high. This necessitates the use of edge computing, which can offload non-immediate response computations to the edge and also utilize edge computing for storage, effectively providing front-end devices like robots with greater storage and computational capacity, akin to having an additional “brain” beyond themselves.
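The offload idea above comes down to a placement decision: computations whose deadlines can absorb a network round trip go to the edge server, while tightly time-bound control loops stay on the robot. The thresholds and task examples below are invented for illustration.

```python
# Sketch of a deadline-driven offload decision for a robot using an
# edge server. "edge_speedup" models the server's faster hardware.

def place_task(compute_ms, deadline_ms, rtt_ms, edge_speedup=10.0):
    """Offload only if the edge finishes (compute/speedup + RTT) in time."""
    local_ok = compute_ms <= deadline_ms
    edge_latency = compute_ms / edge_speedup + rtt_ms
    if edge_latency <= deadline_ms:
        return "edge"   # edge meets the deadline and frees the robot
    return "local" if local_ok else "drop"

# A reflex-level control step (2 ms budget) must stay on the device;
# scene understanding (500 ms budget) can ride a 20 ms network hop.
assert place_task(compute_ms=1.5, deadline_ms=2.0, rtt_ms=20.0) == "local"
assert place_task(compute_ms=300.0, deadline_ms=500.0, rtt_ms=20.0) == "edge"
```

This is the sense in which the edge acts as an additional “brain”: it absorbs the heavy, latency-tolerant work so the on-device hardware can stay small.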
06 In Conclusion
Chips embody industrial technology pushed to its limits, with extreme requirements on component precision, production environments, and manufacturing processes; they represent a massive systems-engineering effort. AI chips face even greater challenges on top of these traditional ones. In AI applications, the chip is the fundamental component carrying the computation, while software is the core of realizing AI, a reality that practitioners in the AI chip field must confront.
However, after several generations of the IT industry continuously driving the cloudification and intelligence of everything, we have now reached a critical stage of making everything intelligent. The new era of data abundance is the perfect time for edge computing to take off; the challenges for edge AI chips are significant, but so is the opportunity.