Author: Sophia, IoT Think Tank (original article)
As generative artificial intelligence stirs a global technological wave, another, more “low-key” yet equally critical direction is quietly rising: Edge AI, or as it is popularly known this year, On-Device AI. If Edge AI emphasizes the decentralization of computing resources, On-Device AI emphasizes deploying and running AI models directly on devices, enabling rapid local data processing and intelligent decision-making. The two operate at different levels of “distributed intelligent computing” and overlap substantially in scope.
Recently, whether it is NXP’s hefty acquisition of Kinara or Qualcomm’s announcement of acquiring Edge Impulse, the frequent actions of tech giants reveal a clear signal: Edge AI/On-Device AI is transitioning from “proof of concept” to “strategic pillar.” This technological migration is not only an inevitable trend of computational power reconstruction but also a new round of global competition surrounding data sovereignty, real-time decision-making, and energy efficiency balance. Therefore, this article will discuss the strategic considerations behind tech giants’ willingness to invest heavily in star companies in the Edge AI/On-Device AI field.
Tech Giants Compete to Lay Out Edge AI/On-Device AI
On February 11, 2025, Dutch chip manufacturer NXP announced it would spend $307 million to acquire Kinara Inc., a California-based Edge AI chip startup focused on developing neural processing units (NPUs) for AI workloads at the network edge.
Reportedly, Kinara’s discrete NPUs (the Ara-1 and Ara-2) rank among the industry’s best in performance and energy efficiency, making them well suited to emerging AI applications such as vision, voice, and gesture recognition, as well as multimodal applications driven by generative AI. Both chips use an innovative architecture that maps inference graphs onto Kinara’s programmable proprietary NPU for efficient execution, maximizing Edge AI performance. Kinara also offers a rich software development kit to help customers optimize AI model performance and simplify deployment.
NXP stated that by combining Kinara’s discrete NPUs with its own processors, connectivity, and security software product portfolio, this acquisition will help provide “a complete and scalable AI platform from TinyML to generative AI.”
NXP is not alone in valuing TinyML. On March 11, 2025, Qualcomm announced the acquisition of Edge AI technology company Edge Impulse, aiming to integrate Edge Impulse’s Edge AI development platform and strengthen Qualcomm’s capabilities in artificial intelligence (AI) and the Internet of Things (IoT).
Edge Impulse is a star company in the TinyML (tiny machine learning) field. It was founded in 2019 by Zach Shelby and Jan Jongboom, who realized that microcontroller compute had advanced to the point where domain-specific AI models could run directly on the devices themselves. The hardware was ready, but there was no simple way to build, optimize, and deploy edge and domain-specific AI models onto it. Edge Impulse was born to fill that gap, and its AIoT platform dramatically shortens the time needed to create machine learning models for small devices such as sensors, microcontrollers, and cameras.
Through this acquisition, Qualcomm can incorporate Edge Impulse’s end-to-end Edge AI development platform into its IoT ecosystem.
These recent acquisitions drew wide attention, but they are not the first of their kind. Looking further back:
In August 2024, Amazon acquired chip manufacturer and AI model compression company Perceive for $80 million in cash. Perceive develops neural network inference solutions aimed at running large AI models on edge devices; its flagship product, the Ergo AI processor, can run data-center-class neural networks in a wide range of environments, even under tight power constraints.
In July 2023, NVIDIA acquired AI startup OmniML, whose platform, Omnimizer, compresses machine learning models so that large models can run on smaller devices without relying on cloud computing power.
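The internals of commercial compression platforms like Omnimizer are not public, but one widely used compression technique they build on is magnitude pruning: zeroing out the smallest-magnitude weights of a trained network so the model becomes sparse and cheaper to store and execute. A minimal sketch (illustrative only, using plain NumPy, not any vendor’s API):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Prune a random "layer" to 90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.9)
print(f"zeroed weights: {(pruned == 0).mean():.0%}")
```

In practice, pruning is usually followed by a brief fine-tuning pass to recover accuracy, and sparse storage formats (or structured sparsity) are needed to turn the zeros into real memory and latency savings on edge hardware.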
As future trends become increasingly evident, tech giants are also moving more swiftly, continuously consolidating their leadership in the AI field through the acquisition and integration of Edge AI/On-Device AI technologies, meeting the growing market demand, and driving technological innovation and application implementation.
What Strategic Considerations Are Behind This?
By analyzing the aforementioned acquisitions, we can gain some basic insights:
① Semiconductor Giants Are the Most Active
Among the tech giants, semiconductor companies have been the quickest to respond to Edge AI/On-Device AI, using acquisitions to rapidly secure core technologies and seize high-growth market opportunities.
From a strategic perspective, on one hand, cloud AI faces high computational costs, latency, and privacy risks, while Edge AI/On-Device AI processes data locally, sharply reducing bandwidth requirements and response times while strengthening data privacy and lowering costs. Semiconductor companies help customers break through these cloud limitations by providing edge chips and solutions. Meanwhile, with the traditional PC and mobile markets saturated, Edge AI/On-Device AI offers chip manufacturers a new incremental market spanning high-potential scenarios such as industrial automation, smart cities, intelligent driving, and smart security. For semiconductor manufacturers like Qualcomm, NXP, Intel, and AMD, investing in Edge AI/On-Device AI is not only about seizing the next entry point for AI compute but also a realistic path to extending the benefits of Moore’s Law.
For instance, Kinara, which NXP acquired at a high price, can achieve high energy-efficient AI performance across a range of neural networks, including traditional AI and generative AI, meeting the rapidly growing AI demands in industrial and automotive markets. Similarly, Qualcomm’s acquisition of Edge Impulse will enhance its ability to provide comprehensive technology for key areas such as retail, security, energy and utilities, supply chain management, and asset management.
Clearly, the development of Edge AI is driving hardware ecosystem transformation, becoming a “growth springboard” for chip companies.
On the other hand, the requirements of Edge AI/On-Device AI for high-performance chips, low-latency processing, and security directly depend on advancements in semiconductor processes and hardware innovation. As AI penetrates from the cloud to the edge, semiconductor companies, through technological investments and ecosystem collaborations, not only consolidate their core position in the industry chain but also promote the transformation and upgrading of the entire industry. In the future, as edge AI devices (such as smartphones and embodied robots) become more prevalent, the semiconductor industry will further benefit from this technological wave.
② TinyML Is an Important Pathway for Giants in On-Device AI
In the aforementioned acquisitions, an important technology is involved — TinyML. TinyML refers to the technology of deploying machine learning models on resource-constrained embedded devices (such as microcontrollers, sensor modules, etc.) to achieve localized, low-power, low-latency intelligent inference.
From the actions of the giants, it is evident that as technology continues to mature, TinyML will become an important pathway for Edge AI/On-Device AI, and will be widely applied in scenarios such as smart homes, wearable devices, and industrial IoT, where cost, power consumption, and response speed are highly sensitive.
The core goal of TinyML is to run machine learning models on microcontrollers (MCUs) at milliwatt (mW) power levels. It is currently the most practical technology path for running AI models stably on hardware with milliwatt-level power budgets, kilobyte-level storage, and extremely low cost, making it a natural fit for typical Edge AI/On-Device AI deployments.
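Kilobyte-level storage is usually reached through post-training quantization: converting a model’s float32 weights to int8, which cuts storage four-fold and enables fast integer arithmetic on MCUs. A minimal sketch of affine (asymmetric) int8 quantization, written in plain NumPy as an illustration rather than any framework’s actual converter:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine post-training quantization: float32 -> int8 plus (scale, zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0      # map the float range onto 256 int8 levels
    zero_point = round(-w_min / scale) - 128    # integer offset so w_min maps near -128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=(64, 32)).astype(np.float32)  # a tiny layer's weights
q, scale, zp = quantize_int8(w)

print(w.nbytes, "->", q.nbytes, "bytes")  # 4x smaller: 8192 -> 2048
```

The reconstruction error per weight is bounded by roughly the quantization step `scale`, which is why int8 quantization typically costs little accuracy while making kilobyte-scale deployment feasible.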
As AI applications shift from “centralized cloud inference” to “edge intelligence and on-device intelligence,” the focus of intelligent deployment is also shifting, and the deployment platforms supported by TinyML (such as Cortex-M series, RISC-V MCUs, DSP cores, etc.) are the terminal nodes in the distributed intelligent computing network. TinyML enables these tiny nodes to possess basic AI perception and judgment capabilities, achieving true “ubiquitous intelligence.”
Moreover, traditional AI has largely been confined to large models in high-performance cloud computing centers, with steep technical barriers and development costs. The TinyML ecosystem is changing this in three ways: ① low-code/visual modeling tools, so that non-AI professionals can also train and deploy models; ② generalized hardware platforms with near one-click model migration; ③ separation of training and inference, so models can be trained in the cloud while running inference independently on devices, reducing device costs. Together these make it possible to develop and deploy AI “like embedded software,” greatly accelerating the spread of edge intelligence.
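The “train in the cloud, infer on the device” hand-off is commonly done by serializing the trained (quantized) weights into a C array that gets compiled straight into MCU firmware. A minimal sketch of that export step; the array name `g_layer0_weights` and the exact header format are illustrative, not any toolchain’s official output:

```python
import numpy as np

def to_c_header(name: str, q: np.ndarray) -> str:
    """Serialize int8 weights as a C array, a typical hand-off format for MCU firmware."""
    body = ", ".join(str(int(v)) for v in q.ravel())
    return (f"/* auto-generated: {q.size} int8 weights */\n"
            f"const signed char {name}[{q.size}] = {{ {body} }};\n")

# Export a toy 2x2 int8 weight matrix produced by cloud-side training/quantization.
q = np.array([[12, -7], [0, 127]], dtype=np.int8)
header = to_c_header("g_layer0_weights", q)
print(header)
```

The device-side inference runtime then reads this constant array directly from flash, so no model file, filesystem, or network connection is needed at run time.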
Given its unique advantages in low power consumption, real-time performance, privacy protection, hardware adaptability, and breadth of application scenarios, TinyML is one of the core technological supports for Edge/On-Device AI. Without it, there would be no true “ubiquitous intelligence at the edge” or “universal AI on-device.”
③ Emphasizing Complementarity of Small and Large Models + Edge-Cloud Collaboration
While Edge/On-Device AI is promising, we must also recognize that on-device models have limited understanding capabilities and struggle with complex reasoning and semantic tasks; large models are far more capable but carry huge computational costs, making them difficult to deploy at the edge or on devices and unsuitable for real-time responses. The natural division of labor, then, is for large models to handle cognitive reasoning and complex generative tasks while lightweight edge models handle real-time perception and local responses, with the two complementing each other.
In the future ubiquitous intelligent architecture, lightweight models at the edge provide rapid responses, while large models offer deep understanding; the collaboration between the two is essential to build an efficient and reliable AI system. Tech giants are accelerating their comprehensive layout of edge deployment capabilities and model management platforms precisely because they recognize this evolutionary trend.
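One common pattern for this edge-cloud collaboration is confidence-based routing: the small on-device model answers first, and the request escalates to the large cloud model only when the local confidence falls below a threshold. A hypothetical sketch; both model functions are stand-ins, not real APIs:

```python
def edge_model(x: str) -> tuple[str, float]:
    """Stand-in for a tiny on-device classifier: returns (label, confidence)."""
    known = {"turn on the light": ("light_on", 0.97), "stop": ("halt", 0.95)}
    return known.get(x, ("unknown", 0.30))

def cloud_model(x: str) -> str:
    """Stand-in for a large cloud model that handles complex requests."""
    return f"cloud_answer({x!r})"

def route(x: str, threshold: float = 0.8) -> str:
    label, confidence = edge_model(x)   # fast local inference, no network round trip
    if confidence >= threshold:
        return label                    # answered entirely on-device
    return cloud_model(x)               # escalate hard cases to the cloud

print(route("turn on the light"))  # handled locally
print(route("summarize my week"))  # escalated to the cloud
```

The threshold trades latency and bandwidth against answer quality: raising it sends more traffic to the cloud, lowering it keeps more decisions local, which is exactly the balance a unified edge deployment and model management platform has to tune per application.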
For example, shortly after Qualcomm’s acquisition of Edge Impulse, on April 2, it announced the acquisition of Vietnamese AI startup MovianAI. This company was formerly the generative AI department of VinAI Application and Research Joint Stock Company (VinAI), a leading AI research company known for its expertise in generative AI, machine learning, computer vision, and natural language processing.
This transaction will create a synergistic effect with Qualcomm’s previous acquisition of Edge Impulse: MovianAI focuses on large-scale AI models for smartphones and automobiles, while Edge Impulse optimizes AI for resource-constrained small devices, providing Qualcomm with a new “end-to-end” expertise across the entire AI value chain.
Conclusion
According to new research by Future Market Insights, the Edge AI market is expected to reach $39.6 billion by the end of 2032, expanding at a CAGR of 20.8% during the forecast period.
As the market grows, the development of Edge/On-Device AI is moving towards a higher level of intelligent collaboration, while the continuous evolution of chip architecture optimization, TinyML technology maturity, and cloud-edge collaboration mechanisms will contribute to forming a ubiquitous AI landscape where “local is intelligent, and cloud is smarter.” For companies looking to seize opportunities in the AIoT era, early layout of edge intelligence and model collaboration architecture has become an indispensable strategic choice.
References:
Qualcomm Acquires Edge AI Development Platform Edge Impulse to Strengthen AIoT, Electronic Engineering Magazine
NXP Plans to Acquire Edge AI Pioneer Kinara, Redefining Intelligent Edge, NXP
NXP Acquires Kinara for $307 Million, Laying Out Edge Network AI Computing Business, ZDNet
Qualcomm Buys MovianAI to Boost Generative AI Capabilities, RCRWireless News
Giants Enter TinyML, Edge and On-Device AI Welcome New Turning Point, IoT Think Tank