Article Overview
This article discusses how edge AI is fundamentally transforming real-time data processing and automation. It emphasizes the value of edge AI in real-time processing for critical applications, covers methods for generating edge AI models, including the use of development frameworks and SaaS platforms and the application of supervised training, and explores the hardware that runs edge AI workloads, from high-speed cameras with ML models and low-cost AI vision modules in the supply chain to the development of AI-enabled ML sensors.
From smart home assistants (like Alexa, Google Assistant, and Siri) to advanced driver-assistance systems (ADAS) that can alert drivers of lane departures, the world relies on edge AI to provide real-time processing capabilities for these increasingly popular devices. Edge AI runs artificial intelligence directly within devices, performing computations near the data source without relying on remote cloud data centers. Edge AI brings lower latency and faster processing, reduces dependence on continuous internet connectivity, and lessens privacy concerns. This technology represents a significant shift in data processing methods, and as the demand for real-time intelligence grows, edge AI is poised to continue exerting its influence across many industries.
The greatest value of edge AI lies in the speed it brings to critical applications. Unlike cloud or data-center AI, edge AI does not send data over a network link and wait for a response. Instead, it performs computations locally (often on a real-time operating system), which makes it excellent at delivering timely responses. For scenarios such as machine vision on a factory production line, where the system must determine within a second whether a product should be diverted, edge AI is fully capable. Similarly, you would not want a car's safety signals to depend on the response time of a network or cloud server.
Edge AI for Real-Time Processing
Many real-time activities are driving the demand for edge AI. Applications such as smart home assistants, ADAS, patient monitoring, and predictive maintenance are notable uses of this technology. From quick responses to home issues to alerts for lane departures in vehicles, to sending blood sugar readings to smartphones, edge AI provides rapid responses while minimizing privacy concerns.
For quite some time, edge AI has performed well in the supply chain, particularly in warehousing and manufacturing. Over the past decade, the transportation industry has also made significant technological advances, such as delivery drones capable of navigating through cloud cover. Edge AI has also brought tremendous benefits to engineers, especially in medical technology, a critical area of advancement. For example, engineers developing pacemakers and other cardiac devices can give doctors tools to identify abnormal heart rhythms, while also programming the devices to indicate proactively when further medical intervention is needed. The medical technology sector will continue to expand its use of edge AI and further enhance its capabilities.
Generating Edge AI Models
Today, more and more systems in daily life involve some degree of machine learning (ML) interaction, making it crucial for engineers and developers to understand this field when planning future user interactions.
The greatest opportunity for edge AI lies in ML based on statistical algorithms for pattern matching. These patterns can be the perception of a person's presence, someone saying a "wake word" (like "Alexa" or "Hey Siri"), or a motor starting to shake. For smart home assistants, the wake-word detector is a model running at the edge, eliminating the need to stream voice data to the cloud: it wakes the device, signaling that further commands are coming.
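To make the wake-word idea concrete, here is a minimal sketch of on-device gating: the device stays idle until a keyword-spotting score crosses a threshold, and only then processes commands locally. The threshold, the frame format, and the scoring function are all invented for illustration; a real device would run a small audio model per frame.

```python
# Hypothetical wake-word gating loop (scores and threshold are invented).

WAKE_THRESHOLD = 0.8  # assumed confidence cutoff for the wake word

def wake_word_score(frame):
    # Placeholder for an on-device keyword-spotting model; here each
    # frame dict simply carries a precomputed model score.
    return frame["score"]

def process_stream(frames):
    """Stay idle until the wake word fires, then handle commands locally."""
    awake = False
    handled = []
    for frame in frames:
        if not awake:
            if wake_word_score(frame) >= WAKE_THRESHOLD:
                awake = True  # wake word detected, no audio left the device
        else:
            handled.append(frame["command"])  # only now act on commands
    return handled

frames = [
    {"score": 0.1, "command": "ignored"},
    {"score": 0.9, "command": "(wake word)"},
    {"score": 0.2, "command": "lights on"},
]
print(process_stream(frames))  # → ['lights on']
```

Note that nothing before the wake word is retained or transmitted, which is the privacy property the article describes.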
There are several ways to generate machine learning (ML) models: using development frameworks (like TensorFlow or PyTorch) or using SaaS platforms (like Edge Impulse). Much of the "work" in building a good ML model is creating a representative dataset and properly labeling it.
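The labeling step can be sketched in a few lines: collect samples under each class name, then flatten them into the parallel feature/label lists that training tools expect. The class names and sample values below are made up for illustration.

```python
# Sketch of assembling a small labeled dataset, the bulk of the "work"
# in building an ML model. Labels and samples are invented examples.

raw_samples = {
    "normal":    [[0.1, 0.2], [0.0, 0.1], [0.2, 0.1]],
    "vibration": [[0.9, 1.1], [1.0, 0.8]],
}

def build_dataset(samples_by_label):
    """Flatten {label: samples} into parallel feature/label lists."""
    X, y = [], []
    for label, samples in samples_by_label.items():
        for sample in samples:
            X.append(sample)
            y.append(label)
    return X, y

X, y = build_dataset(raw_samples)
print(len(X), y.count("vibration"))  # → 5 2
```

A representative dataset means every condition the device will see in the field appears here with a correct label; gaps in this table become blind spots in the model.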
Currently, the most popular ML models in edge AI are supervised models. These models are trained on labeled and annotated sample data, with outputs that are known values that can be checked for correctness, much like a teacher grading assignments. This type of training is typically used in applications such as classification or regression. Supervised training is very useful and can achieve high accuracy, but it depends heavily on labeled datasets and may struggle with inputs unlike its training data.
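As a minimal illustration of supervised learning, here is a nearest-centroid classifier: it averages the labeled training samples per class, then assigns new inputs to the closest class average. The training points and labels are invented; real edge models are neural networks, but the train-on-labels, check-against-known-answers loop is the same.

```python
# Toy supervised classifier (nearest centroid) on invented labeled data.

def train(X, y):
    """Compute one centroid (per-feature mean) for each label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the closest centroid (squared distance)."""
    def dist2(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=dist2)

X = [[0.1, 0.2], [0.0, 0.1], [1.0, 0.9], [0.9, 1.1]]
y = ["normal", "normal", "fault", "fault"]
model = train(X, y)
print(predict(model, [0.95, 1.0]))  # → fault
```

Because every training sample carries a known label, each prediction can be graded against the truth, which is exactly the "teacher grading assignments" analogy above. The weakness is also visible: a point far from both centroids still gets forced into one of the two known classes.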
Hardware for Running Edge AI Workloads
At DigiKey, we are well positioned to assist with edge AI, since these workloads typically run on microcontrollers, FPGAs, and single-board computers (SBCs). DigiKey collaborates with top suppliers to provide generations of hardware capable of running ML models at the edge. This year, we have witnessed the release of some groundbreaking new hardware, including NXP's MCX N series, and we will soon stock STMicroelectronics' STM32MP25 series.
In recent years, development boards from the maker community have been popular for running edge AI, including SparkFun's Edge Development Board Apollo3 Blue, Adafruit's EdgeBadge, Arduino's Nano 33 BLE Sense Rev 2, and the Raspberry Pi 4 and 5.
Neural processing units (NPUs) are increasingly being applied in edge AI. An NPU is a specialized integrated circuit designed to accelerate ML and AI applications based on neural networks. Neural networks are loosely modeled on the human brain, with many interconnected layers of nodes called neurons that process and transmit information. Next-generation NPUs with specialized mathematical processing capabilities are being developed, including those in NXP's MCX N series and ADI's MAX78000.
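The operation an NPU accelerates is, at its core, a large number of multiply-accumulate (MAC) steps on quantized values. The sketch below shows one int8-quantized dense layer in plain Python; the weights, inputs, and rescale factor are illustrative values, not from any real model.

```python
# Sketch of the core NPU workload: int8 multiply-accumulate for one
# dense neural-network layer. All numbers are invented for illustration.

def dense_int8(inputs, weights, biases, scale):
    """int8 dense layer: accumulate products, rescale, apply ReLU."""
    outputs = []
    for row, bias in zip(weights, biases):
        acc = bias  # wide accumulator, as NPU MAC arrays typically use
        for x, w in zip(inputs, row):
            acc += x * w  # int8 x int8 products summed into the accumulator
        outputs.append(max(0, int(acc * scale)))  # rescale + ReLU activation
    return outputs

inputs = [12, -3, 7]                # quantized int8 activations
weights = [[2, 1, -1], [-4, 0, 3]]  # int8 weight rows (two neurons)
biases = [10, -5]
print(dense_int8(inputs, weights, biases, 0.5))  # → [12, 0]
```

A CPU executes these MACs one at a time; an NPU performs many of them in parallel in dedicated hardware, which is where the speed and power advantage comes from.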
We are also seeing AI accelerators for edge devices, a field still being defined; early entrants to watch include Google's Coral and Hailo.
The Importance of ML Sensors
High-speed cameras with ML models have been playing a role in the supply chain for quite some time. They have been used to determine where products in warehouses should be sent or to identify defective products on production lines. We see suppliers developing low-cost AI vision modules that can run ML models to identify objects or people.
While running ML models requires an embedded system, more products will continue to be launched as AI-enabled electronic components. This includes AI-enabled sensors, also known as ML sensors. Adding an ML model to most sensors does not improve their effectiveness, but several types of sensors can benefit significantly from ML training:
- Camera sensors, for which ML models can be developed to track objects and people in the frame
- IMUs, accelerometers, and motion sensors for detecting activity
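For the second category, a common pattern is to classify a window of accelerometer readings by how much the acceleration magnitude varies. The sketch below is a minimal version of that idea; the readings and the variance threshold are invented for illustration, and a shipped ML sensor would use a trained model rather than a fixed threshold.

```python
# Sketch of an ML-sensor-style activity check on IMU data: variance of
# acceleration magnitude over a window separates "moving" from "idle".

def magnitude(sample):
    """Euclidean magnitude of one (x, y, z) accelerometer reading."""
    return sum(v * v for v in sample) ** 0.5

def detect_activity(window, threshold=0.02):
    """Label a window of IMU samples by the variance of their magnitudes."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return "moving" if var > threshold else "idle"

idle = [(0.0, 0.0, 1.0)] * 8                     # gravity only, no motion
moving = [(0.0, 0.0, 1.0), (0.5, 0.2, 1.3)] * 4  # alternating jolts
print(detect_activity(idle), detect_activity(moving))  # → idle moving
```

Because this decision happens inside the sensor module, only the single label ("idle"/"moving") needs to leave the device, not the raw motion stream.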
Some AI sensors come pre-loaded with ML models that can run immediately. For example, SparkFun's human-detection evaluation board is pre-programmed for face detection and returns results over its Qwiic I2C interface. Other AI sensors, like Arduino's Nicla Vision or Seeed Technology's OpenMV Cam H7, are more open-ended and require a trained ML model for the objects being sought (defects, objects, etc.).
With neural networks providing the underlying algorithms, these camera sensors can detect and track objects and people as they enter the field of view.
The Future of Edge AI
As many industries evolve and rely on data processing technologies, edge AI will continue to see broader applications. By enabling faster and more secure data processing at the device level, edge AI innovation will be profound. We believe several areas will expand in the near future:
1. Dedicated processor logic for computing neural network algorithms.
2. Advances in low-power alternatives to the massive energy consumption of cloud computing.
3. More integrated/module options, such as AI vision components with built-in sensors and embedded hardware.
With the continuous evolution of ML training methods, hardware, and software, edge AI is well-positioned for exponential growth, supporting numerous industries. At DigiKey, we are committed to staying at the forefront of edge AI trends, and we look forward to supporting innovative engineers, designers, builders, and procurement professionals worldwide with a wealth of solutions, seamless interactions, powerful tools, and rich educational resources to make their work more efficient.
For more information, products, and resources on edge AI, please click "Read More" to visit DigiKey's related page.
About the Author
Shawn Luke
DigiKey Technical Marketing Engineer
Shawn Luke is a Technical Marketing Engineer at DigiKey. DigiKey is a globally recognized leader and continuous innovator in the distribution of cutting-edge electronic components and automation products, offering over 15.6 million components from more than 3,000 quality brand manufacturers.
Editor's Note
As the author states, edge AI is driving innovation in real-time data processing and automation by enabling faster and more secure data processing. The future expansion of this innovation will drive the development of dedicated processor logic, advances in low-power alternatives, and the emergence of more integrated/module options. It is foreseeable that edge AI will continue to see broader applications across multiple industries. What are your thoughts on the impact of edge AI innovations on technology and product ecosystems? What experiences or challenges have you encountered in developing related systems? Feel free to leave a comment and share your thoughts!