Industry 4.0 applications generate vast amounts of complex data: big data. With a growing number of sensors and available data sources, there is increasing demand for more detailed virtual views of machines, systems, and processes. This naturally raises the potential for added value along the entire value chain. At the same time, the question of how this value can actually be mined keeps coming up, because the systems and architectures used for data processing are becoming increasingly complex. Only relevant, high-quality, and useful data, in other words smart data, allows this economic potential to be tapped.
Challenges
Collecting all possible data and storing it in the cloud in the hope of evaluating, analyzing, and building on it later is still a widely used approach to mining data value, but it is not a particularly effective one. The potential for extracting added value from the data remains largely untapped, and finding a solution after the fact becomes increasingly complicated. A better alternative is to consider early on which information is relevant to the application and at which point in the data stream that information can be extracted. Figuratively speaking, this means refining the data, that is, extracting smart data from the big data of the overall processing chain. At the application level, it can then be decided which AI algorithms have a higher probability of success for the individual processing steps. This decision depends on boundary conditions such as the available data, the type of application, and background knowledge, for example in the form of physical models of the available sensors.
(Image source: ADI Company)
For the individual processing steps, correct processing and interpretation of the data is crucial for generating real added value from sensor signals. Depending on the application, it can be challenging to interpret the data of discrete sensors correctly and extract the required information. Temporal behavior often plays a role and has a direct influence on the information sought. In addition, dependencies between several sensors frequently have to be taken into account. For complex tasks, simple thresholds and manually defined logic are no longer sufficient.
AI Algorithms
In contrast, data processing with AI algorithms makes it possible to analyze even complex sensor data automatically. Through this analysis, the required information, and with it the added value, is obtained directly from the data in the processing chain.
For model building, which is always part of AI algorithms, there are fundamentally two different approaches.
One approach is to model the relationship between the sensor data and the required information explicitly in the form of formulas. These so-called model-based methods require physical background knowledge expressed as mathematical descriptions, and they combine the sensor data with this background knowledge to obtain more accurate estimates of the required information. The best-known example is the Kalman filter.
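To make the model-based idea concrete, here is a minimal sketch of a one-dimensional Kalman filter in C. It is not ADI code; the noise parameters and the temperature readings are purely illustrative values that would normally come from the physical sensor model and the application.

```c
#include <stdio.h>

/* Minimal scalar Kalman filter: fuses noisy sensor readings with a simple
 * physical model (here: the measured quantity changes only slowly).
 * q and r are hypothetical noise variances taken from the sensor model. */
typedef struct {
    double x;  /* state estimate             */
    double p;  /* estimate variance          */
    double q;  /* process noise variance     */
    double r;  /* measurement noise variance */
} kalman1d_t;

static void kalman1d_init(kalman1d_t *kf, double x0, double p0, double q, double r)
{
    kf->x = x0; kf->p = p0; kf->q = q; kf->r = r;
}

static double kalman1d_update(kalman1d_t *kf, double z)
{
    /* Predict: the model assumes the state is (nearly) constant. */
    kf->p += kf->q;

    /* Correct: blend prediction and measurement via the Kalman gain. */
    double k = kf->p / (kf->p + kf->r);
    kf->x += k * (z - kf->x);
    kf->p *= (1.0 - k);
    return kf->x;
}

int main(void)
{
    /* Noisy temperature readings (hypothetical values in degrees C). */
    const double z[] = { 24.3, 25.1, 24.7, 26.0, 25.2, 24.9 };
    kalman1d_t kf;
    kalman1d_init(&kf, z[0], 1.0, 0.01, 0.5);

    for (unsigned i = 0; i < sizeof z / sizeof z[0]; ++i)
        printf("measurement %.1f -> estimate %.2f\n", z[i], kalman1d_update(&kf, z[i]));
    return 0;
}
```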
If data is available but the background knowledge cannot be described in the form of mathematical equations, so-called data-driven methods must be chosen instead. These algorithms extract the required information directly from the data. They encompass the full range of machine learning methods, including linear regression, neural networks, random forests, and hidden Markov models.
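As a minimal sketch of the data-driven idea, the following C snippet fits a straight line to sensor samples by ordinary least squares; the relationship is learned from the data alone rather than from a physical model. The load and temperature values are invented for illustration.

```c
#include <stdio.h>

/* Data-driven example: ordinary least squares fit y = a*x + b.
 * No physical model is assumed; the relationship is learned from the data. */
static void fit_line(const double *x, const double *y, int n, double *a, double *b)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (int i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    double denom = n * sxx - sx * sx;
    *a = (n * sxy - sx * sy) / denom;   /* slope     */
    *b = (sy - *a * sx) / n;            /* intercept */
}

int main(void)
{
    /* Hypothetical training data: bearing temperature vs. load. */
    const double load[] = { 0.2, 0.4, 0.6, 0.8, 1.0 };
    const double temp[] = { 31.0, 35.5, 40.2, 44.8, 49.5 };
    double a, b;

    fit_line(load, temp, 5, &a, &b);
    printf("learned model: temp = %.2f * load + %.2f\n", a, b);
    printf("prediction at load 0.7: %.2f\n", a * 0.7 + b);
    return 0;
}
```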
Which AI algorithm to use often depends on the existing knowledge about the application. If extensive expert knowledge is available, AI plays more of a supporting role and the algorithms used can be fairly elementary. If no such expertise exists, the AI algorithms used may be much more complex. In many cases, the hardware defined by the application also limits the choice of AI algorithms.
Embedded, Edge, or Cloud Implementation
The overall data processing chain, with all the algorithms required for the individual steps, must be implemented in a way that generates as much added value as possible. Implementation typically spans several levels: from small sensors with limited computing resources, through gateways and edge computers, up to large cloud servers. It is clear that the algorithms should not be implemented at one level only. Implementing algorithms as close to the sensor as possible is often advantageous, because the data can then be compressed and refined at an early stage, reducing communication and storage costs. In addition, extracting basic information from the data early on makes the development of global algorithms at the higher levels less complex. In most cases, algorithms from the field of streaming analytics also help avoid unnecessary data storage and thus further reduce transmission and storage costs. These algorithms use each data point only once; that is, they extract the complete information directly, without the data having to be stored.
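The one-pass principle behind such streaming algorithms can be illustrated with Welford's running mean and variance: each incoming sample updates a few summary values and is then discarded, so only the condensed result ever needs to be stored or transmitted. This is a generic sketch with made-up sample values, not code from a specific ADI library.

```c
#include <math.h>
#include <stdio.h>

/* One-pass (streaming) statistics using Welford's algorithm:
 * every sample is used exactly once and never stored. */
typedef struct {
    unsigned long n;
    double mean;
    double m2;      /* sum of squared deviations from the mean */
} stream_stats_t;

static void stats_update(stream_stats_t *s, double x)
{
    s->n += 1;
    double delta = x - s->mean;
    s->mean += delta / (double)s->n;
    s->m2 += delta * (x - s->mean);
}

static double stats_variance(const stream_stats_t *s)
{
    return (s->n > 1) ? s->m2 / (double)(s->n - 1) : 0.0;
}

int main(void)
{
    stream_stats_t s = { 0, 0.0, 0.0 };
    /* Hypothetical vibration samples arriving one by one from a sensor. */
    const double samples[] = { 0.02, -0.01, 0.03, 0.05, -0.02, 0.04 };

    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i)
        stats_update(&s, samples[i]);

    /* Only the condensed "smart data" (mean, std dev) would be transmitted. */
    printf("mean = %.4f, std dev = %.4f\n", s.mean, sqrt(stats_variance(&s)));
    return 0;
}
```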
Processing AI algorithms at the endpoint (embedded AI) requires embedded processors as well as analog and digital peripherals for data acquisition, processing, control, and connectivity. The processors must be able to capture and process local data in real time and must have the computational resources to execute advanced AI algorithms. ADI's ADuCM4050, for example, based on the Arm Cortex-M4F architecture, offers an integrated and energy-efficient way to embed AI.
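As an illustration of what such local processing might look like, the following sketch reduces a window of raw accelerometer samples to a small set of features (RMS and peak). It is plain, portable C with a stubbed-in data window; it does not use the ADuCM4050 SDK or any vendor-specific driver calls.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of local feature extraction as it could run on a Cortex-M4F class
 * MCU: a window of raw accelerometer samples is reduced to a few features
 * (RMS, peak), so only the condensed result has to be transmitted.
 * Data acquisition is represented by a hard-coded stub. */
#define WINDOW_LEN 8

typedef struct {
    float rms;
    float peak;
} vib_features_t;

static vib_features_t extract_features(const float *window, int len)
{
    vib_features_t f = { 0.0f, 0.0f };
    float sum_sq = 0.0f;
    for (int i = 0; i < len; ++i) {
        float a = fabsf(window[i]);
        sum_sq += window[i] * window[i];
        if (a > f.peak)
            f.peak = a;
    }
    f.rms = sqrtf(sum_sq / (float)len);
    return f;
}

int main(void)
{
    /* Stub for a sampled accelerometer window (hypothetical values in g). */
    const float window[WINDOW_LEN] = { 0.01f, -0.02f, 0.04f, -0.05f,
                                       0.03f, -0.01f, 0.06f, -0.04f };
    vib_features_t f = extract_features(window, WINDOW_LEN);

    /* In a real system these two numbers, not the raw window, would be sent on. */
    printf("RMS = %.3f g, peak = %.3f g\n", f.rms, f.peak);
    return 0;
}
```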
Implementing embedded AI takes far more than just a microcontroller, however. To accelerate design, many semiconductor manufacturers offer development and evaluation platforms such as the EV-COG-AD4050LZ. These platforms combine the microcontroller with components such as sensors and RF transceivers, allowing engineers to explore embedded AI without having to master every technology involved in depth. The platforms are also scalable, so developers can use different sensors and other components. With the EV-GEAR-MEMS1Z expansion board, for example, engineers can quickly evaluate different MEMS technologies, such as the ADXL35x family (including the ADXL355) used on that board, which offers excellent vibration rectification performance, long-term repeatability, and low noise, all in a compact package.
The combination of platform and expansion board (such as EV-COG-AD4050LZ and EV-GEAR-MEMS1Z) allows engineers to assess structural health based on the analysis of vibration, noise, and temperature, and to implement machine condition monitoring. Additional sensors can be connected to the platform as needed so that the AI methods used can estimate the current situation more accurately through so-called multi-sensor data fusion. In this way, different operating states and fault conditions can be classified with finer granularity and higher confidence. Intelligent signal processing on the platform turns big data into smart data locally, so that only the data relevant to the respective application is sent to the edge or the cloud.
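A heavily simplified sketch of such multi-sensor data fusion is shown below: features from vibration, acoustic, and temperature sensors form one feature vector that is assigned to the nearest class centroid. The class names, centroids, and feature values are hypothetical; in a real system the centroids would be learned from labeled training data and the features would be normalized before comparing distances.

```c
#include <float.h>
#include <stdio.h>

/* Simple multi-sensor fusion classifier: a fused feature vector
 * (vibration RMS, noise level, temperature) is assigned to the nearest
 * class centroid. Centroids and values here are purely illustrative;
 * real features would be normalized and the centroids learned from data. */
#define N_FEATURES 3
#define N_CLASSES  3

static const char *class_names[N_CLASSES] = { "normal", "imbalance", "bearing fault" };

/* Hypothetical class centroids in (vib RMS [g], noise [dB], temp [deg C]) space. */
static const double centroids[N_CLASSES][N_FEATURES] = {
    { 0.05, 62.0, 35.0 },
    { 0.30, 70.0, 38.0 },
    { 0.12, 75.0, 55.0 },
};

static int classify(const double *features)
{
    int best = 0;
    double best_dist = DBL_MAX;
    for (int c = 0; c < N_CLASSES; ++c) {
        double dist = 0.0;
        for (int i = 0; i < N_FEATURES; ++i) {
            double d = features[i] - centroids[c][i];
            dist += d * d;
        }
        if (dist < best_dist) {
            best_dist = dist;
            best = c;
        }
    }
    return best;
}

int main(void)
{
    /* Fused feature vector from three sensors (hypothetical values). */
    const double sample[N_FEATURES] = { 0.28, 71.5, 39.0 };
    printf("estimated state: %s\n", class_names[classify(sample)]);
    return 0;
}
```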
The platform approach also simplifies communication, since expansion boards can be used to implement different wireless links. The EV-COG-SMARTMESH1Z, for example, offers high reliability, robustness, and very low power consumption, and supports communication protocols suitable for a wide range of industrial applications, such as 6LoWPAN and 802.15.4e. A SmartMesh IP network consists of a highly scalable, self-forming multi-hop mesh of wireless nodes that collect and relay data. A network manager monitors and manages network performance and security and exchanges data with host applications.
Embedded AI delivers its full added value especially in battery-powered wireless condition monitoring systems. When the AI algorithms embedded in the ADuCM4050 convert the sensor data into smart data locally, far less data has to be transferred than if the raw sensor data were sent directly to the edge or the cloud, which in turn reduces power consumption.
Applications
The AI development platforms, together with the AI algorithms developed for them, can be used in a wide range of applications for monitoring machines, systems, structures, and processes, from simple anomaly detection to complex fault diagnosis. With integrated accelerometers, microphones, and temperature sensors, functions such as monitoring the vibration and noise of a wide variety of industrial machines and systems can be realized. Embedded AI can detect process states, bearing or stator damage, faults in the control electronics, and even previously unknown changes in system behavior caused by failing electronics. If a predictive model exists for specific kinds of damage, that damage can even be predicted locally, so that maintenance measures can be taken at an early stage and unnecessary fault-related downtime avoided. If no predictive model exists, the platform can still help subject matter experts gradually build an understanding of the machine's behavior and, over time, derive a comprehensive machine model for predictive maintenance.
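As a sketch of the simplest case mentioned above, anomaly detection, the following C snippet flags a new feature value if it deviates from the running mean of previous values by more than a chosen number of standard deviations. The threshold and the sample values are hypothetical.

```c
#include <math.h>
#include <stdio.h>

/* Simple local anomaly detection: each new feature value is compared
 * against the running mean and standard deviation of earlier values
 * and flagged if it deviates by more than threshold_sigma standard
 * deviations. The statistics are updated in one pass (Welford update). */
typedef struct {
    unsigned long n;
    double mean;
    double m2;
} running_t;

static int is_anomaly(running_t *r, double x, double threshold_sigma)
{
    int anomalous = 0;
    if (r->n > 2) {
        double sd = sqrt(r->m2 / (double)(r->n - 1));
        if (sd > 0.0 && fabs(x - r->mean) > threshold_sigma * sd)
            anomalous = 1;
    }
    /* Update the running statistics with the new value. */
    r->n += 1;
    double delta = x - r->mean;
    r->mean += delta / (double)r->n;
    r->m2 += delta * (x - r->mean);
    return anomalous;
}

int main(void)
{
    running_t r = { 0, 0.0, 0.0 };
    /* Hypothetical vibration RMS values; the last one simulates a fault. */
    const double rms[] = { 0.050, 0.052, 0.049, 0.051, 0.048, 0.050, 0.120 };

    for (unsigned i = 0; i < sizeof rms / sizeof rms[0]; ++i)
        if (is_anomaly(&r, rms[i], 4.0))
            printf("anomaly detected at sample %u (RMS = %.3f)\n", i, rms[i]);
    return 0;
}
```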
Ideally, through appropriate local data analysis, embedded AI algorithms should be able to decide for themselves which sensors are relevant to the respective application and which algorithms are best suited to it; in other words, the platform should offer intelligent scalability. Today, subject matter experts are still needed to find the ideal algorithm for a given application, although the AI algorithms already scale to a variety of machine condition monitoring applications with only a small amount of implementation work.
Embedded AI should also make decisions about data quality and, if the quality is poor, find and apply appropriate settings for the sensors and the entire signal processing chain. If several sensor modalities are used for fusion, the AI algorithms can compensate for the weaknesses of individual sensors and methods, improving data quality and system reliability. If the AI algorithms classify certain sensors as less relevant to the application, the corresponding data streams can be reduced accordingly.
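How such a data-quality decision could look in its simplest form is sketched below: a window of samples from a sensor is checked for implausible or "stuck" readings, and the sensor is down-weighted accordingly before fusion. The limits, weights, and sample values are hypothetical assumptions, not part of the original platform.

```c
#include <stdio.h>

/* Basic data-quality check: a sensor whose window of samples is out of
 * range or "stuck" (almost no variation) is down-weighted before fusion.
 * The plausibility limits and weights are illustrative values. */
#define WINDOW_LEN 6

static double quality_weight(const double *window, int len,
                             double min_valid, double max_valid)
{
    double lo = window[0], hi = window[0];
    for (int i = 0; i < len; ++i) {
        if (window[i] < min_valid || window[i] > max_valid)
            return 0.0;                 /* implausible reading: ignore sensor */
        if (window[i] < lo) lo = window[i];
        if (window[i] > hi) hi = window[i];
    }
    if (hi - lo < 1e-9)
        return 0.2;                     /* stuck sensor: strongly down-weight */
    return 1.0;                         /* plausible, varying signal          */
}

int main(void)
{
    /* Hypothetical temperature windows from two redundant sensors (deg C). */
    const double sensor_a[WINDOW_LEN] = { 34.9, 35.1, 35.0, 35.2, 35.1, 35.0 };
    const double sensor_b[WINDOW_LEN] = { 35.0, 35.0, 35.0, 35.0, 35.0, 35.0 };

    printf("weight A = %.1f, weight B = %.1f\n",
           quality_weight(sensor_a, WINDOW_LEN, -40.0, 125.0),
           quality_weight(sensor_b, WINDOW_LEN, -40.0, 125.0));
    return 0;
}
```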
(Original reference: Turning big data into smart data with embedded AI)
