Neuromorphic Event-Driven Vision Sensors for Motion Recognition


Research Background
Traditional frame-based image sensors transmit absolute light-intensity values at a fixed frame rate, regardless of whether the scene changes, producing a large amount of redundant visual data with limited information content. Inspired by the biological retina, event-driven vision sensors respond only to relevant changes in the scene: they emit spikes only when and where an event occurs, reducing redundant data while retaining sparse but important information. However, there is usually a physical separation between event-driven cameras and the downstream processing units. Existing event-driven cameras, such as dynamic vision sensors (DVS), monitor brightness changes in analog form and send them to digital neuromorphic processors or spiking neural networks (SNNs). To perform motion recognition, large volumes of visual data must be converted and transferred between the DVS and the processing unit, introducing time delays and energy consumption that offset some of the advantages of the DVS concept.
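To make the contrast concrete, here is a minimal sketch, not taken from the paper, of how a DVS-style pixel turns a frame sequence into sparse events: a spike is emitted only where the log intensity changes by more than a contrast threshold. The threshold value and the synthetic frames are illustrative assumptions.

```python
# Minimal sketch of DVS-style event generation (illustrative, not the paper's design):
# a pixel emits an event only when its log intensity changes by more than a threshold.
import numpy as np

def frames_to_events(frames, threshold=0.15):
    """Return a list of (t, y, x, polarity) events from a stack of frames."""
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()          # last intensity that triggered a spike
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        on = diff > threshold                  # brightness increased
        off = diff < -threshold                # brightness decreased
        for polarity, mask in ((+1, on), (-1, off)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(y), int(x), polarity) for y, x in zip(ys, xs))
        reference[on | off] = log_frames[t][on | off]
    return events

# Static pixels produce no events; only the pixel that changes does.
frames = np.ones((4, 8, 8)) * 100.0
frames[2:, 3, 3] = 200.0                       # one pixel brightens at t = 2 and stays bright
print(frames_to_events(frames))                # -> [(2, 3, 3, 1)]
```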
Processing visual information directly at the sensing terminal offers advantages in both latency and energy efficiency. Visual preprocessing at the sensing terminal, such as contrast enhancement, noise reduction and visual adaptation, improves image-recognition accuracy. To perform higher-level computation such as image recognition inside the sensor, photodetectors can be used to build artificial neural networks (ANNs), with their tunable photoresponsivity (R) emulating synaptic weights. In these schemes, every pixel continuously outputs a photocurrent determined by the absolute light intensity, and interconnecting the photodetectors into in-sensor ANNs enables static image recognition. Because such in-sensor ANNs respond to absolute light intensity and lack event-driven characteristics, they generate a large amount of redundant information when processing dynamic motion.
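The following sketch illustrates, with hypothetical numbers, the multiply-accumulate idea behind such in-sensor ANNs: each photodetector's programmable responsivity acts as a weight, and summing the photocurrents on a shared line yields the weighted sum I_j = Σ_i R_ij · P_i.

```python
# Illustrative sketch (hypothetical values) of an in-sensor ANN layer:
# programmable responsivities R_ij act as synaptic weights, and summing the
# photocurrents on a shared output line performs the multiply-accumulate.
import numpy as np

n_pixels, n_outputs = 9, 3                                  # a 3x3 "image" and 3 output lines
rng = np.random.default_rng(0)
R = rng.uniform(-1.0, 1.0, size=(n_pixels, n_outputs))      # programmed responsivities (A/W)
P = rng.uniform(0.0, 1.0, size=n_pixels)                    # absolute optical power per pixel (W)

I = P @ R                                                   # summed photocurrents = weighted sums
print("output currents:", I)
print("predicted class:", int(np.argmax(I)))
```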
Research Findings
Neuromorphic event-based image sensors capture only the dynamic motion in a scene, which then has to be transmitted to a computing unit for motion recognition; this transmission introduces time delays and consumes power. Here, Professor Chai Yang of the Hong Kong Polytechnic University and Professor He Yuhui of Huazhong University of Science and Technology report a computational event-driven vision sensor that captures dynamic motion and directly converts it into programmable, sparse and information-rich spike signals. These sensors can be connected to form a spiking neural network for motion recognition. Each vision sensor consists of two parallel photodiodes with opposite polarities and offers a temporal resolution of 5 μs. In response to changes in light intensity, the sensor generates spikes of different amplitudes and polarities by electrically programming the photoresponsivity of each photodiode. The non-volatile, multi-level photoresponsivity of the vision sensors can emulate synaptic weights and be used to create in-sensor spiking neural networks. This computational event-driven approach eliminates redundant data during sensing and removes the need for data transfer between sensors and computing units. The work was published in Nature Electronics as “Computational event-driven vision sensors for in-sensor spiking neural networks”.
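As an illustration of the event-generation principle, the toy model below (an explanatory assumption, not the paper's circuit) treats the unit's output as the imbalance between the two opposite-polarity photodiodes, so a constant intensity cancels out and only an intensity change produces a spike whose amplitude and sign follow the programmed responsivities.

```python
# Toy model (an assumption for illustration) of one event-driven unit: two
# photodiodes of opposite polarity are programmed to responsivities R_pos and
# R_neg. Equal steady illumination cancels; a change in intensity leaves a
# transient imbalance that appears as a spike of programmable amplitude and sign.
def unit_output(P_prev, P_now, R_pos=0.8, R_neg=0.8):
    """Spike current for one time step: R_pos senses the new intensity,
    R_neg the previous one, so I = R_pos * P_now - R_neg * P_prev."""
    return R_pos * P_now - R_neg * P_prev

light = [1.0, 1.0, 3.0, 3.0, 0.5, 0.5]              # optical power over time (a.u.)
spikes = [unit_output(p0, p1) for p0, p1 in zip(light, light[1:])]
print(spikes)   # non-zero only at the two intensity changes (about +1.6 and -2.0)
```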
Visual Guide
Fig. 1 | Event-driven in-sensor spiking neural network.
Fig. 2 | Non-volatile and programmable photoresponsivity of the WSe2 photodiode.
Fig. 3 | Tunable event-driven characteristics in the unit based on the WSe2 photodiode.
Fig. 4 | In-sensor SNN for motion recognition.
Conclusion and Outlook
The authors report a computational event-driven vision sensor that generates tunable current spikes only when the light intensity changes, with a temporal resolution of 5 μs. The WSe2 photodiode exhibits non-volatile, programmable and linearly light-intensity-dependent photoresponsivity, enabling it to emulate synaptic weights in a neural network. By further integrating output neurons, an in-sensor SNN can be constructed that achieves a motion-recognition accuracy of up to 92%. Direct motion recognition within event-driven sensing terminals could enable real-time edge-computing vision chips.
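A minimal sketch of the read-out idea, with hypothetical parameters: each output neuron leakily integrates the spike currents delivered by the event-driven pixels, and the first neuron to reach threshold indicates the recognized motion class.

```python
# Minimal sketch (hypothetical parameters) of the SNN read-out stage: output
# neurons integrate incoming spike currents with leak, and the first neuron to
# cross threshold gives the recognized motion class.
import numpy as np

def classify(spike_currents, threshold=5.0, leak=0.9):
    """spike_currents: array of shape (time_steps, n_classes) of summed input currents."""
    v = np.zeros(spike_currents.shape[1])         # membrane potentials
    for i_t in spike_currents:
        v = leak * v + i_t                        # leaky integration of input spikes
        fired = np.nonzero(v >= threshold)[0]
        if fired.size:                            # first neuron to reach threshold wins
            return int(fired[0])
    return int(np.argmax(v))                      # fall back to the largest potential

rng = np.random.default_rng(1)
currents = rng.uniform(0.0, 1.0, size=(50, 3))
currents[:, 2] += 1.0                             # class 2 receives the strongest drive
print("recognized class:", classify(currents))   # -> 2
```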
References
Computational event-driven vision sensors for in-sensor spiking neural networks. Nature Electronics (2023). https://doi.org/10.1038/s41928-023-01055-2
