How Brain Research Inspires Ultra-Low Power AI Models and Algorithms

Introduction

How brain research inspires better AI design is a key topic for our AI by Complexity reading group. In the tenth session, Professor Yuguo Yu of Fudan University will present a theoretical analysis of phenomena such as high sparsity, variability, and energy efficiency, deepening our understanding of how the brain functions so efficiently and offering insights for designing ultra-low-power next-generation AI models and algorithms.

Content Summary

How cortical neurons represent sensory stimuli through dynamic firing patterns is a central scientific question. Over the past 30 years, experimental and computational neuroscientists have documented high sparsity, high variability, and high energy efficiency in the signaling of neurons and networks. Whether the high variability of neural responses reflects the coding capability of the nervous system itself or merely inherent randomness remains hotly debated. We attempt a theoretical analysis of the intrinsic relationships among these features. By defining three concepts and carrying out mathematical derivations, we find that, across the typical statistical distributions of neural response activity under varying stimulus conditions, the sparsity of neural activity and the coefficient of variation obey an exact functional relationship. We thus show that both quantities in essence measure the energy efficiency of information representation: the amount of information the nervous system can represent at a given energy cost. The high variability that neural networks exhibit during learning is therefore a signature of efficient sparse coding, which maximizes represented information under a fixed energy budget. This theoretical framework helps to comprehensively understand the brain's efficient learning and the trends of network evolution, and offers valuable guidance for designing brain-inspired, ultra-low-power next-generation AI models and algorithms.
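As a concrete illustration of the two quantities the summary relates, here is a minimal Python sketch that computes a standard population sparseness measure (the Treves-Rolls measure [2, 3]) and the coefficient of variation (CV) of interspike intervals [6, 7] for simulated activity. The Poisson spike generator and all parameter values are illustrative assumptions, not taken from the talk or the preprint [1].

```python
import numpy as np

rng = np.random.default_rng(0)

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness a = (mean r)^2 / mean(r^2).
    Approaches 1/N for a one-hot code; equals 1 for uniform firing."""
    rates = np.asarray(rates, dtype=float)
    return rates.mean() ** 2 / np.mean(rates ** 2)

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals.
    CV ~ 1 for a homogeneous Poisson process; 0 for a clock-like train."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

# Illustrative population rate profiles (values in Hz are arbitrary):
# exponentially distributed rates give a heavy-tailed, sparse-like code.
sparse_rates = rng.exponential(scale=5.0, size=1000)
dense_rates = np.full(1000, 5.0)
print("sparseness, exponential rates:", treves_rolls_sparseness(sparse_rates))  # ~0.5
print("sparseness, uniform rates:   ", treves_rolls_sparseness(dense_rates))    # 1.0

# Poisson spike train (exponential ISIs) vs. a perfectly regular train.
poisson_train = np.cumsum(rng.exponential(scale=0.2, size=500))
regular_train = np.arange(500) * 0.2
print("CV, Poisson train:", isi_cv(poisson_train))  # ~1
print("CV, regular train:", isi_cv(regular_train))  # 0.0
```

The summary's claim is that, across typical response distributions, these two numbers are not independent but are tied by an exact functional relationship, with both ultimately measuring information represented per unit energy.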

Keywords

Sparsity; Irregularity; High Energy Efficiency; Neural Encoding

Sharing Outline

1. Scientific background of the research

2. Variability and sparsity of neural spiking activity (the S-CV relation) and their equivalence

3. Firing irregularity and sparsity as indicators of energy-efficient coding capacity in neural networks (a numerical sketch of this argument follows the outline)

4. The essence of the S-CV relation and its implications for AI models and the development of energy-efficient, reliable algorithms
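To make the energy-efficiency argument behind items 3 and 4 concrete, here is a toy calculation in the spirit of classic sparse-coding accounts [5]: for a binary code in which each neuron fires with probability a, and energy cost is proportional to spike count, the information carried per spike grows as the code becomes sparser. This is a generic textbook-style illustration under simplified assumptions, not the derivation presented in the talk or the preprint [1].

```python
import numpy as np

def bits_per_spike(a):
    """Entropy of a binary unit with firing probability a (in bits),
    divided by its expected spike count a. Smaller a (a sparser code)
    yields more information per spike."""
    entropy = -a * np.log2(a) - (1.0 - a) * np.log2(1.0 - a)
    return entropy / a

for a in (0.5, 0.1, 0.01):
    print(f"activity level {a:>4}: {bits_per_spike(a):5.2f} bits/spike")
# activity level  0.5:  2.00 bits/spike
# activity level  0.1:  4.69 bits/spike
# activity level 0.01:  8.08 bits/spike
```

Note that driving a toward zero is not free: total entropy per neuron falls even as bits per spike rise, which is why the question is framed as maximizing represented information at a fixed energy budget rather than simply minimizing activity.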

References

  1. Huang, M., Lin, W., Roe, A., & Yu, Y. (2024). A unified theory of response sparsity and variability for energy-efficient neural coding. bioRxiv preprint, 2024/614987.
  2. Rolls, E. T., & Tovee, M. J. (1995). Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex. Journal of Neurophysiology, 73, 713-726.
  3. Treves, A., & Rolls, E. T. (1991). What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Systems, 2, 371-397.
  4. Haider, B., Krause, M. R., Duque, A., et al. (2010). Synaptic and network mechanisms of sparse and reliable visual cortical activity during nonclassical receptive field stimulation. Neuron, 65, 107-121.
  5. Olshausen, B. A., & Field, D. J. (2004). Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14, 481-487.
  6. Softky, W. R., & Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13, 334-350.
  7. Lengler, J., & Steger, A. (2017). Note on the coefficient of variations of neuronal spike trains. Biological Cybernetics, 111, 229-235.
  8. Yu, Y., Migliore, M., Hines, M. L., & Shepherd, G. M. (2014). Sparse coding and lateral inhibition arising from balanced and unbalanced dendrodendritic excitation and inhibition. Journal of Neuroscience, 34, 13701–13713.
  9. Yu, Y., McTavish, T. S., Hines, M. L., Shepherd, G. M., Valenti, C., & Migliore, M. (2013). Sparse distributed representation of odors in a large-scale olfactory bulb circuit. PLoS Computational Biology, 9, e1003014.
  10. Marder, E. (2011). Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences, 108, 15542-15548.
  11. Faisal, A. A., Selen, L. P., & Wolpert, D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9, 292-303.

Speaker

Yuguo Yu, Professor at Fudan University, researcher at the State Key Laboratory of Medical Neurobiology, and dual-appointed researcher. PhD in Physics from Nanjing University (2001); Postdoctoral Fellow in Computational Neuroscience at Carnegie Mellon University (2001-2004); Postdoctoral Fellow and Associate Research Scientist at Yale University School of Medicine (2005-2011). Member of the Chinese Society for Computational Neuroscience, the Brain-Machine Integration and Bio-Machine Intelligence Committee, and the Biomedical Engineering and Automatic Control Committee; associate editor of journals including IEEE Transactions on Cognitive and Developmental Systems, Frontiers in Computational Neuroscience, and Cognitive Neurodynamics. He has led more than 10 major projects funded by the Ministry of Science and Technology and the National Natural Science Foundation of China, and has published over 70 SCI papers in journals including Nature, PNAS, Neuron, Physical Review Letters, Journal of Neuroscience, and PLoS Computational Biology. Research interests: brain-inspired theory of intelligent complex systems and neural computation mechanisms.
Pattern homepage: https://pattern.swarma.org/user/13970

Live Information

Live time: September 28 (Saturday), 15:00-17:00

Register for the live broadcast, or register to join the reading group for online discussion via Tencent Meeting:
Pattern link: https://pattern.swarma.org/study_group/45?from=wechat
Scan the QR code to join the AI by Complexity reading group chat, gain access to replays of the series, join the AI by Complexity community, and discuss the intersection of complexity science and AI with frontline researchers.
Sign up to become a speaker
All reading group members can apply to become speakers. Speakers, as reading group members, follow the content co-creation and sharing mechanism: they receive a refund of the registration fee and share in all content resources generated by the reading group. For details, see: AI by Complexity reading group launch: How to quantify and drive the next generation of AI systems through complexity.

AI By Complexity Reading Group is Recruiting

Large models, multi-modal systems, and multi-agent systems are emerging one after another, and neural network variants of every kind are showing their strengths on the AI stage. Meanwhile, the field of complex systems continues to probe emergence, hierarchy, robustness, nonlinearity, and evolution. Excellent AI systems and innovative neural networks often exhibit, to some extent, the characteristics of good complex systems. How the developing theory and methods of complex systems can guide the design of future AI is therefore becoming a question of intense interest.

The Intelligence Club, in collaboration with Assistant Professor You Yizhuang (University of California, San Diego), Associate Professor Liu Yu (Beijing Normal University), PhD student Zhang Zhang and Master's students Mu Yun and Yang Mingzhe (School of Systems Science, Beijing Normal University), and PhD student Tian Yang (Tsinghua University), jointly initiated the "AI By Complexity" reading group to explore: How do we measure the "goodness" of complex systems? How do we understand their mechanisms? Can these understandings inspire us to design better AI models, and ultimately help us build better AI systems? The reading group started on June 10 and meets every Monday evening from 20:00 to 22:00. Friends working in related research fields or interested in AI + Complexity are welcome to sign up and join the discussion!


For details, see:
AI by Complexity reading group launch: How to quantify and drive the next generation of AI systems through complexity.

Previous Shares:

  1. First session: Zhang Zhang, Yu Guo, Tian Yang, Mu Yun, Liu Yu, Yang Mingzhe: How to quantify and drive the next generation of AI systems through complexity.
  2. Second session: Xu Yizhou, Weng Kangyu: Research on structured noise and neural network initialization from the perspective of statistical physics and information theory.
  3. Third session: Liu Yu: “Compression is Intelligence” and Algorithmic Information Theory.
  4. Fourth session: Cheng Aohua, Xiong Wei: From high-order interactions to neural operator models: Inspiring better AI.
  5. Fifth session: Jiang Chunheng: Network properties determine the performance of neural network models.
  6. Sixth session: Yang Mingzhe: Causal emergence inspiring AI’s “thinking”.
  7. Seventh session: Zhu Qunxi: From complex systems to generative artificial intelligence.
  8. Ninth session: Lan Yueheng: Life, intelligent emergence, and complex systems research.

Click “Read the original text” to register for the reading group.
