With the rapid development of robotics and artificial intelligence, intelligent robots are becoming a revolutionary engine that drives new productive forces and leads a new round of global technological revolution. Intelligent robots can perceive, analyze, make decisions, and even learn autonomously, and the main control chip, as the core computing platform that supplies this computing power, has become a key factor determining their functionality and performance.
As the intelligence level of mobile robots continues to improve, the complex tasks they execute, such as perception, localization, mapping, navigation, and interaction, pose two key challenges to the computing power and energy efficiency of the main control chip. On one hand, these tasks require processing large amounts of sensor data with frequent memory access, demanding high computing power for real-time computation and fast response. On the other hand, high computing power brings high energy consumption, while a mobile robot's battery capacity is limited, so the chip must improve energy efficiency without sacrificing computing power in order to extend battery life.
Dedicated computing chips can be specifically designed and optimized for certain algorithms, tasks, and requirements, serving as a key approach to address the dual challenges of computing power and energy efficiency. Developing high-efficiency dedicated chips for intelligent robots, particularly for small, micro, and nano-sized intelligent mobile robots with size and energy constraints, has become one of the important trends in current research.
To this end, Issue 11 of "Integrated Circuits and Embedded Systems" launches a special "Research Column on High-Efficiency Dedicated Chips for Intelligent Robots", aiming to give readers a comprehensive professional perspective on the importance of high-efficiency dedicated chips in the field of intelligent robots and on their future development directions. The column covers cutting-edge technologies and innovative achievements related to dedicated chips for simultaneous localization and mapping (SLAM), vision chips, and artificial intelligence chips used in intelligent mobile robots, and discusses the applications and development trends of high-efficiency dedicated chips in intelligent robots.
We hope this column will promote technical exchange and academic cooperation in the field of intelligent robots and encourage joint exploration of new routes, architectures, and technologies for the future main control chips of intelligent robots, thereby strengthening the independent controllability of China's dedicated-chip technology chain and industrial chain, of which intelligent robots are a representative application, and supporting the optimization and upgrading of the industry and high-quality economic development in China.
The six articles in this column are as follows:
1. Research Overview on Dedicated Chips for SLAM in Intelligent Robots
Liu Bingqiang1, Shen Zixuan1, Wang Jipeng1, Xiao Jian2, Tan Yulong1, He Zaisheng3, Xu Dengke3, Wang Ke4, Qu Weixin6, Wang Chao1,2, Sun Lining4,5,6
(1. Huazhong University of Science and Technology, School of Optics and Electronics; 2. Huazhong University of Science and Technology, Academy of Future Technology; 3. Zhuhai Yimi Semiconductor Co., Ltd.; 4. Harbin Institute of Technology, National Key Laboratory of Robot Technology and Systems; 5. Soochow University, School of Mechanical and Electrical Engineering; 6. Soochow University, Xiangcheng Robot and Intelligent Equipment Research Institute)
Abstract: Robots are a revolutionary engine of new productive forces, reshaping the way humans live and work. Simultaneous localization and mapping (SLAM) technology enables a robot to navigate autonomously in unknown environments and build maps of its surroundings, and is therefore a cornerstone of intelligent mobile robots. However, SLAM algorithms are complex and computationally intensive, and running them on general-purpose chips suffers from long latency and high power consumption, which cannot meet the real-time, size, and power requirements of autonomous mobile robots, especially small, micro, and nano robots. Designing dedicated chips to accelerate the computation-intensive SLAM algorithms has therefore attracted significant attention from academia and industry in recent years. Starting from the basic concepts and application scenarios of SLAM technology, this paper first explains the necessity of hardware acceleration for SLAM algorithms, then reviews the research status and development trends of SLAM technology from the perspectives of algorithms and dedicated chip design, and finally discusses the technical challenges and solutions in SLAM dedicated chip research, providing suggestions for future development.
Reference format for this article: Liu Bingqiang, Shen Zixuan, Wang Jipeng, et al. Research Overview on Dedicated Chips for SLAM in Intelligent Robots[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 1-14.
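For readers new to the topic, the full SLAM problem that these dedicated chips target is commonly posed as a maximum a posteriori estimate over the robot trajectory and the map (a textbook formulation, not a result of the paper above):

\[
x_{1:T}^{*},\, m^{*} = \arg\max_{x_{1:T},\, m}\; p\left(x_{1:T}, m \mid z_{1:T}, u_{1:T}\right)
\]

where \(x_{1:T}\) is the robot trajectory, \(m\) the map, \(z_{1:T}\) the sensor observations, and \(u_{1:T}\) the control inputs. Solving this estimate in real time on a battery-powered platform is the workload that motivates hardware acceleration.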
2. Overview of Stereo Vision Processors for Robot Navigation
Chen Zhuoyu, An Fengwei
(Southern University of Science and Technology, Shenzhen-Hong Kong Microelectronics Institute)
Abstract: With the rapid development of the robotics industry, robot technology has become a new driving force for improving productivity, and technologies such as 3D reconstruction and obstacle-avoidance navigation are becoming increasingly important. Active 3D imaging technologies based on Time of Flight (ToF) and structured light are limited by low resolution, lack of color information, and susceptibility to ambient light interference, so their performance is often unsatisfactory. Passive stereo vision sensors that output dense depth and color information (RGB-D) in real time have therefore been widely applied in fields such as autonomous mobile robots, automobiles, and micro-drones. However, stereo vision, which mimics human binocular vision by computing disparity to obtain depth, has high computational complexity; relying on general-purpose computing platforms leads to high energy consumption and latency, limiting the technology's use in high-speed scenarios, small robots, and edge computing. In recent years, stereo vision processors that integrate dedicated hardware accelerators for stereo vision algorithms have attracted significant attention in academia and industry. This paper first systematically presents the theoretical basis of stereo 3D vision and application examples in robotic stereo vision, then introduces the structure of stereo vision processors, including core components such as image acquisition, camera calibration and rectification, and stereo matching. For the benefit of stereo vision hardware developers, the paper reviews the basic concepts, research status, and challenges of each core component of a stereo vision system, with a particular focus on comparing new hardware computing architectures.
Reference format for this article: Chen Zhuoyu, An Fengwei. Overview of Stereo Vision Processors for Robot Navigation[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 15-28.
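As a concrete reminder of the disparity-to-depth computation mentioned above, the standard relation for a rectified stereo pair (a textbook identity, not specific to this paper) with focal length \(f\), baseline \(B\), and per-pixel disparity \(d = x_l - x_r\) is

\[
Z = \frac{f\,B}{d}
\]

so producing dense RGB-D output requires estimating \(d\) for essentially every pixel, which is the per-pixel stereo matching workload that the surveyed processors accelerate.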
3. Research Overview on Bundle Adjustment Optimization Chips in Visual SLAM Robots
Mo Xiaorui1, Zhang Weiyi1, Nian Cheng1, Guo Yushi1, Niu Liting1, Zhang Baiwen2, Zhang Chun1
(1. Tsinghua University, School of Integrated Circuits; 2. Beijing Academy of Science and Technology, Institute of Information and Artificial Intelligence)
Abstract: In visual simultaneous localization and mapping (VSLAM) systems, Bundle Adjustment (BA) is an important step for optimizing camera parameters and the positions of 3D points. However, due to the high computational complexity of BA and the high real-time requirements, traditional computing platforms struggle to meet the demand for efficient computation. In recent years, the introduction of dedicated hardware accelerators has provided new solutions for BA optimization. This paper reviews the research status and development trends of dedicated chips for BA optimization, covering the application scenarios, definitions, and basic principles of BA algorithms; acceleration methods for BA on Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), and Graphics Processing Units (GPUs), as well as the development trends of these accelerators. Additionally, this paper discusses the challenges faced by BA accelerators in technical implementation and looks forward to their future development directions.
Reference format for this article: Mo Xiaorui, Zhang Weiyi, Nian Cheng, et al. Research Overview on Bundle Adjustment Optimization Chips in Visual SLAM Robots[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 29-40.
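For context, the objective that BA accelerators implement is the classical nonlinear least-squares minimization of total reprojection error over camera parameters \(C_i\) and 3D points \(X_j\) (a standard formulation, not taken from the paper above):

\[
\min_{\{C_i\},\{X_j\}} \sum_{i,j} \rho\left( \left\lVert \pi\left(C_i, X_j\right) - x_{ij} \right\rVert^2 \right)
\]

where \(\pi(\cdot)\) projects point \(X_j\) into camera \(i\), \(x_{ij}\) is the observed image coordinate, and \(\rho\) is an optional robust kernel. The problem is typically solved with Levenberg-Marquardt iterations that exploit the sparse block structure of the normal equations, and it is this sparse linear algebra that the FPGA, ASIC, and GPU accelerators discussed in the paper aim to speed up.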
4. Research Overview on Hardware Accelerators for Neural Radiance Fields
Wu Lizhou, Zhu Haozhe, Chen Chixiao
(Fudan University, Institute of Frontier Technologies for Chips and Systems)
Abstract: Neural Radiance Fields (NeRF) is an emerging method for reconstructing 3D scenes whose application prospects in robotics are attracting great interest. NeRF learns the features of a 3D scene through multi-layer perceptrons (MLPs) to achieve high-fidelity image rendering, providing a foundation for robot navigation, localization, and perception in complex environments. Its core stages (ray sampling, feature extraction, and volume rendering) involve heavy computation and irregular memory access, which limit deployment on existing hardware platforms, especially edge devices, and make it necessary to explore new hardware architectures and software-hardware co-optimization schemes. This paper systematically presents the technical principles and algorithmic evolution of NeRF and analyzes its performance bottlenecks on existing hardware. On this basis, it introduces the typical working principles of NeRF hardware accelerators, summarizes three main optimization directions (image similarity optimization, spatial sparsity optimization, and memory access optimization), and analyzes the commonalities and differences among existing designs. In addition, in the context of application scenarios such as SLAM and AIGC, it discusses the limitations and challenges that current NeRF accelerators face in open-scene tasks with respect to scalability and storage constraints. Finally, it offers suggestions for future development, aiming to inspire further applications and optimizations of NeRF hardware accelerators.
Reference format for this article: Wu Lizhou, Zhu Haozhe, Chen Chixiao. Research Overview on Hardware Accelerators for Neural Radiance Fields[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 41-50.
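As a reference point for the volume rendering step mentioned in the abstract, the compositing equation used by NeRF (from the original NeRF formulation, not specific to the surveyed accelerators) accumulates per-sample colors \(c_i\) and densities \(\sigma_i\) along each camera ray \(r\):

\[
\hat{C}(r) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) c_i,
\qquad
T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)
\]

where \(\delta_i\) is the spacing between adjacent samples. Every sample requires an MLP evaluation, which is the source of the heavy computation and irregular memory access that the accelerators target.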
5. Design of High-Efficiency Visual SLAM Hardware Accelerators
Qi Xiuyuan, Liu Ye, Hao Shuang, Zhou Jun
(University of Electronic Science and Technology of China, School of Information and Communication Engineering)
Abstract: With the continuous iteration and development of computer vision technology, intelligent applications and devices based on it are playing an increasingly important role in daily life and work. Among them, visual simultaneous localization and mapping (SLAM) is widely used in robots, drones, autonomous driving, and other fields that require accurate positioning information for precise mapping and autonomous navigation. However, visual SLAM algorithms involve extremely heavy computation and strong data dependencies, so running them on traditional hardware platforms (CPUs or GPUs) makes it difficult to meet the real-time and low-power requirements of these edge applications, which has become a key factor limiting the widespread adoption of visual SLAM. To address this issue, this paper proposes a high-efficiency dedicated accelerator for visual SLAM based on algorithm-hardware co-design, targeting the ORB feature extraction and matching algorithms. It improves computing performance and energy efficiency through several hardware design techniques, including multi-level parallel computing based on data-dependency decoupling, data storage based on multi-size buckets, and a pixel-level symmetric lightweight descriptor generation and orientation calculation strategy. The proposed visual SLAM accelerator was tested and verified on the Xilinx ZCU104. Its accuracy loss relative to the ORB-SLAM2 algorithm is within 5%, while the frame rate reaches 108 fps; compared with other contemporary hardware accelerators, lookup table usage is reduced by 32.7% and flip-flop (FF) usage by 41.17%, with frame rate improvements of 1.4 times and 0.74 times, respectively.
Reference format for this article: Qi Xiuyuan, Liu Ye, Hao Shuang, et al. Design of High-Efficiency Visual SLAM Hardware Accelerators[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 51-59.
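To illustrate the software baseline of the workload this accelerator offloads, the sketch below uses OpenCV's ORB implementation for feature extraction and Hamming-distance matching. This is a minimal illustrative sketch, not the authors' hardware design or code; the image file names are placeholders.

```python
# Minimal software sketch of the per-frame ORB extract-and-match workload that the
# accelerator above offloads to hardware. Illustrative only; not the authors' design.
import cv2

# Placeholder file names for two consecutive grayscale frames.
img_prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
img_curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

# ORB = FAST corner detection + intensity-centroid orientation + rotated BRIEF descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp_prev, des_prev = orb.detectAndCompute(img_prev, None)
kp_curr, des_curr = orb.detectAndCompute(img_curr, None)

# Binary descriptors are compared by Hamming distance; cross-checking keeps only
# mutually best matches, a common filter before pose estimation in ORB-SLAM-style pipelines.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

print(f"{len(kp_prev)}/{len(kp_curr)} keypoints, {len(matches)} cross-checked matches")
```

Running this loop for every frame at camera resolution is what dominates the front-end cost of visual SLAM and motivates the dedicated datapath and storage techniques described in the paper.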
6. Overview of AI Chip Design Technologies for Robots
Gao Jinyang1, Fan Zhendong1, Bao Minjie1, Wang Ke1, Li Ruifeng1, Kang Peng2
(1. Harbin Institute of Technology, National Key Laboratory of Robot Technology and Systems; 2. Jianghuai Frontier Technology Collaborative Innovation Center)
Abstract: The combination of robots and artificial intelligence will lead a new technological revolution, and artificial neural networks have great potential in robotic perception. However, as AI algorithms grow more complex and the energy efficiency bottleneck of general-purpose processors such as CPUs becomes more pronounced, traditional processing chips can no longer handle the inference workloads of large-scale neural networks effectively. In recent years, AI chips for robots, with their high computing power and low power consumption, have become an ideal choice for deploying neural networks in robotic systems and have attracted widespread attention. This paper surveys the current status of AI algorithms in robotic applications, summarizes the latest advances in AI chip design technologies, identifies technical challenges and feasible technical routes, and discusses the technological trends and challenges in the design of AI chips for robots.
Reference format for this article: Gao Jinyang, Fan Zhendong, Bao Minjie, et al. Overview of AI Chip Design Technologies for Robots[J]. Integrated Circuits and Embedded Systems, 2024, 24(11): 60-77.
Wang Chao is a jointly appointed researcher and doctoral supervisor at the School of Optics and Electronics of Huazhong University of Science and Technology and the Wuhan National Research Center for Optoelectronics. His main research directions include ultra-low-power integrated circuit design, novel artificial intelligence processor chip design, and optoelectronic-fusion intelligent sensor integrated circuit design. His work has been funded by the Fundamental Research Funds for the Central Universities, the National Natural Science Foundation of China (2 projects), national key projects, National Key R&D Program projects, and a Wuhan major special project on "bottleneck" technologies, and he has published more than 90 international academic papers in venues including JSSC, TCAS-I/II, TBioCAS, JETCAS, CAS-M, ISSCC, and A-SSCC. Dr. Wang is a senior member of IEEE, co-founder and current chair of the IEEE CASS-EDS-SSCS Wuhan joint chapter for integrated circuits, and advisor of the IEEE CASS-EDS-SSCS student branch chapter for integrated circuits at Huazhong University of Science and Technology. He has served as a technical program committee (TPC) member for several international academic conferences, as a guest editor for IEEE TBioCAS, and as an associate editor (AE) for journals such as IEEE TCAS-I and IEEE CAS Magazine.
Further Reading:
1. Part-I of the EDA Research Column of "Integrated Circuits and Embedded Systems" Issue 1
2. Part-II of the EDA Research Column of “Integrated Circuits and Embedded Systems”
3. Research Column on Chiplet in “Integrated Circuits and Embedded Systems” Issue 2
4. Research Column on Aerospace Integrated Circuits in “Integrated Circuits and Embedded Systems” Issue 3
5. Research Column on CMOS Image Sensors in “Integrated Circuits and Embedded Systems” Issue 5
6. Research Column on Integrated Circuit Reliability in "Integrated Circuits and Embedded Systems" Issue 7
7. Research Column on Integrated Circuit Hardware Security in "Integrated Circuits and Embedded Systems" Issue 9
The journal "Integrated Circuits and Embedded Systems" is supervised by the Ministry of Industry and Information Technology and hosted by Beihang University. Focusing on key issues in the integrated circuit field, it aims to provide a platform for experts and scholars in the industry to communicate and share, to support the independent cultivation of high-level talent, to promote the deep integration of industry, academia, and research, and to jointly advance the steady development of China's integrated circuit industry.
This journal publishes reviews and academic papers related to integrated circuits and embedded systems, covering circuit and system theory and technology, large-scale integrated circuit design and manufacturing, embedded and system optimization, chaos and nonlinear circuits, sensors and the Internet of Things, and signal and information processing systems.
Submissions are welcome via the journal's official website (www.jices.cn).
Email: [email protected]
Contact number: 010-82338009