Walking, running, jumping… and now humanoid robots have achieved a new breakthrough: standing up on their own! Recently, the Shanghai Artificial Intelligence Laboratory released the "world's first humanoid robot autonomous standing control algorithm," which lets the robot stand up autonomously and steadily from various postures and on various terrains. How is this achieved? Let's take a look ↓
New Breakthrough!
The Humanoid Robot Has Learned to Stand Up Autonomously
Whether it is sitting, leaning, or lying down, and whether it is on grass, gravel, or a soft mat, this humanoid robot can quickly stand up on its own like a human, with smooth, decisive movements.

Standing up places high demands on dynamics and balance, challenging the humanoid robot's limb coordination, center-of-gravity adjustment, and balance control.

To achieve standing up, previous methods in the industry typically had the robot memorize a fixed set of motion trajectories. Once the environment changed, however, the robot was often at a loss. To let the robot stand up autonomously in a variety of scenarios, as a human does, the team innovatively applied reinforcement learning to the humanoid standing-up task.

Huang Tao, a doctoral student at the Shanghai Artificial Intelligence Laboratory: First, we place the robot in a constructed simulation environment that reproduces the various terrains and scenarios it might encounter in the real world. Then we let the robot start from scratch in that scene and explore and learn like a child. We set objective functions, or design goals for it to reach certain states, and in the end it learns to stand up in all kinds of scenarios.

From Virtual to Reality
The Simulation Platform Reduces Costs and Increases Efficiency for Humanoid Robot Training

For robots to become more agile and intelligent, they rely on vast amounts of high-quality data. How to collect that data more efficiently and at lower cost has become a key issue in the current development of the robotics industry.

At the Shanghai Artificial Intelligence Laboratory, the team is adopting a new way of collecting training data for robots. What is new about this approach, and how will it help robots integrate into our production and daily lives? Click the video to find out ↓

Data is the key to the continuous evolution of robots. For every simple skill a robot acquires, it needs to learn from nearly a hundred pieces of high-quality data. In the past, most of this data came from the real world, for example from real-machine training and simulated teleoperation. At the Shanghai Artificial Intelligence Laboratory, however, reporters observed that on a general-purpose embodied-intelligence simulation platform built by the team, a robot was repeatedly practicing various skills in highly realistic warehouse, home, and office environments.

On this platform, everything from the spatial layout to the position and material of objects, and even the intensity of the light and interactions with people, can be freely edited, effectively creating a comprehensive training ground for the robot and allowing high-quality data to be collected efficiently in the digital world.
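To give a rough sense of the kind of scene editing and randomization described above, the sketch below shows how a simulation script might vary layout, materials, and lighting before each data-collection episode. This is an illustrative example only: the platform's actual interfaces are not public, and every name and parameter here (SceneConfig, sample_scene, the value ranges) is hypothetical.

```python
import random
from dataclasses import dataclass


@dataclass
class SceneConfig:
    """Hypothetical parameters a simulated training scene might expose."""
    layout: str            # e.g. warehouse, home, or office arrangement
    object_material: str   # surface material assigned to manipulated objects
    light_intensity: float # relative lighting level
    object_position: tuple # where the target object is placed


def sample_scene() -> SceneConfig:
    """Randomly sample a scene so each episode yields different training data."""
    return SceneConfig(
        layout=random.choice(["warehouse", "home", "office"]),
        object_material=random.choice(["plastic", "metal", "cardboard"]),
        light_intensity=random.uniform(0.3, 1.0),
        object_position=(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5), 0.0),
    )


# Collect one simulated episode per sampled scene; the rollout itself would
# come from whatever simulator and robot model are actually in use.
for episode in range(3):
    scene = sample_scene()
    print(f"episode {episode}: {scene}")
```

Editing the scene in software rather than rebuilding it physically is what lets a single simulated robot gather many varied, high-quality demonstrations far faster and more cheaply than a real one could.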
By reconstructing reality as a virtual world, letting robots train and collect data on the virtual platform, and then transferring the learned skills back onto real robots, the team has established a virtual-real integrated technology route for the evolution of embodied intelligence. This shortens the time and cost of training robots for specific environments while improving their adaptability to different scenarios.

Robots trained along this route can now accurately perform complex tasks such as plugging and unplugging connectors, picking and packing, and real-time obstacle avoidance, significantly shortening their training and growth cycle.
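As a minimal sketch of the reinforcement-learning idea behind the standing-up skill described earlier, the example below defines a simple shaped reward that encourages the simulated robot to raise and level its torso while moving smoothly. The laboratory's actual objective functions, simulator, and training algorithm are not public; the reward terms, weights, and target height here are assumptions for illustration.

```python
import numpy as np


def standing_reward(torso_height: float,
                    uprightness: float,
                    joint_velocities: np.ndarray,
                    target_height: float = 0.9) -> float:
    """Illustrative shaped reward for a standing-up policy.

    torso_height:     current height of the torso above the ground, in meters
    uprightness:      dot product of the torso's up-axis with world up, in [-1, 1]
    joint_velocities: joint speeds, penalized to discourage jerky motion
    target_height:    assumed nominal standing height (hypothetical value)
    """
    height_term = -abs(target_height - torso_height)                    # get the torso up
    posture_term = uprightness                                           # keep the torso level
    smoothness_term = -0.01 * float(np.square(joint_velocities).sum())   # move smoothly
    return height_term + posture_term + smoothness_term


# In training, the simulated robot would repeatedly be reset into random fallen
# or seated postures on random terrain, the current policy rolled out, and the
# policy updated with a standard RL algorithm (e.g. PPO). The resulting policy
# is then deployed on the real robot, following the virtual-to-real route
# described in the article.
```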
Source: CCTV Channel 1 WeChat Official Account (ID: CCTV-channel1)
Based on “News Live” and CCTV News Client
If there are copyright issues, please contact us, and we will handle them promptly.