Figure AI has successfully trained its humanoid robot, Figure 02, to walk in a human-like manner. Using reinforcement learning (RL), the robots learn through trial and error across thousands of attempts.
Advances in humanoid robot locomotion: Figure 02
To teach Figure 02 robots to walk naturally like humans, the company created a high-fidelity simulated environment in which thousands of virtual robots train in parallel under unique physical conditions. In just a few hours of simulation, years of data are generated, allowing the robot to develop natural locomotion with an impressive level of realism.
Each of these virtual robots faces diverse terrain and scenarios, including obstacles and changes in the dynamics of its actuators. These virtual training sessions allow Figure 02 to learn to adapt to complex situations, such as trips or slips, and to perform human-like movements: heel strike, arm swing, and leg synchronization.
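The idea of parallel simulated training can be illustrated with a minimal sketch. This is not Figure's code; the environment count, step rate, and toy dynamics below are all assumptions chosen only to show how thousands of robots with individually randomized conditions can be stepped at once with vectorized math.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_ENVS = 4096   # virtual robots trained in parallel (illustrative count)
SIM_DT = 0.005    # 200 Hz physics step (assumed value)

# Per-environment randomized conditions: each robot gets its own
# ground friction and actuator strength.
friction = rng.uniform(0.4, 1.2, size=NUM_ENVS)
motor_strength = rng.uniform(0.8, 1.2, size=NUM_ENVS)

def step_all(joint_torques: np.ndarray) -> np.ndarray:
    """Advance every environment one physics step (toy dynamics)."""
    # Effective torque depends on each robot's randomized actuators.
    effective = joint_torques * motor_strength[:, None]
    # Toy "velocity" update scaled by that robot's ground friction.
    return effective * friction[:, None] * SIM_DT

velocities = step_all(np.ones((NUM_ENVS, 12)))  # 12 joints per robot (assumed)
print(velocities.shape)  # (4096, 12)

# Rough intuition for the "years of data in hours" claim: at real-time
# speed, NUM_ENVS parallel robots accumulate this many sim-years per hour.
sim_years_per_hour = NUM_ENVS / (24 * 365)
```

Because every per-robot parameter lives in an array, one vectorized call advances the whole fleet, which is what makes the simulation-to-wall-clock ratio so favorable.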
Although a simulated robot is not identical to a real system, Figure AI has managed to overcome this problem by using domain randomization in simulations. This technique introduces variations in the physical parameters of virtual robots, allowing the neural network to generalize its behavior to different types of physical hardware without the need for additional adjustments.
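Domain randomization itself is simple to sketch. The parameter names and ranges below are assumptions typical of RL locomotion work, not Figure's actual simulator settings: each training episode samples fresh physical parameters, so the neural network cannot overfit to one exact robot model and must generalize.

```python
import random

def randomize_domain(rng: random.Random) -> dict:
    """Sample one episode's physical parameters (illustrative ranges)."""
    return {
        "link_mass_scale": rng.uniform(0.9, 1.1),     # ±10% mass error
        "joint_friction": rng.uniform(0.0, 0.05),
        "motor_gain_scale": rng.uniform(0.85, 1.15),  # actuator mismatch
        "ground_friction": rng.uniform(0.5, 1.25),
        "control_latency_s": rng.uniform(0.0, 0.02),  # up to 20 ms delay
    }

rng = random.Random(42)
params = randomize_domain(rng)  # resampled at the start of every episode
```

A policy that walks well across all of these sampled worlds tends to treat the one real robot as just another sample from the distribution, which is why no per-robot retuning is needed.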
In addition, the system incorporates high-frequency torque feedback, which corrects possible actuator modeling errors and ensures that the robot responds accurately to changes in terrain or external pushes.
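The correction loop can be sketched in a few lines. The gain and the proportional form are assumptions for illustration, not Figure's controller: a fast inner loop compares the torque the policy commanded with the torque actually measured, and adjusts the command to close the gap left by an imperfect actuator model.

```python
def torque_feedback_step(desired: float, measured: float,
                         kp: float = 0.5) -> float:
    """One inner-loop update: nudge the command toward the desired torque.

    kp is an illustrative proportional gain; a real controller runs this
    at high frequency so modeling errors are corrected within milliseconds.
    """
    error = desired - measured
    return desired + kp * error

# A hypothetical actuator that delivers only 80% of what it is commanded:
command = 10.0
measured = 0.8 * command            # 8.0 actually produced
corrected = torque_feedback_step(command, measured)  # 11.0
```

The corrected command overshoots the nominal one, compensating for the weak actuator; run fast enough, this keeps the delivered torque close to what the policy intended.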
The engineering behind the natural walk
The company has designed the control system so that the robot can walk stably, while also emulating the stylistic movements that characterize human walking.
These movements include correct leg alignment, coordination between arm and leg movements, and heel strikes on the ground, features that make the robot move like a real person.
To optimize these movements, the training process uses human reference trajectories that the robot learns to follow in simulation, an approach that produces a more natural gait.
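A common way to combine reference tracking with a task objective is a weighted reward. The weights and exponential terms below are assumptions typical of imitation-style RL locomotion, not Figure's exact objective: the policy is rewarded both for matching human joint trajectories and for achieving the commanded walking speed.

```python
import math

def walking_reward(joint_pos, ref_joint_pos, vel, target_vel,
                   w_imitate=0.7, w_task=0.3):
    """Toy reward: imitate a human reference while hitting a target speed."""
    # Squared error against the human reference pose at this timestep.
    imitation_err = sum((q - r) ** 2 for q, r in zip(joint_pos, ref_joint_pos))
    r_imitate = math.exp(-5.0 * imitation_err)          # 1.0 when on-reference
    r_task = math.exp(-2.0 * (vel - target_vel) ** 2)   # 1.0 at target speed
    return w_imitate * r_imitate + w_task * r_task

# Perfectly tracking both the reference pose and the target speed scores 1.0:
r = walking_reward([0.1, -0.2], [0.1, -0.2], vel=1.0, target_vel=1.0)
```

Shaping the reward around a human reference is what biases the learned gait toward heel strikes and arm swing rather than whatever arbitrary motion happens to maximize forward speed.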
A “natural walk” from a robot. Source: Figure
Through this reinforcement learning architecture, Figure AI has demonstrated that its technology can be replicated across a fleet of robots. By applying the same neural network to multiple robots, they are able to learn autonomously and adapt to physical or environmental variations, without requiring manual adjustments for each one.
Source and photo: Figure