For years, researchers in artificial intelligence and robotics have strived to develop versatile embodied agents that can perform tasks in the real world with the agility and understanding of animals and humans.
Exploring the potential of trained mini soccer robots
A mini soccer robot equipped with artificial intelligence falls down, but quickly gets back up to dribble and score. These robots, trained with deep reinforcement learning, exhibited unexpected behaviors during matches, such as pivoting and spinning, which are difficult to program in advance.
Researchers used a small humanoid robot trained with deep reinforcement learning to demonstrate agility and skill in one-on-one soccer matches. As published in Science Robotics, the robot showed its maneuverability by walking, turning, kicking, and quickly recovering after falling.
The robots learned to switch smoothly between actions, anticipate the ball's movement, and block their opponent's shots while playing strategically. According to the Google DeepMind team's findings, deep reinforcement learning could provide a way to train basic, reliable behaviors in humanoid robots.
Recent progress in this effort has been driven by deep reinforcement learning. While quadrupedal robots have demonstrated a variety of skills such as locomotion and object manipulation, humanoid robots have proven more difficult to control.
This is primarily due to stability and hardware limitations, which have kept the focus on basic skills and on model-based predictive control. The team instead used deep reinforcement learning to train an affordable, readily available robot for multi-robot soccer, exceeding expected levels of agility.
Training mini soccer robots for agility and adaptability
They demonstrated the robot's ability to control its movement from sensory input in both simulated and real-world settings, focusing on a simplified one-on-one soccer scenario. The training process consisted of two stages.
First, they trained the robot in two key skills: getting up after a fall and scoring against an opponent. Then, through self-play, they trained it for one-on-one matches, with opponents sampled from copies of the partially trained agent.
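In rough pseudocode, the two-stage curriculum might look something like the sketch below. This is a simplified illustration of the idea, not DeepMind's actual code: the Policy class, train_step routine, task names, and step counts are all placeholder assumptions.

```python
import copy
import random

class Policy:
    """Stand-in for a neural-network policy; tracks only an update count."""
    def __init__(self):
        self.updates = 0

    def snapshot(self):
        return copy.deepcopy(self)

def train_step(policy, episode):
    """Placeholder for one reinforcement-learning update on an episode."""
    policy.updates += 1

def stage1_skill_training(policy, steps=100):
    # Stage 1: train the isolated skills (getting up, scoring) as tasks.
    for _ in range(steps):
        task = random.choice(["get_up", "score"])
        train_step(policy, episode={"task": task})

def stage2_self_play(policy, steps=1000, snapshot_every=100):
    # Stage 2: 1v1 matches against frozen snapshots of earlier versions
    # of the same policy, so opponent strength grows with the learner's.
    opponent_pool = [policy.snapshot()]
    for step in range(steps):
        opponent = random.choice(opponent_pool)
        train_step(policy, episode={"task": "match", "opponent": opponent})
        if (step + 1) % snapshot_every == 0:
            opponent_pool.append(policy.snapshot())

policy = Policy()
stage1_skill_training(policy)
stage2_self_play(policy)
```

Sampling opponents from a pool of past snapshots, rather than always playing the latest policy, keeps the opposition varied so the learner does not overfit to a single playing style.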
They used shaped rewards, randomization, and perturbations to encourage exploration and ensure safe performance in real-world settings. The resulting agent exhibited impressive, flexible movement skills, including rapid recovery from falls, walking, turning, and kicking, and could switch fluidly between these actions.
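A shaped reward in this setting is typically a weighted sum of task terms and regularization terms. The sketch below illustrates the general pattern; the specific terms and weights are assumptions for illustration, not the paper's values.

```python
def shaped_reward(obs):
    """Illustrative weighted sum of task and safety terms."""
    return (
        10.0 * obs["goal_scored"]             # sparse task reward
        + 1.0 * obs["ball_velocity_to_goal"]  # push the ball toward the goal
        + 0.5 * obs["upright"]                # encourage staying on two feet
        - 0.1 * obs["joint_torque_sq"]        # penalize violent motions (safety)
    )

# Example: reward for one timestep's observation-derived features.
print(shaped_reward({
    "goal_scored": 0.0,
    "ball_velocity_to_goal": 0.8,
    "upright": 1.0,
    "joint_torque_sq": 2.5,
}))
```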
After simulation training, the agent transferred smoothly to the real robot. This successful transfer was facilitated by a combination of targeted dynamics randomization, perturbations during training, and high-frequency control.
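The first two ingredients can be sketched as follows. The simulator interface, parameter names, and ranges below are assumptions chosen to illustrate the technique, not the team's actual configuration.

```python
import random

class Sim:
    """Stand-in for a physics simulator exposing tunable dynamics."""
    def __init__(self):
        self.params = {}

    def set_param(self, name, value):
        self.params[name] = value

    def apply_push(self, body, force):
        pass  # a real simulator would apply an external force here

def randomize_dynamics(sim):
    # Resample physical parameters each episode so the policy cannot
    # overfit to a single simulated robot.
    sim.set_param("floor_friction", random.uniform(0.5, 1.0))
    sim.set_param("torso_mass_scale", random.uniform(0.9, 1.1))
    sim.set_param("joint_damping_scale", random.uniform(0.8, 1.2))
    sim.set_param("actuation_delay_ms", random.uniform(10.0, 50.0))

def maybe_push(sim, prob=0.02, max_force=20.0):
    # Occasionally shove the torso so the policy learns to recover,
    # which also hardens it against real-world contact.
    if random.random() < prob:
        force = [random.uniform(-max_force, max_force) for _ in range(3)]
        sim.apply_push("torso", force)

sim = Sim()
randomize_dynamics(sim)  # once per episode
maybe_push(sim)          # every control step
```

Because the policy never sees the same simulated dynamics twice, the real robot's physics end up looking like just another sample from the training distribution.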
In experimental matches, the robot trained with deep reinforcement learning significantly outperformed a scripted baseline controller: it walked 181 percent faster, turned 302 percent faster, kicked the ball 34 percent faster, and recovered from falls 63 percent faster.
The researchers also observed emergent movements, such as pivots and spins, that are difficult to program manually. These findings suggest that deep reinforcement learning can effectively teach humanoid robots basic behaviors and pave the way for more complex skills in dynamic environments.
Future research may extend training to teams of multiple agents; initial experiments showed a division of labor emerging, though with reduced agility. The researchers also aim to train agents using only onboard sensors, which raises the challenge of interpreting egocentric camera observations without external state information.