Look at them go!
Bot v. Bot
A team of researchers from Google's DeepMind AI lab has programmed a pair of little humanoid robots to play a classic one-versus-one soccer match — and the results are absolutely adorable.
A video of these robotic little tykes shows them wobbling around with impressive, almost human-like agility, though the movements are more akin to an athletic toddler than a professional soccer player.
It’s all very endearing, but don’t let the cuteness of it overshadow the impressive skill on display. The robots are able to walk, turn, and run relatively efficiently, and transition between these movements with impressive fluidity.
They can also kick and shoot the ball at a goal, thanks to cutting-edge machine learning tech. Dribbling, however, remains far beyond their capabilities. After all, little Lionel Messis they are not.
Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
investigated the application of Deep Reinforcement Learning (Deep RL) for low-cost, miniature humanoid hardware in a dynamic environment, showing the method can synthesize sophisticated and safe… pic.twitter.com/sMMaeCSro3
— AK (@_akhaliq) April 27, 2023
The Training Ground
Now, you may be eager to point out that the miniature bipeds aren’t quite on the level of, say, Boston Dynamics’ uncannily capable humanoids, and you’d be right.
But the point of this project, dubbed OP3 Soccer, is to demonstrate the effectiveness of a form of machine learning called "deep reinforcement learning" on low-cost hardware.
As such, the researchers used an affordable and respectably capable open-source robot platform, rather than designing a robot from scratch.
And clearly, their work is paying off. Not only are the artificial ankle-biters agile, but they also exhibit a strategic understanding of the game, the researchers said. They can mark the other player with the ball, defend their own goal, block shots and, most importantly, aim for the back of the opponent's net.
According to the researchers' not-yet-peer-reviewed paper, the robots were trained extensively in simulation, with simple rewards and penalties used to encourage more optimal behavior.
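The article doesn't spell out the exact reward terms the team used, but reward shaping for deep reinforcement learning in a setup like this typically looks something like the minimal sketch below. The field names, weights, and penalty terms here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative reward shaping for a simulated 1v1 soccer agent.
# The term names and weights below are assumptions for this sketch,
# not values taken from the DeepMind paper.

def shaped_reward(obs: dict) -> float:
    """Combine simple rewards and penalties into one scalar training signal."""
    reward = 0.0

    # Reward: pushing the ball toward the opponent's goal.
    reward += 1.0 * obs["ball_velocity_toward_goal"]

    # Reward: scoring earns a large one-time bonus.
    if obs["scored"]:
        reward += 100.0

    # Penalty: falling over wastes time and risks hardware damage.
    if obs["fallen"]:
        reward -= 10.0

    # Penalty: discourage jerky, high-torque actions (safer on a real robot).
    reward -= 0.01 * float(np.sum(np.square(obs["joint_torques"])))

    return reward


if __name__ == "__main__":
    # Toy observation showing how the terms combine into one scalar.
    example_obs = {
        "ball_velocity_toward_goal": 0.4,
        "scored": False,
        "fallen": False,
        "joint_torques": np.array([0.2, -0.1, 0.3]),
    }
    print(shaped_reward(example_obs))
```

In practice, a signal like this is fed to an off-the-shelf deep RL algorithm during millions of simulated matches before any policy touches real hardware.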
It was through the follow-up real-world training, though, that they really began to shine. The robust bots gradually learned to be even more efficient than in the simulations, walking 156 percent faster and taking 63 percent less time to stand up, the researchers said.
So beware, soccer superstars: the robots are coming after your jobs — just not anytime soon.
More on robots: $75,000 FDNY Robodog Goes to Work, Falls Over Almost Immediately