A four-legged robotic system capable of playing soccer on a variety of terrains

"DribbleBot" can maneuver a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics.


It’s a familiar scene, with one twist. The sun shines down on your face, and the smell of grass fills the air. You look around, and a four-legged robot is rushing toward you, dribbling with determination.

Researchers at MIT’s Improbable Artificial Intelligence Lab have created a legged robotic system to dribble a soccer ball like a human.

“DribbleBot” uses onboard sensing and computing to traverse various natural terrains such as sand, gravel, mud, and snow and adapt to their varying impact on the ball’s motion.

Researchers have long worked on programming robots to play soccer. This team, however, wanted the robot to learn automatically how to actuate its legs while dribbling, discovering hard-to-script skills for responding to diverse terrains such as snow, gravel, sand, grass, and pavement.

Four thousand copies of the robot are simulated in parallel in real time, making data collection roughly 4,000 times faster than it would be with a single robot.
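The parallel-simulation idea can be sketched in a few lines. This is a hypothetical illustration, not the actual simulator: real trainers batch full physics on a GPU, while here plain NumPy arrays stand in for the batched state, and the `BatchedSim` class and its 2-D "robot" state are invented for the example.

```python
import numpy as np

NUM_ENVS = 4000  # number of robot copies simulated at once

class BatchedSim:
    """Toy batched simulator: every array row is one robot's state."""

    def __init__(self, num_envs, dt=0.02):
        self.num_envs = num_envs
        self.dt = dt
        self.pos = np.zeros((num_envs, 2))  # x, y position per environment
        self.vel = np.zeros((num_envs, 2))  # velocity per environment

    def step(self, actions):
        # One vectorized step advances every environment at once, which is
        # what makes data collection ~num_envs times faster than one robot.
        self.vel += actions * self.dt
        self.pos += self.vel * self.dt
        return self.pos.copy()

sim = BatchedSim(NUM_ENVS)
actions = np.random.uniform(-1.0, 1.0, size=(NUM_ENVS, 2))
obs = sim.step(actions)
print(obs.shape)  # (4000, 2): one observation per simulated robot
```

Every experience gathered in any of the 4,000 copies feeds the same learner, which is why wall-clock "days" of practice compress so dramatically.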

The robot starts out not knowing how to dribble the ball. It receives positive reinforcement when it succeeds and negative reinforcement when it fails; in effect, it is searching for the best sequence of forces to apply with its legs.

MIT Ph.D. student Gabe Margolis, who co-led the work with Yandong Ji, research assistant in the Improbable AI Lab, said, “One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior. Once we’ve designed that reward, it’s practice time for the robot: In real-time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
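A minimal sketch of the kind of reward Margolis describes might grade the policy on how closely the ball's velocity tracks a commanded velocity. The exponential tracking term below is a common shape in legged-robot RL, but the function, its `sigma` scale, and the idea that this is the only term are assumptions for illustration; DribbleBot's actual reward is not specified here.

```python
import numpy as np

def dribbling_reward(ball_vel, target_vel, sigma=0.25):
    """Reward is 1.0 for perfect velocity tracking, decaying smoothly
    toward 0.0 as the ball drifts away from the commanded velocity."""
    err = np.sum((ball_vel - target_vel) ** 2)
    return float(np.exp(-err / sigma))

# Perfect tracking earns the maximum reward of 1.0 ...
print(dribbling_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 1.0
# ... and partial tracking earns a smoothly smaller reward.
print(dribbling_reward(np.array([0.5, 0.3]), np.array([1.0, 0.0])))
```

A smooth, dense reward like this gives the learner a useful gradient at every step, rather than only a signal when a goal-like event finally happens.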

The team also built a recovery controller into the bot’s system. If the robot falls, this controller gets it back on its feet and hands control back to the dribbling controller so it can keep pursuing the ball, helping it cope with disruptions and unfamiliar terrain.
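The hand-off between the two controllers can be sketched as a small state machine. The `Supervisor` class, its mode names, and the `has_fallen`/`is_upright` signals are hypothetical stand-ins for whatever switching logic the real system uses; the controllers themselves are stubbed out.

```python
class Supervisor:
    """Toy supervisor that switches between a dribbling policy and a
    fall-recovery policy based on the robot's state."""

    def __init__(self):
        self.mode = "dribble"

    def select_controller(self, has_fallen, is_upright):
        if self.mode == "dribble" and has_fallen:
            self.mode = "recover"   # hand control to the recovery controller
        elif self.mode == "recover" and is_upright:
            self.mode = "dribble"   # resume pursuing the ball
        return self.mode

sup = Supervisor()
print(sup.select_controller(has_fallen=False, is_upright=True))   # dribble
print(sup.select_controller(has_fallen=True,  is_upright=False))  # recover
print(sup.select_controller(has_fallen=False, is_upright=True))   # dribble
```

Keeping recovery as a separate controller means a fall never corrupts the dribbling policy's behavior; each policy only has to be good at its own job.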

Pulkit Agrawal, an MIT professor, CSAIL principal investigator, and director of the Improbable AI Lab, develops algorithms that aim to give legged robots autonomy in difficult and complex terrains beyond the reach of current robotic systems.


Agrawal said, “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

Because terrain variations often affect the ball’s dynamics more than they affect the robot’s own locomotion, the soccer test may be more sensitive to terrain variation than walking alone.

The fascination with robot quadrupeds and soccer runs deep. Canadian professor Alan Mackworth first noted it in a paper titled “On Seeing Robots,” presented at VI-92 in 1992. Later, at a workshop on “Grand Challenges in Artificial Intelligence” organized by Japanese researchers, participants discussed using soccer to advance science and technology.

A year later, the project was launched, and the Robot J-League and global excitement quickly ensued.

The resulting robot combines locomotion with dexterous manipulation in a single platform.

Ji said, “Past approaches simplify the dribbling problem, making a modeling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously. That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task which combines aspects of locomotion and dexterous manipulation together.”

The robot carries a set of sensors that let it perceive its environment, “feel” its own position, and “see” some of its surroundings. It is equipped with actuators that allow it to apply forces and move itself and objects. The computer, or “brain,” sits between the sensors and actuators, tasked with converting sensor data into the actions it applies through the motors.

When the robot runs on snow, it cannot see the snow but can feel it through its motor sensors. Soccer, however, is a harder task than walking, so the team added cameras on the robot’s head and body, giving it vision as a new sensory modality.

Margolis said, “Our robot can go in the wild because it carries all its sensors, cameras, and compute on board. That required some innovations in getting the whole controller to fit onto this onboard computer. That’s one area where learning helps because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This starkly contrasts with most robots today: Typically, a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So, the whole thing is weighty, hard to move around.”
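The "lightweight neural network" Margolis mentions could look like a small multilayer perceptron mapping noisy sensor readings to joint targets. The layer sizes, the 12 joint outputs, and the plain NumPy forward pass below are illustrative assumptions, not DribbleBot's actual architecture.

```python
import numpy as np

# Hypothetical onboard policy: a tiny MLP cheap enough for real-time
# control on an embedded computer. Weights here are random stand-ins
# for parameters that training would normally produce.
rng = np.random.default_rng(0)

OBS_DIM, HIDDEN, ACT_DIM = 48, 128, 12   # e.g. 12 joint position targets

W1 = rng.normal(0.0, 0.1, (OBS_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, ACT_DIM))
b2 = np.zeros(ACT_DIM)

def policy(obs):
    h = np.tanh(obs @ W1 + b1)   # one hidden layer keeps inference fast
    return np.tanh(h @ W2 + b2)  # tanh bounds the joint targets to [-1, 1]

noisy_obs = rng.normal(0.0, 1.0, OBS_DIM)  # sensor noise is part of the job
action = policy(noisy_obs)
print(action.shape)  # (12,)
```

Training on simulated sensor noise is what lets such a small network stay robust when the real robot's readings jitter as it moves.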


The controller has not yet been trained in simulated environments that include slopes or stairs; the robot estimates terrain contact properties such as friction, but it cannot perceive the terrain’s geometry. If there is a step up, for example, the robot gets stuck and cannot lift the ball over it. That is something the team wants to explore in the future.

The researchers are excited to apply what they learned while developing DribbleBot to other tasks that require combined locomotion and object manipulation, such as quickly transporting various objects from one location to another.
