Model helps robots navigate more like humans do

In simulations, robots move through new environments by exploring, observing, and drawing from learned experiences.

MIT researchers have devised a way to help robots navigate environments more like humans do.

The new model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they have learned before in similar situations.

The model is embedded in motion-planning algorithms that build a tree of possible decisions, branching out until they find good paths for navigation.

A robot that needs to explore a room to reach a door, for example, will build a step-by-step search tree of possible movements and then execute the best path to the door, taking various constraints into account.
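The search-tree idea can be sketched in a few lines of Python. This is a generic breadth-first planner over a toy grid, not the authors' algorithm: the tree of possible moves branches out from the start cell until it reaches the door, and the chosen path is read back through the tree.

```python
from collections import deque

def plan_to_goal(grid, start, goal):
    """Breadth-first search over a grid: grow a tree of possible moves
    from `start` until `goal` is reached. `grid` is a list of strings;
    '#' cells are blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    parent = {start: None}               # the search tree: child -> parent
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                 # walk back up the tree for the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)   # branch the tree
                frontier.append((nr, nc))
    return None                          # no path to the goal

room = ["....#",
        ".##.#",
        "....."]
print(plan_to_goal(room, (0, 0), (2, 4)))
```

Breadth-first search explores every branch uniformly, which is exactly the inefficiency the MIT work targets: without learned guidance, the planner re-explores the whole space every time.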

Co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory said, “Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents. The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”

The researchers developed a model that combines a planning algorithm with a neural network. The network learns to recognize paths likely to lead to the best outcome, and that knowledge guides the robot's movement through an environment.
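One common way to combine a planner with a learned model is to let a scoring function decide which branch of the tree to grow next. The sketch below uses best-first search with a placeholder score; in the real system a trained neural network would supply this estimate, but here `learned_score` is simply straight-line distance to the goal, an assumption for illustration only.

```python
import heapq
import math

def learned_score(cell, goal):
    """Stand-in for the trained network's estimate of how promising a
    position is. Euclidean distance to the goal is used as a placeholder."""
    return math.dist(cell, goal)

def guided_search(grid, start, goal):
    """Best-first search: the score decides which branch of the planning
    tree to expand next, so promising paths are explored first."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(learned_score(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                heapq.heappush(frontier,
                               (learned_score((nr, nc), goal),
                                (nr, nc), path + [(nr, nc)]))
    return None

maze = ["....#",
        ".##.#",
        "....."]
print(guided_search(maze, (0, 0), (2, 4)))
```

The point of the learned component is that expansion is no longer uniform: branches the model rates as unpromising can be deferred or skipped, which is what makes planning more efficient.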

There are two advantages to the model:

  • Navigating through challenging rooms with traps and narrow passages.
  • Navigating areas while avoiding collisions with other agents.

Yen-Ling Kuo, a Ph.D. student at CSAIL and first author on the paper, said, “When humans interact with the world, we see an object we’ve interacted with before or are in some location we’ve been to before, so we know how we’re going to act. The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”

Scientists tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.

Barbu said, “Situations like roundabouts are hard because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on. You eventually discover your first action was wrong because later on, it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”

Results show that the model can capture enough information about the future behavior of the other agents (cars) to cut off the planning process early while still making good navigation decisions, which makes planning more efficient. Moreover, the researchers only needed to train the model on a few examples of roundabouts with only a few cars.
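The early-cutoff idea can be illustrated with a toy check, not the paper's actual method: given predicted trajectories for the other agents (which in the real system come from observation and learning), a candidate plan is abandoned at the first predicted conflict instead of being simulated to the end.

```python
def safe_plan(plan, others, min_gap=1.0):
    """Evaluate a candidate plan step by step, cutting off as soon as a
    predicted collision appears. `plan` is a list of (x, y) waypoints for
    our agent; `others` is a list of predicted (x, y) trajectories, one
    position per time step, for the other agents."""
    for t, (x, y) in enumerate(plan):
        for traj in others:
            px, py = traj[t]
            # Early cutoff: abandon this branch at the first conflict.
            if (x - px) ** 2 + (y - py) ** 2 < min_gap ** 2:
                return False, t          # rejected after evaluating step t
    return True, len(plan)               # whole plan checked and accepted

# Two candidate plans; the second crosses a predicted trajectory.
other = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
good = [(0.0, 2.0), (1.0, 2.0), (2.0, 2.0)]
bad = [(0.0, 2.0), (1.0, 0.5), (2.0, 2.0)]
print(safe_plan(good, [other]))
print(safe_plan(bad, [other]))
```

Pruning bad branches after one or two predicted steps, rather than rolling every plan out to completion, is what keeps the number of candidate futures manageable as more cars are added.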

Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments.

A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).