A new algorithm can accurately predict a person’s trajectory in real time

Helping robots to work seamlessly alongside people.


MIT researchers developed an algorithm that tells robots where the humans around them are headed. The trajectory-prediction tool may help humans and robots work together in factory or home settings.

The new algorithm accurately aligns partial trajectories in real time, allowing motion predictors to accurately anticipate the timing of a person’s motion.

When the team applied the new algorithm to the BMW factory floor experiments, they found that, instead of freezing in place, the robot simply rolled on and was safely out of the way by the time the person walked by again.

“This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which is a core part of human motion,” says Julie Shah, associate professor of aeronautics and astronautics at MIT. “This technique is one of the many ways we’re working on robots better understanding people.”

Back in 2018, in response to the factory floor setting, the team of MIT researchers and auto manufacturer BMW built a robot on rails, designed to deliver parts between workstations. The robot was programmed to stop momentarily if a person passed by. In practice, however, the robot would often freeze in place, overly cautious, long before a person had crossed its path. In a real manufacturing setting, such unnecessary pauses could accumulate into significant inefficiencies.

The robot’s trajectory alignment algorithm could reasonably predict where a person was headed, but it couldn’t anticipate how long that person would spend at any point along their predicted path.

The solution:

The new algorithm is inspired by algorithms from music and speech processing, which are designed to align two complete time series, or sets of related data. The researchers used a similar alignment algorithm to sync up real-time and previously recorded measurements of human motion, and thereby predict where a person will be.
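The article doesn’t name the specific alignment algorithm, but the classic technique of this kind from speech processing is dynamic time warping (DTW), which finds the lowest-cost way to stretch one time series against another. A minimal illustrative sketch (the function name and toy data are not from the MIT system):

```python
def dtw_distance(a, b):
    """Align two complete 1-D time series and return the total alignment cost.

    cost[i][j] holds the best cost of aligning the first i samples of `a`
    with the first j samples of `b`.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0, because DTW lets the repeated sample match at no cost. Note that standard DTW assumes both series are complete, which, as the article explains next, is exactly what breaks down for live human motion.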

Human motion, however, can be messy and highly variable compared to music or speech. A person may also move slightly differently each time they repeat the same task.

Project lead and graduate student Przemyslaw “Pem” Lasota says that a solely distance-based algorithm, like the existing one, can easily get confused in certain common situations.

The new “partial trajectory” algorithm aligns segments of a person’s trajectory in real time with a library of previously collected reference trajectories. Crucially, it aligns trajectories in both distance and timing, and in so doing can accurately anticipate stops and overlaps in a person’s path.

“Say you’ve executed this much of a motion,” Lasota explains. “Old techniques will say, ‘This is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘Based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”
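Lasota’s example can be sketched in code: score each candidate progress point on a prerecorded reference trajectory by both its spatial distance and how plausible its timestamp is given the elapsed time, so a spatially close point on the return leg is rejected right after the motion starts. The function, its weighting, and the data below are illustrative assumptions, not the actual MIT implementation:

```python
def estimate_progress(position, elapsed_time, reference, time_weight=1.0):
    """Estimate how far along a reference motion a person is.

    reference: list of (ref_position, ref_time) samples from a previously
    recorded trajectory. Returns the index of the most plausible progress
    point, trading off spatial distance against timing mismatch.
    """
    best_idx, best_score = 0, float("inf")
    for idx, (ref_pos, ref_time) in enumerate(reference):
        spatial = abs(position - ref_pos)            # distance term
        temporal = abs(elapsed_time - ref_time)      # timing term
        score = spatial + time_weight * temporal
        if score < best_score:
            best_idx, best_score = idx, score
    return best_idx

# A reach-and-return motion: positions on the way out and the way back
# overlap, so distance alone cannot tell outbound from inbound.
reference = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0), (0.5, 3.0), (0.0, 4.0)]

estimate_progress(0.5, 1.1, reference)  # index 1: still on the way out
estimate_progress(0.5, 2.9, reference)  # index 3: now on the way back
```

With distance alone, the query position 0.5 ties between indices 1 and 3; the timing term breaks the tie in favor of the temporally plausible point, which is the behavior the quote above describes.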


The team tested the algorithm on two human motion datasets: one in which a person intermittently crossed a robot’s path in a factory setting, and another in which the group had previously recorded hand movements of participants reaching across a table to install a bolt, which a robot would then secure by brushing sealant on it.

Importantly, the team’s algorithm made better estimates of a person’s progress through a trajectory on both datasets, compared with two commonly used partial trajectory alignment algorithms.

Furthermore, when the team integrated the alignment algorithm with their motion predictors, the robot could more accurately anticipate the timing of a person’s motion. The robot was also less prone to freezing in place in the factory floor scenario; in fact, it smoothly resumed its task shortly after a person crossed its path.

Shah says the algorithm will be a key tool in enabling robots to recognize and respond to patterns of human movements and behavior. Ultimately, this can help humans and robots work together in structured environments, such as factory settings and even, in some cases, the home.

“This technique could apply to any environment where humans exhibit typical patterns of behavior,” Shah says. “The key is that the [robotic] system can observe patterns that occur over and over, so that it can learn something about human behavior. This is all in the vein of the robot better understanding aspects of human motion, to be able to collaborate with us better.”

This research was funded, in part, by a NASA Space Technology Research Fellowship and the National Science Foundation.

Shah and her colleagues will present their results this month at the Robotics: Science and Systems conference in Germany.
