Study finds security flaws in first- and next-gen LiDAR systems

Autonomous vehicle technology is vulnerable to road object spoofing and vanishing attacks.


LiDAR (Light Detection and Ranging) is one of the most significant sensing innovations of the past decade. Its precise long- and wide-range 3D sensing has proven invaluable in autonomous driving and has enabled the rapid deployment of autonomous vehicles.

However, LiDAR is not immune to malicious attack, and its security has become a growing concern. A University of California, Irvine-led research team has demonstrated the potentially hazardous vulnerabilities associated with the technology.

The team, which includes computer scientists and electrical engineers at UCI and Japan’s Keio University, has shown how lasers can be used to fool LiDAR into “seeing” objects that are not present and into missing objects that are: deficiencies that can cause unwarranted, unsafe braking or collisions.

In the study, the researchers investigated spoofing attacks on nine commercially available LiDAR systems and found that first-generation and even later-generation versions exhibit safety deficiencies.

“This is, to date, the most extensive investigation of LiDAR vulnerabilities ever conducted,” said lead author Takami Sato. “Through a combination of real-world testing and computer modeling, we were able to come up with 15 new findings to inform the design and manufacture of future autonomous vehicle systems.”

According to the researchers, LiDAR is a preferred navigation and sensing technology used in various autonomous vehicle systems such as Google’s Waymo and General Motors’ Cruise, and it is also an important component in consumer-operated models sold by companies like Volvo, Mercedes-Benz, and Huawei.

To probe the security of first-generation LiDAR systems, the researchers mounted an attack known as “fake object injection,” in which the LiDAR sensor is tricked into perceiving a pedestrian or the front of another car when nothing is there. The LiDAR system then reports the false hazard to the autonomous vehicle’s computer, which can trigger unsafe behavior such as emergency braking.
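A minimal sketch of the idea behind fake object injection (the function names, beam counts, and thresholds here are illustrative, not taken from the study): a spoofer fires laser pulses timed to arrive before the genuine echoes on a handful of beams, so the sensor records a closer return than reality, and simple downstream planner logic reacts to the phantom obstacle.

```python
import numpy as np

def lidar_scan(real_ranges):
    """Genuine per-beam range returns in meters; inf means no return."""
    return np.array(real_ranges, dtype=float)

def inject_fake_object(scan, beams, fake_range):
    """Spoofer overwrites the targeted beams with an earlier (closer)
    echo, making a nonexistent obstacle appear on those beams."""
    spoofed = scan.copy()
    for b in beams:
        spoofed[b] = min(spoofed[b], fake_range)  # the earlier echo wins
    return spoofed

def should_brake(scan, threshold=10.0):
    """Toy planner logic: brake if any return is closer than threshold."""
    return bool(scan.min() < threshold)

clear_road = lidar_scan([np.inf] * 32)  # empty road ahead
attacked = inject_fake_object(clear_road, beams=range(10, 16), fake_range=5.0)

print(should_brake(clear_road))  # False: nothing ahead
print(should_brake(attacked))    # True: phantom object triggers braking
```

The sketch compresses the physics (pulse timing becomes a direct range overwrite), but it captures why the attack is dangerous: the planner cannot distinguish an attacker-chosen echo from a real one.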

“This chosen-pattern injection scenario works only on first-generation LiDAR systems; newer-generation versions employ timing randomization and pulse fingerprinting to combat this line of attack,” said Sato.
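A rough sketch of why timing randomization blunts this attack (a simplified illustrative model, not the actual countermeasure implementation): if the sensor jitters each pulse's firing time by a secret random offset, a spoofer that assumes the nominal schedule produces echoes whose apparent ranges scatter rather than forming a stable fake object, so they fail consistency checks.

```python
import random

C = 3e8  # speed of light, m/s

def apparent_range(fire_time, echo_time):
    """Range the sensor infers from pulse round-trip time."""
    return (echo_time - fire_time) * C / 2

# Spoofer strategy: reply at a fixed delay after the *nominal* firing
# schedule, aiming to fake an object at 5 m.
fake_delay = 2 * 5.0 / C

random.seed(0)
ranges = []
for i in range(6):
    scheduled = i * 1e-4                  # nominal firing schedule
    jitter = random.uniform(0, 5e-7)      # secret per-pulse jitter
    fire_time = scheduled + jitter        # actual (randomized) firing
    echo_time = scheduled + fake_delay    # spoofer assumes no jitter
    ranges.append(apparent_range(fire_time, echo_time))

# Without jitter, every value would be exactly 5.0 m; with jitter the
# spoofed returns scatter widely (some even come out negative), so the
# sensor can reject them instead of clustering them into an object.
print([round(r, 1) for r in ranges])
```

Pulse fingerprinting works analogously: the spoofer cannot reproduce a secret per-pulse signature, so injected echoes are identifiable, though the study shows such assumptions can still be broken by stronger attackers.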

However, the UCI and Keio University researchers found another way to confuse next-generation LiDAR: using a custom-designed laser and lens apparatus, they were able to conceal five real cars from the LiDAR system’s sensors.

“The findings in this paper unveil unprecedentedly strong attack capabilities on LiDAR sensors, which can allow direct spoofing of fake cars and pedestrians and the vanishing of real cars in the AV’s eye. These can be used to directly trigger various unsafe AV driving behaviors such as emergency brakes and front collisions,” said senior co-author Qi Alfred Chen, UCI assistant professor of computer science.

The researchers hope their study will raise awareness of the vulnerability of LiDAR sensors and autonomous vehicles to laser attacks, and that it will inspire the development of countermeasures.

In addition, the team plans to continue researching other aspects of AV security, such as communication and software vulnerabilities, to ensure the safety and security of autonomous vehicles in the future.


  1. Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro Yoshioka, Qi Alfred Chen. LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies.