The debate surrounding autonomous driving currently centers on when the technology will become advanced enough to usher in an era where cars effectively drive themselves from point A to point B with little to no human interaction.

Interestingly, though, one aspect of autonomous driving that has scarcely been mentioned is the potential danger of hackers successfully manipulating the underlying software of a self-driving car.


While this type of hack may sound far-fetched at first glance, researchers at this year’s DEF CON hacking conference in Las Vegas plan to demonstrate how they were able to use a series of sensors, lights and radios to trick a Tesla Model S’ Autopilot software into seeing objects that weren’t really there and even ignoring obstacles altogether. It goes without saying that the implications of this type of hack are extremely serious.

A captivating report from Wired lays out how researchers from the University of South Carolina and Zhejiang University managed to carry out what effectively amounts to an exceedingly dangerous Tesla hack.

Tesla’s autopilot detects the car’s surroundings three different ways: with radar, ultrasonic sensors, and cameras. The researchers attacked all of them, and found that only their radar attacks might have the potential to cause a high-speed collision. They used two pieces of radio equipment—a $90,000 signal generator from Keysight Technologies and a VDI frequency multiplier costing several hundred dollars more—to precisely jam the radio signals that the Tesla’s radar sensor, located under its front grill, bounces off of objects to determine their position. The researchers placed the equipment on a cart in front of the Tesla to simulate another vehicle. “When there’s jamming, the ‘car’ disappears, and there’s no warning,” Xu says.
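To make the mechanics a bit more concrete, here’s a toy sketch of why flooding the radar band makes a target vanish without warning. This is purely illustrative Python, not Tesla’s actual detection logic; the function name, power figures, and threshold are all assumptions for illustration:

```python
# Toy model: a radar detector declares a target only when the reflected
# echo stands far enough above the noise floor. Jamming doesn't need to
# "hack" anything -- it just raises the noise floor until real echoes
# no longer clear the detection threshold. (All values are illustrative.)

def detect_target(echo_power_dbm, noise_floor_dbm, snr_threshold_db=10.0):
    """Return True if the echo's signal-to-noise ratio clears the threshold."""
    return echo_power_dbm - noise_floor_dbm >= snr_threshold_db

# Normal conditions: the cart ahead produces a clear echo (30 dB SNR).
print(detect_target(echo_power_dbm=-60.0, noise_floor_dbm=-90.0))  # True

# Under jamming the apparent noise floor jumps; the same echo now has
# only 5 dB of headroom, so the "car" simply disappears from the display.
print(detect_target(echo_power_dbm=-60.0, noise_floor_dbm=-65.0))  # False
```

The unsettling part is that nothing in this failure mode looks like an error to the system: the target isn’t misclassified, it just never registers.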

What’s more, the researchers were able to use a much cheaper method to tinker with the Model S’ ultrasonic sensors, which come into play when a Tesla is in Summon mode or is parking on its own. Using a tiny computer and an estimated $40 worth of equipment, the researchers plan to show how they can trick a self-parking Tesla into completely missing an obstacle in its path.
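The attack exploits how ultrasonic sensors work in the first place: they time an echo’s round trip to estimate distance, and if no echo comes back within the listening window, the car assumes the path is clear. A minimal sketch of that principle, purely illustrative (not Tesla’s code; the names and behavior here are assumptions):

```python
# Toy ultrasonic ranging model: distance is half the round-trip travel
# time multiplied by the speed of sound. If an attacker drowns out or
# cancels the real echo, the sensor sees no return at all and the
# obstacle effectively ceases to exist. (Values are illustrative.)

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 C

def distance_from_echo(echo_delay_s):
    """Convert an echo's round-trip delay to a distance in meters."""
    if echo_delay_s is None:      # no echo heard within the window
        return None               # interpreted as "path is clear"
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

# Genuine obstacle about half a meter away: round trip of ~2.9 ms.
print(distance_from_echo(0.0029))  # ~0.50 m

# With the real echo masked by attacker-generated ultrasound, the
# sensor reports nothing -- and a self-parking car keeps rolling.
print(distance_from_echo(None))
```

Because the hardware to emit interfering ultrasound is cheap, this half of the research is arguably more worrying than the six-figure radar rig.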

So is this research cause for concern just yet? Not quite. After all, the proof of concept a) largely involves incredibly expensive equipment and b) was performed on stationary cars. It’s worth noting, however, that while there’s no indication that the aforementioned hacks can actually work on a Tesla moving at a high rate of speed, the researchers involved believe it is possible.

Regardless, the broader implications here are incredibly serious, especially as automakers move at breakneck speed toward a future where self-driving cars are commonplace. That said, many of the attack vectors the researchers attempted yielded dead ends, but the discussion their research raises is nonetheless extremely important.

Tesla, for what it’s worth, issued a comment on the new research: “We appreciate the work Wenyuan and team put into researching potential attacks on sensors used in the Autopilot system. We have reviewed these results with Wenyuan’s team and have thus far not been able to reproduce any real-world cases that pose risk to Tesla drivers.”

While we’re used to and can live with software companies engaging in cat and mouse games with hackers, the stakes are obviously much higher when cars that weigh thousands of pounds and travel at high speeds are involved.

Make sure to hit the source link below for the full rundown on how researchers managed to hack a Tesla Model S.
