When the New England Motor Press Association held its annual technology conference at MIT this May, Research Scientist Bryan Reimer walked the audience through the six levels of vehicle autonomy, from Level 0 (no automation) to Level 5 (full automation).
Currently, we have a lot of features and technology that help brake and steer the car under certain circumstances. That puts us at Level 1. There isn’t a car on sale today that reaches Level 2, where at least two simultaneous functions are wholly managed by the vehicle in specific scenarios. The only feature available today that completely manages a single function is adaptive cruise control, which can maintain the speed of a car down to zero in certain circumstances.
What most people think of as an “Autonomous Vehicle” is Level 5 — the Google car without a steering wheel that completely replaces the human driver. It’s something many people are interested in for those long, awful commutes in traffic.
In practical terms, though, we’re ages from getting there. Why? As security researchers at the University of Washington discovered, as smart as partially autonomous technology is, it’s a long way from being smart enough to operate a vehicle in anything but perfect conditions, and it’s easy to throw that technology into turmoil — sometimes with as little as a sticker placed on a stop sign.
The reason is that currently, autonomous vehicles rely on vision systems to operate. They have an object detector and a classifier. Think of it as a camera, a database of things that camera might see, and an algorithm of actions to choose when it sees those things:
Octagonal sign with the letters “STOP” = Stop the car
Upright biped on sidewalk = Keep going
Upright biped in crosswalk = Stop the car
The problem is that the classifier is only as smart as the data loaded inside it. Miss one scenario and the classifier just finds whatever matches most closely and goes with it. The researchers at the University of Washington figured out that by placing a few stickers on a stop sign, they could easily trick the car into obeying a completely different command: in this case, instead of seeing a stop sign, the car saw a 45 MPH speed limit sign 73.3 percent of the time.
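The "whatever matches most closely" behavior can be sketched with a toy nearest-match classifier. To be clear, the University of Washington attack targeted a deep neural network, not this kind of lookup; the feature vectors, labels, and distances below are invented purely to illustrate how a small perturbation can push an input closer to the wrong prototype.

```python
# Toy illustration of a nearest-match classifier being fooled.
# The feature vectors and labels are hypothetical; real vision
# systems use deep networks, but the failure mode is analogous:
# the classifier always picks the closest known match.

def classify(features, prototypes):
    """Return the label whose prototype is closest by Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))

ACTIONS = {
    "stop_sign": "stop the car",
    "speed_limit_45": "proceed at 45 mph",
}

# Hypothetical prototype feature vectors (e.g., shape, color, text cues).
PROTOTYPES = {
    "stop_sign": [1.0, 0.9, 0.1],
    "speed_limit_45": [0.2, 0.1, 0.9],
}

clean_sign = [0.95, 0.85, 0.15]   # reads clearly as a stop sign
stickered_sign = [0.4, 0.3, 0.8]  # stickers shift the features toward the wrong prototype

print(ACTIONS[classify(clean_sign, PROTOTYPES)])      # stop the car
print(ACTIONS[classify(stickered_sign, PROTOTYPES)])  # proceed at 45 mph
```

The unsettling part is that the classifier never reports "I don't know what this is" — it simply returns its best match, and the car acts on it.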
An article in the MIT Technology Review on Google’s “autonomous” car came to very similar conclusions: the car can only function on routes that have been exhaustively prepared and mapped in advance, right down to the driveways.
The article in the MIT Technology Review isn’t critical of Google’s work toward autonomous cars. What it cautions is the reaction from the mainstream press and the general public, which appears to be suggesting that self-driving cars are a foregone conclusion. “[T]he public seems to think that all of the technology issues are solved,” says Steven Shladover, a researcher at the University of California, Berkeley’s Institute of Transportation Studies. “But that is simply not the case.”
According to the MIT Technology Review, for the Google car to operate, an incredible amount of preparation and followup needs to happen. “Google often leaves the impression that, as a Google executive once wrote, the cars can ‘drive anywhere a car can legally drive.’ However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.”
It’s similar to the conclusions from 2014’s NEMPA/MIT Technology Conference. “Despite current headlines about Google’s self-driving cars, panelists repeatedly underscored that a fully autonomous car, technologically feasible as it may seem, remains far in the future because of the infinite variables in automating the driving experience. Many technological, legal and societal hurdles remain to be overcome.”
MIT’s Dr. Bryan Reimer noted that “[t]he law of probability says it’s 100 percent certain that someone is going to walk in front of one of these autonomous test cars, or one of these cars is going to kill someone.” He notes that the legal ramifications are just now beginning to be understood. “That’s going to set off a chain-of-custody situation. Who is responsible—the engineer, the manufacturer, the driver? It’ll be an ugly legal problem and an interesting area of law where there’s a lot of money to be made.”
Even if some Google executives are overly confident about the car’s abilities, the director of the Google Car team isn’t. Chris Urmson was quite pragmatic about how much testing the cars need to go through in his interview with the MIT Technology Review. He noted that safety concerns have precluded testing even during heavy rains, let alone snow and ice. According to the article, big, open parking lots and multilevel garages are still on the list for testing, too. “I could construct a construction zone that could befuddle the car,” Urmson says.
The work from the University of Washington focused on “robust physical world attacks on machine learning models.” You can be absolutely certain that these kinds of attacks are not going to be rare. At Defcon 25 — the world’s longest-running and largest underground hacking convention, held at Caesar’s Palace in Las Vegas every July — an entire subsection of the convention was devoted to hacking cars. The Car Hacking Village at Defcon “is a group of Professional and Hobbyist car hackers who work together to provide hands-on, interactive car hacking learning, talks, hardware, and interactive contests.”
Topics at this year’s Car Hacking Village included “Attacking Wireless Interfaces in Vehicles,” “Abusing Smart Cars with QR Codes” and “Insecure By Law,” a chilling talk that discussed the many ways over-the-road trucks are vulnerable due to arcane federal and state regulations.
Cars are vulnerable today, and the vulnerability grows dramatically as cars climb into the higher levels of autonomy, as the University of Washington’s example makes obvious. Yet the automotive industry as a whole doesn’t seem to be doing much about it. A friend of BestRide attended Defcon 25 and relayed the short version of a talk early in the conference: “Automakers need to wake the f*** up.”
And they’d better do it long before they send cars out that can drive themselves.