Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk

The software inside the Uber self-driving SUV that killed an Arizona woman last year was not designed to detect pedestrians outside of a crosswalk, according to new documents released as part of a federal investigation into the incident. That's the most damning revelation in a trove of new documents related to the crash, but other details indicate that, in a variety of ways, Uber's self-driving effort failed to consider how humans actually operate.

On Tuesday, the National Transportation Safety Board, an independent government safety panel that more often probes airplane crashes and large truck incidents, posted documents related to its 20-month investigation into the Uber crash. The panel will release a final report on the incident in two weeks. More than 40 of the documents, spanning hundreds of pages, dive into the particulars of the March 18, 2018, incident, in which the Uber testing vehicle, operated by 44-year-old Rafaela Vasquez, killed a 49-year-old woman named Elaine Herzberg as she crossed a darkened road in Tempe, Arizona. At the time, a single operator monitored the experimental car's operation and software as it drove around Arizona. Video footage published in the weeks after the crash showed Vasquez reacting with shock in the moments just before the collision.

The new documents indicate that some mistakes were organizational, a matter of what experts call "safety culture." For one, the self-driving program lacked an operational safety division or a dedicated safety manager.

The most glaring mistakes were software-related. Uber’s system was not equipped to identify or deal with pedestrians walking outside of a crosswalk. Uber engineers also appear to have been so worried about false alarms that they built in an automated one-second delay between a crash detection and action. And Uber chose to turn off a built-in Volvo braking system that the automaker later concluded might have dramatically reduced the speed at which the car hit Herzberg, or perhaps avoided the collision altogether. (Experts say the decision to turn off the Volvo system while Uber’s software did its work did make technical sense, because it would be unsafe for the car to have two software “masters.”)

Much of that explains why, even though the car detected Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to "other," then to vehicle again, back to "other," then to bicycle, then to "other" again, and finally back to bicycle.

It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s “Vehicle Automation Report” reads. Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed. It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg, that it couldn’t steer around her, and that it needed to slam on the brakes.
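
The NTSB documents describe this behavior but not Uber's code. Still, the failure mode is easy to sketch. In the hypothetical Python below (all names are illustrative, not Uber's software), a tracker that throws away an object's observation history every time the classification changes can never hold two samples of the same object, so it never estimates where that object is headed:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Track:
    """Hypothetical stand-in for one tracked object in a perception stack."""
    label: str
    history: List[Tuple[float, float]] = field(default_factory=list)  # (time, position)

    def observe(self, t: float, pos: float, label: str) -> None:
        if label != self.label:
            # The behavior the NTSB describes: a new classification
            # restarts path prediction, discarding everything known so far.
            self.label = label
            self.history.clear()
        self.history.append((t, pos))

    def velocity(self) -> Optional[float]:
        # Two samples under the same label are the minimum for a motion estimate.
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

# Herzberg's track flipped labels on nearly every cycle, so a velocity
# estimate, and with it any predicted path, never materialized.
track = Track(label="vehicle")
for t, label in enumerate(["vehicle", "other", "vehicle", "other", "bicycle"]):
    track.observe(float(t), pos=0.5 * t, label=label)
    print(f"t={t}s label={track.label} velocity={track.velocity()}")
```

In this toy version, as in the NTSB's account, every reclassification resets the prediction, so the system still has no idea where the object is going when the time to react runs out.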

That triggered what Uber called “action suppression,” in which the system held off braking for one second while it verified “the nature of the detected hazard”—a second during which the safety operator, Uber’s most important and last line of defense, could have taken control of the car and hit the brakes herself. But Vasquez wasn’t looking at the road during that second. So with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
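
The timing described above amounts to a simple gate between detection and braking. This sketch (again hypothetical, with illustrative names and constants taken from the article) shows how a one-second suppression window interacts with a hazard flagged just 1.2 seconds before impact:

```python
SUPPRESSION_WINDOW_S = 1.0          # the one-second hold described in the NTSB documents
TIME_TO_IMPACT_AT_DETECTION_S = 1.2  # when the system flagged the unavoidable collision

def respond(seconds_since_detection: float) -> str:
    """Hypothetical model of 'action suppression': braking is withheld
    while the system verifies the hazard, then the operator is alerted."""
    if seconds_since_detection < SUPPRESSION_WINDOW_S:
        return "suppress braking; verify detected hazard"
    return "sound audio alarm for the safety operator"

for elapsed in (0.0, 0.5, 1.0):
    remaining = TIME_TO_IMPACT_AT_DETECTION_S - elapsed
    print(f"{remaining:.1f}s to impact: {respond(elapsed)}")
```

By the time the gate opens in this model, only 0.2 seconds remain, far less than a human needs to perceive an alarm, assess the road, and hit the brakes.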
