A typical driving test covers basic skills: Can you parallel park? Do you merge safely? Do you know to yield to pedestrians?
No such government exam is required for cars driven by a computer. The idea has been dismissed by federal officials who oppose regulation and industry leaders who say they need freedom from rules to innovate.
But a new study by the Rand Corp., funded by Uber’s autonomous vehicle division and released Thursday, tries to map out what independent tests of driverless safety might look like and how they might be implemented.
One key element, the authors say, would be trying to define and gauge “roadmanship,” a 21st-century riff on good citizenship applied to the driving behavior of robotic cars.
Among the things that might be measured: How much space does the driverless car leave between itself and cars to the front and sides? How much more cautious does it become when its sensors are obscured or sightlines are bad? How often does it jerk out of the way of another car or slam on the brakes, either because of its own shortcomings or the poor driving of others nearby?
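The study stops at naming candidate measures, but a rough sketch shows how such indicators might be computed from driving logs. The Python sketch below is purely illustrative: the per-frame telemetry format, field names and the 3 m/s² hard-braking threshold are assumptions for demonstration, not figures from the Rand report or from Uber.

```python
# Illustrative sketch only: two candidate "roadmanship" indicators computed
# from a hypothetical telemetry log. Field names and thresholds are assumed
# for demonstration; they do not come from the Rand report.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    t: float          # timestamp in seconds
    speed: float      # vehicle speed, m/s
    accel: float      # longitudinal acceleration, m/s^2 (negative = braking)
    lead_gap: float   # distance to the vehicle ahead in meters (inf if none)


def mean_time_gap(frames: List[Frame]) -> float:
    """Average time gap (seconds) kept behind a lead vehicle while moving."""
    gaps = [f.lead_gap / f.speed
            for f in frames
            if f.speed > 1.0 and f.lead_gap != float("inf")]
    return sum(gaps) / len(gaps) if gaps else float("inf")


def hard_brake_count(frames: List[Frame], threshold: float = -3.0) -> int:
    """Count frames with braking harder than an assumed 3 m/s^2 threshold."""
    return sum(1 for f in frames if f.accel < threshold)
```

In practice such counts would have to be normalized by miles or hours driven and broken out by conditions such as weather, visibility and traffic, which is part of what makes defining “roadmanship” harder than it sounds.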
“We are showing the art of the possible,” said Marjory Blumenthal, a senior policy analyst at Rand. “There are many instances in which it is possible to have a safety-relevant measure. What makes sense will vary with the circumstances. But the world offers a lot more” than what is done today.
The study is meant to “motivate conversation within industry and within government, and between industry and government, to see if people will move toward a higher level,” said Blumenthal, who headed a science and technology council that advised the Obama administration.
The research was launched before a driverless Uber — a retrofitted Volvo SUV — struck and killed a pedestrian in Tempe, Ariz., in March. The crash, which remains under investigation, heightened public concerns about the trustworthiness of the technology and the speed with which some firms are pushing its deployment on public roads.
Noah Zych, Uber’s head of system safety, said the Arizona death led the company “to deeply reflect upon what had gotten us to that point and what improvements we could make to our development approach.”
Zych said company employees feel a “responsibility and obligation to try to share our lessons learned from that as broadly as we can . . . so that we can help move everybody forward and hopefully prevent those kinds of incidents from happening, not just for ourselves in the future, but for everyone else.”
He said he was pleased that other top companies were willing to discuss safety measurement issues with Rand, including in a workshop where a number of driverless developers exchanged ideas.
Rand cited the participation, either in the workshop or through individual comments, of Cruise Automation, Waymo, Intel, Tesla, the Toyota Research Institute, Baidu and others.
“I do think there’s an appetite for that kind of collaboration, recognizing that incidents that occur for any one company affect all companies and affect the industry as a whole,” Zych said.
Competitive considerations have hampered earlier federal calls for disclosures and data sharing among driverless companies to promote safety, and it’s unclear how much that might change. The authors redoubled calls for such sharing between companies and with the government.
The Rand study notes that roadways have become “a living laboratory” and human drivers, passengers and pedestrians have been made subjects in “a study that they did not consent to take part in and cannot opt out of.”
Wrestling with this and other realities will shape public acceptance of, or hostility toward, the technology, the authors argue.
Technology developers believe “some exposure to risk and uncertainty about this risk must be accepted” in the near term to reap safety benefits in the long term, the authors note. Those in the safety advocacy community, meanwhile, champion “clearer communication about risk and more-conservative efforts to at least minimize risk and preferably eliminate it,” the report notes.
Bridging that gulf will require finding common agreement on how to measure safety across the industry, the authors said.
They recommend that companies find more ways to share information about safety during various stages of their work. Such a setup would allow for the comparison of one company’s simulations or driving scenarios with those from other companies, perhaps under the auspices of “a third party or department of motor vehicles.”
They also recommend new, formal descriptions as part of one of the most central and wonky-sounding concepts in driverless development: the operational design domain. That’s the idea that driverless cars are designed to work only in certain circumstances, such as during the day in good weather on downtown streets in a particular city. There should be a clear “taxonomy” for describing “where, when and under what circumstances” an autonomous vehicle can operate safely, so there’s no ambiguity among officials or the public, they said.
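One way to picture such a taxonomy is as a structured, machine-checkable description of where a vehicle is allowed to operate. The sketch below is a hypothetical illustration; the categories and field names are assumptions, not the formal descriptions the authors propose.

```python
# Hypothetical illustration of a machine-readable operational design domain.
# The categories and field names are assumptions, not the Rand taxonomy.
from dataclasses import dataclass
from typing import Set


@dataclass
class OperationalDesignDomain:
    regions: Set[str]        # e.g., {"downtown_tempe"}
    road_types: Set[str]     # e.g., {"surface_street"}
    weather: Set[str]        # e.g., {"clear", "light_rain"}
    daylight_only: bool = True
    max_speed_limit_mph: int = 35

    def permits(self, region: str, road_type: str, weather: str,
                is_daytime: bool, speed_limit_mph: int) -> bool:
        """True if the current conditions fall inside the declared domain."""
        return (region in self.regions
                and road_type in self.road_types
                and weather in self.weather
                and (is_daytime or not self.daylight_only)
                and speed_limit_mph <= self.max_speed_limit_mph)
```

An unambiguous description along these lines is what would let an official, or a rider, answer “can this car safely be here right now?” without guesswork.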
Federal policy, updated by transportation officials last week, continues to rely on companies submitting voluntary safety assessments to the Transportation Department describing why they believe their vehicles are safe and where they are meant to go. So far, just four of the dozens of companies active in the field have made the voluntary safety reports public.
California has required companies to regularly report the number of “disengagements” by autonomous cars rolling on its roads — that is, times when a human must take back control from the self-driving system. That measure reflects the rare case where driverless developers are forced to reveal any data at all. But the limitations of that tool have also been emphasized, including by the Rand authors, who note it’s not always clear what the numbers actually tell you. For example, more rigorous testing could lead to more disengagements — but also safer cars.
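That ambiguity is easy to see with a toy calculation, using invented numbers rather than anything from California’s filings:

```python
# Invented numbers: disengagement counts mean little without exposure and
# context. A fleet that tests harder scenarios may log a worse rate while
# building a safer system.
fleets = {
    "A (varied, aggressive testing)": {"disengagements": 120, "miles": 400_000},
    "B (limited, easy routes)":       {"disengagements": 15,  "miles": 20_000},
}

for name, d in fleets.items():
    rate = d["disengagements"] / d["miles"] * 1_000
    print(f"Fleet {name}: {rate:.2f} disengagements per 1,000 miles")
# Fleet A comes out at 0.30 and Fleet B at 0.75, yet the numbers alone say
# nothing about how demanding the miles were.
```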
Uber said it will soon make its own safety assessment public, which will give a deeper view into its thinking about safety. Zych said Uber can’t discuss specific findings on the Tempe fatality because it is party to the ongoing National Transportation Safety Board investigation.
Zych said the push for new safety measurements is also an ongoing priority. In recent months, for example, Uber engineers have worked to recast pass-fail tests on the company’s test tracks into repeatable, more nuanced measurements showing whether a car began decelerating sooner or changed lanes earlier in particular scenarios.
Engineers are also continuing to explore ways to measure “roadmanship.”
“They’re the sort of leading indicators and warning signs that a system is performing in a trustworthy way, or isn’t performing in a trustworthy way, before you get to the stage of seeing crashes” or other problems, Zych said.
“These are the kinds of things that generally feel like good behavior when you’re in a car with someone else — if they’re driving smoothly, not reacting last-minute to things . . . If they’re exhibiting the characteristics of defensive driving, where they’re slowing down when there is a potential conflict,” Zych said. “Turning what that feels like into measures that can apply to human-driven vehicles and self-driving vehicles is a really exciting area.”