Waymo Autonomous Car Update: Seeking Perfection In An Imperfect World

August 30th, 2018


Google’s quest to develop autonomous cars traces back nearly 15 years to the first DARPA Grand Challenge in 2004, which drew in many of the engineers who would later lead its self-driving project. DARPA itself has been involved in autonomous vehicle research since 1966. Two years ago, the Google self-driving division morphed into Waymo, with expectations that it would become a leading provider of autonomous ride hailing services, a field UBS estimates may be worth nearly $3 trillion a year worldwide by 2030. Based on that projection, UBS values Waymo at a staggering $175 billion today.


For several years, Google experimented with a cute little two-seater bubble car with no steering wheel and no pedals. It had just a touchscreen for typing in a destination and a big green GO button. But Google co-founder Larry Page decreed the company’s first autonomous cars needed conventional controls, which led to a link-up with Chrysler. Today, Waymo operates a fleet of Level 4 autonomous Pacifica Hybrid minivans based in Chandler, Arizona, a suburb of Phoenix.

Waymo Early Rider Program

That fleet is now providing free rides to a select group of people — mostly students and Valley Metro employees — who have signed up for its Early Rider program. If you are expecting to hear from these people about their experiences, don’t hold your breath. All have signed non-disclosure agreements that prevent them from talking to the press.

Almost all Early Rider trips are made with a human “chauffeur” riding along to take control of the vehicle if necessary, and there are conflicting reports about how often those safety drivers have to intervene. On some routes where traffic is light and driving challenges are few, trips are completed with no human driver on board.

It’s All About Safety

The whole reason behind the autonomous car is to make driving safer. Computers never get tired or bored. They notice everything going on around them. They don’t drink, forget to take their meds, use recreational drugs, or have road rage. They are perfect drivers, and that drives some of the people in Chandler crazy.

The Waymo vans wait a full three seconds at every stop sign even if there is no cross traffic. They politely wait for an opening in oncoming traffic before turning left; sometimes minutes go by before they complete the turn, causing steam to come out of the ears of the drivers behind. Arizona uses a system of red and green lights on highway entrance ramps designed to help drivers merge into the flow of traffic. The Waymo vans don’t really understand how those ramp meters work, so they wait. And wait. And wait some more.

One person familiar with the Waymo program in Phoenix tells The Information, “It’s still a student driver, but it’s the best student driver out there.” The vans are programmed to strictly adhere to every rule of the road. Another person reports there have been internal discussions about loosening up the parameters to allow the cars to mimic human behavior but those suggestions have been nixed. The consensus has become, “It doesn’t matter how a human would do it. We would need to do it like a perfect driver.”

How Much Better Is Acceptable?

The issue is how much better than a human driver an autonomous car has to be before society will accept it. Some argue a 50% reduction in traffic fatalities would be a huge boon to humanity, but others feel the bar has to be set much higher than that. Amnon Shashua, CEO of Mobileye, told CNET earlier this year that if there are 40,000 traffic fatalities in the US today, he thinks the public would tolerate only 40 from self-driving cars, a reduction of 99.9%.
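Those thresholds are easy to sanity-check with a few lines of arithmetic. In this small snippet, the helper name is hypothetical and the 40,000 baseline is the figure cited above; it maps each tolerated fatality count to the reduction it implies:

```python
# Hypothetical helper mapping a tolerated annual fatality count to the
# percentage reduction it implies against today's ~40,000 US baseline.
BASELINE = 40_000

def reduction_pct(tolerated: int, baseline: int = BASELINE) -> float:
    """Percent reduction in fatalities relative to the baseline."""
    return 100.0 * (1 - tolerated / baseline)

for tolerated in (40, 400, 4000):
    print(f"{tolerated:>5} tolerated deaths -> {reduction_pct(tolerated):.1f}% reduction")
# 40 -> 99.9%, 400 -> 99.0%, 4000 -> 90.0%
```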

Even so, his company has developed its own algorithms for driving in Jerusalem, where traffic etiquette tolerates aggressive driving. Its system, called Responsibility Sensitive Safety (RSS), permits a car to shoulder its way into a fast-moving stream of traffic, forcing a gap between cars if necessary. The folks at Waymo would probably be aghast at such boorish behavior.
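At the heart of Responsibility Sensitive Safety is a formula for the minimum safe longitudinal gap between two cars, published in Mobileye’s RSS paper. The sketch below is a simplified reading of that formula; the function name and the example parameter values are illustrative assumptions, not Mobileye’s actual settings:

```python
# Simplified sketch of the RSS minimum safe longitudinal distance.
# All speeds in m/s, accelerations in m/s^2, reaction time rho in seconds.
def rss_safe_distance(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
    """Minimum safe gap (meters) between a rear car at v_rear and a front
    car at v_front: assume the rear car accelerates at its worst case for
    the reaction time rho, then brakes gently (b_min_brake), while the
    front car brakes as hard as possible (b_max_brake)."""
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho**2
         + v_after_rho**2 / (2 * b_min_brake)
         - v_front**2 / (2 * b_max_brake))
    return max(0.0, d)  # no extra gap needed if the term goes negative

# Illustrative example: rear car at 20 m/s closing on a car at 15 m/s.
gap = rss_safe_distance(20, 15, rho=0.5, a_max_accel=3,
                        b_min_brake=4, b_max_brake=8)
```

The `max(0, …)` clamp captures the case where the front car cannot out-brake the rear car’s worst-case scenario badly enough to matter, so no extra gap is required.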

Perfect Robots Vs. Imperfect Humans

The problem with perfect drivers is that they do not interact well with imperfect ones. Just as bicyclists clash with motorists and pedestrians clash with bicyclists, humans and machines sharing the road will inevitably grate on one another. One proposed remedy is dedicated lanes that keep autonomous vehicles and human drivers at a safe distance from each other, but that solution would be prohibitively expensive and politically unpopular.

A dozen people who work in the area near the Waymo transportation hub in Chandler all tell The Information they hate being around the Waymo vans because they often take what seems like minutes to make driving decisions human drivers make in seconds. And if they get confused, they stop dead in the middle of the road, forcing the drivers behind them to stop unexpectedly. Following a Waymo van can be as joyless as being stuck behind a school bus.

Some drivers admit they resort to illegal maneuvers to get around a Waymo vehicle. Others say they shout, curse, or give the vans the international salute as they blast by. There is no telling how the computer interprets all those forms of undignified behavior.

Taking It To The Next Level

Level 2 autonomy is easy. Level 4 autonomy is hard. It’s like the difference between checkers and three-dimensional chess. A Waymo spokesman says its vehicles are “continually learning, and we’ve developed robust testing and validation processes that will allow us to safely expand our vehicle’s driverless capabilities over time.” How much time? Years, it seems, is the best available answer.

Programs like Early Rider in Chandler help software engineers refine the algorithms that control the cars, and everyone seems to agree the vans are better today than they were last fall when they were first introduced to the area. But one big problem remains: predicting human behavior, especially that of bicycle riders and groups of pedestrians.

Gill Pratt, head of the Toyota Research Institute, told Amir Efrati of The Information recently, “The trouble with self-driving cars is people.” Efrati elaborates: “Today’s AI doesn’t know that a mother holding a child likely won’t cross the street illegally but two teenagers might.”

Slow And Steady

The bottom line is that self-driving technology is like battery research — progress is slow and often painful. One day, autonomous cars will zip around and no one will even notice them. But that day is probably a little further in the distance than some would like to believe.

It takes a teenager years to learn how to handle every situation that comes along on public roads. There is no reason to expect computers and robots will take any less time. In June, John Krafcik, CEO of Waymo, told the National Governors Association the time period to make automated vehicles widespread “will be longer than you think.”

How many US fatalities will society accept from self-driving cars? 40, as Amnon Shashua suggests? 400? 4,000? Any number would be a huge improvement over today’s experience, but just one, in Tempe, Arizona, last spring, nearly shut down autonomous driving research in America. The odds are Shashua is closest to the correct answer. Getting to that level of self-driving competence will take years of testing and millions if not billions of miles of driving experience. Hope for the future, but don’t hold your breath.






About the Author

Steve writes about the interface between technology and sustainability from his home in Rhode Island and anywhere else the Singularity may take him. His muse is Charles Kuralt — “I see the road ahead is turning. I wonder what’s around the bend?” You can follow him on Google+ and on Twitter.


