In 2015, Chris Urmson, then the lead of Google’s self-driving car project, said one of his goals in developing a fully driverless vehicle was to make sure that his 11-year-old son would never need a driver’s license.
The subtext was that in five years, when Urmson’s son turned 16, self-driving cars would be so ubiquitous, and the technology would be so superior to human driving, that his teenage son would have no need nor desire to learn to drive himself.
Well, it’s 2024, and Urmson’s son is now 20 years old. Any bets on whether he got that driver’s license?
One of the hallmarks of the race to develop autonomous vehicles has been wildly optimistic predictions about when they’ll be ready for daily use. The landscape is positively littered with missed deadlines.
In 2015, Baidu senior VP Wang Jing said the tech company would be selling self-driving cars to Chinese customers by 2020. In 2016, then-Lyft president John Zimmer claimed that “a majority” of the trips taking place on its ride-sharing network would be in fully driverless cars “within five years.” That same year, Business Insider said that 10 million autonomous vehicles would be on the road by 2020.
GM said it would mass produce driverless cars without steering wheels or pedals by 2019. Ford, slightly more conservative, predicted it would do the same in 2021. And in a perfect encapsulation of mid-2010s autonomy hype, Intel in 2017 predicted a $7 trillion industry — more than double what the global auto industry does now — around autonomy by 2050.
Of course, no one has been more bullish than Tesla CEO Elon Musk, who has turned making wrong predictions about the readiness of autonomous vehicles into an art form. “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware,” Musk said in 2019. He also claimed that Tesla’s Full Self-Driving (FSD) feature would be so reliable the driver could “go to sleep.” Teslas with the company’s FSD software are not autonomous, and drivers would be well advised not to sleep in their cars.
Sure, there are some self-driving cars on the roads today. They’re in San Francisco, Phoenix, Los Angeles, Hamburg, and Beijing, among other cities. They’re operated by some of the biggest, most well-capitalized companies in the world. You can even ride in some of them.
But they’re stuck. Not stuck in the sense that a Tesla Cybertruck gets stuck in less than an inch of snow. But confined within geofenced service areas, held back by their own technological shortcomings, opposed by labor unions and supporters of more reliable modes of transportation, and restricted from driving on certain roads or in certain weather conditions.
“The autonomous vehicle industry — particularly the companies developing and testing robotaxis — has gotten away for too long with selling a vision of the future that they should know perfectly well is never going to come to pass,” Sam Anthony, co-founder and CTO of Perceptive Automata, a now-defunct AV company, wrote in his newsletter in 2022.
We assumed the robots would be able to drive as freely as we do. After all, we built a world in which we humans can — and do — drive anywhere, all the time. So why did we get it so wrong?
Before we examine why the industry collectively whiffed the rollout of driverless cars, it’s instructive to look at why these predictions were made in the first place. Why set these goal posts if they never really mattered?
Of course the answer is money. By promising that driverless cars were “just around the corner,” and on the cusp of taking over our roads, companies were able to rake in hundreds of billions of dollars to fund their experiments.
The amount of money flowing into the autonomous vehicle space also had the knock-on effect of convincing regulators to take a lax approach to self-driving cars. AV boosters warned that too many rules would “stifle innovation” and jeopardize future gains, whether in safety or in job creation.
And it turns out that regulators were very receptive to those arguments. The federal government, whether under Obama, Trump, or Biden, has done very little to stand in the way of companies testing their tech on public roads. A bill in Congress that would accelerate the rollout of cars without steering wheels or pedals has stalled over disagreements about liability, but you wouldn’t know it looking at the industry’s fundraising hauls.
Some states, like California, have done their best to spin up some sort of regulatory playbook. But most were eager to attract companies under the belief that driverless cars were the future. And who wants to stand in the way of the future?
For nearly a decade, AV operators were able to raise money almost without restriction. They did it through normal fundraising channels, or by tying themselves to big tech and car companies. Cruise Automation was acquired by General Motors. Ford invested $1 billion in Argo AI. Google, always slightly ahead of the rest, spun out its self-driving car project as Waymo. Amazon bought Zoox. Hyundai allied itself with Motional. Some have estimated over $160 billion has flowed into the industry over the past dozen or so years.
And after the pandemic, the companies that weren’t able to cozy up to big automakers or tech giants found a new way to raise cash quickly: SPACs. Traditional IPOs were slow, and special purpose acquisition companies were quick, so dozens of mobility-focused startups went public by merging with these so-called “blank check” companies in order to access more money faster.
And despite a number of setbacks, like crashes and lawsuits and investigations, the cash kept coming in. Funding for AV companies didn’t peak until 2021, when the industry pulled in $12.5 billion, led by a massive $2.75 billion raise for GM’s Cruise.
The predictions about the imminent arrival of safe, reliable self-driving technology helped speed the flow of money. And once those predictions failed to materialize, the money started to dry up.
Why did the predictions fail? The technology, while incredibly effective at getting us most of the way there, stumbled as it got closer to the finish line.
In the AV world, this is called the “long tail of 9s.” It’s the idea that you can get a vehicle that is 99.9 percent as good as a human driver, but you never actually get to 100 percent. And that’s because of edge cases, these unpredictable events that flummox even human drivers.
When training an AI program on driving, you can predict a lot of what to expect, but you can’t predict everything. And when those edge cases eventually emerge, the car can make mistakes — sometimes with tragic consequences.
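To see why those last few 9s matter so much, consider a back-of-the-envelope sketch in Python. The per-mile reliability figures and the 100,000-mile horizon below are assumptions invented purely for illustration; they come from no company’s actual data.

```python
# Back-of-the-envelope illustration of the "long tail of 9s."
# The reliability figures are illustrative assumptions, not
# measurements from any real autonomous vehicle program.

def prob_at_least_one_failure(per_mile_reliability: float, miles: int) -> float:
    """Probability of at least one failure across `miles` independent miles."""
    return 1 - per_mile_reliability ** miles

# How likely is at least one failure over 100,000 miles of driving?
for label, reliability in [("three 9s", 0.999),
                           ("five 9s", 0.99999),
                           ("seven 9s", 0.9999999)]:
    p = prob_at_least_one_failure(reliability, 100_000)
    print(f"{label} ({reliability}): {p:.4f}")
```

Under these toy numbers, “three 9s” of per-mile reliability makes at least one failure over 100,000 miles a near certainty, while “seven 9s” brings it down to about a 1 percent chance. Each additional 9 is an order-of-magnitude improvement, and the edge cases live in those final 9s.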
Take the example of Cruise. In October of last year, a woman was hit by a human driver while crossing the street in San Francisco. The impact sent her flying into the path of a driverless Cruise vehicle, which also struck her before braking. The Cruise vehicle then attempted to pull over to the side of the road, not realizing the woman was still trapped beneath it, injuring her further in the process.
One of the first things Cruise did in the wake of the incident was to recall all 950 vehicles it had on the road in the US. This took the form of an over-the-air software update to the collision detection subsystem so the vehicle remains stationary during certain crash incidents, rather than pulling over to the side of the road. Cruise encountered an edge case, and it quickly issued a correction for it.
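Cruise’s public description of the fix suggests a small change to a post-crash decision rule. The sketch below is hypothetical, with invented names, and illustrates only the kind of default that was flipped; it is not Cruise’s actual software.

```python
# Hypothetical sketch of the recall's behavior change, based only on
# Cruise's public description of the update. All names are invented.

def post_collision_response(crash_detected: bool, post_recall_firmware: bool) -> str:
    """Pick what the vehicle does immediately after a detected collision."""
    if not crash_detected:
        return "continue_driving"
    if post_recall_firmware:
        # Updated behavior: remain stationary in certain crash types,
        # since moving could injure a person trapped beneath the vehicle.
        return "remain_stationary"
    # Original behavior: pull over to clear the roadway, which is what
    # dragged the pedestrian in the San Francisco incident.
    return "pull_over"
```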
But how many more edge cases are lurking in the shadows? And how many more people will be injured — or even killed — before these cars are seen as more reliable?
Waymo has been at the forefront of trying to convince the public and regulators that its vehicles are as safe as, if not safer than, human drivers. It has released a number of studies and statistical analyses in recent years that it says prove its vehicles get in fewer crashes, cause less damage, and improve overall safety on the roads.
But for every Waymo, there’s an Elon Musk, whose misleading predictions about the imminent readiness of self-driving cars muddy the waters for everyone else who knows that the reality is much further away than previously thought. Waymo also assumes legal liability for crashes involving its vehicles — something Tesla has so far refused to do.
But Waymo isn’t driving the public’s perception of self-driving cars; Tesla is. Broken promises and failed predictions are fueling the public’s growing skepticism of self-driving cars, and as the years plod by, people grow more and more turned off by the idea of relinquishing control of their vehicles to a robot.
Without passengers, there’s no business. But without safe, reliable technology, there’s no future for autonomous vehicles.