The Traveling Wilburys were a short-lived phenomenon. From 1988 to 1991, Bob Dylan, George Harrison, Jeff Lynne, Roy Orbison, and Tom Petty—each a star in their own right and with a robust catalog to their name—combined their talents and experiences to produce two albums. That’s 21 songs in 112 delightful minutes of music, a testament to the power of collaboration.
Just about a decade into the race to develop self-driving cars, this young industry has its own supergroup: Aurora Innovation, formed by three of the biggest names in the field and veterans of its highest-profile efforts. At the end of 2016, Chris Urmson, Drew Bagnell, and Sterling Anderson created the startup to deliver fully self-driving technology—no human involvement—starting with operations in geofenced areas (somewhere) and slowly expanding as the cars prove themselves.
The trio’s experience runs deep. After helping lead Carnegie Mellon’s efforts in Darpa’s Grand Challenges, Urmson became a founding member of Google’s self-driving team, which he ran until 2016. Anderson worked on the tech at MIT before bringing his talents to bear on Tesla’s Autopilot system. Bagnell, another CMU alum, is a machine learning expert who helped build Uber’s autonomy effort.
They entered a self-driving industry big on promises. Waymo (which started as Google’s project) says it will deploy its cars in a commercial service by the end of this year. General Motors is targeting 2019. Zoox, a secretive startup that has raised $800 million, is looking at 2020. Ford has promised large fleets of autonomous vehicles come 2021 (though it hasn’t brought that up since it swapped out CEOs last year).
You might expect Aurora’s founders, then, to throw their cumulative experience into an ambitious effort to outrace these more established programs to market, one of those “together we can rule the galaxy”-type deals. Instead, the ethos at Aurora is one of humility. Urmson, Bagnell, and Anderson haven’t put any hard dates on when their tech might be ready. They don’t pitch a grandiose vision of a remade world of mobility. They seem to seek a role as a Tier 1 supplier, selling self-driving tech to automakers the way others sell airbags.
That’s easier to understand when you take a closer look at their résumés. Waymo has covered nine million miles but reportedly still has trouble with left turns into traffic. Tesla’s system has attracted the wary eye of the National Transportation Safety Board. Uber’s car killed a woman in March. After years of hype, a sense of just how difficult it is to make self-driving technology really work seems to have set in.
“I think there’s a lot of people who underappreciate the subtlety and complexity of the problem,” Urmson says. And while he’s never been the boastful type, it’s quite a change from 2015, when he said his goal was to make sure his 11-year-old son would never need a driver’s license—an objective he hasn’t brought up lately.
Aurora hasn’t made much noise in general since starting work in January 2017, apart from announcing partnerships with Volkswagen, Hyundai, and Chinese startup Byton, and raising an impressive but hardly stunning $90 million in funding. But now that it’s looking to build up its team (currently about 160-strong), it has published a blog post laying out its approach to robo-driving.
WIRED sat down with Urmson, Aurora’s CEO, to go over its key points—including the role of machine learning, measuring progress, and proving safety—and how he and his cofounders are handling their second lap around this track.
Teaching the Machine
In developing this technology, it’s tempting to fall into what Urmson calls “ladder building.” For example, if you’re working on bringing the car to a stop, you want to keep making it smoother and smoother. “You can imagine people spending years making slight changes to the algorithm, tuning the parameters,” Urmson says, making clear he’s speaking from experience. “You feel like you’re making progress. It’s like Wile E. Coyote—your legs are moving real fast, but you’re not actually getting anywhere.”
With the chance to start fresh, Aurora is applying machine learning to this problem, which means finding the right way to teach a computer what a good stop looks like. They call this “fueling the rocket.” The results are harder to see than all those new rungs, but once you’ve finished, you can go a lot higher, a lot faster. The flip side is knowing where machine learning isn’t especially helpful, an advantage Urmson credits to his team’s experience: the ability to say, “We’ve been down this road. That looks really appealing, but it’s not actually gonna get us there. Let’s do this.”
Machine learning is the right tool for teaching a robot to discriminate between an NBA player and an inflatable dancing man. But if you want to track how that person’s moving, you can fall back on advanced but well-understood math. “That’s a very well established field,” Urmson says, thanks to people developing things like ballistic missiles and anti-aircraft weaponry. “If you can come up with a good measure of the error, we can carry that through the math, and get you a really nice, precise output.”
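The “carry the error through the math” idea Urmson is describing is classic state estimation, the territory of Kalman filtering. Here’s a minimal, purely illustrative Python sketch of a one-dimensional Kalman-style measurement update (no motion model): the sensor readings and variances are invented for the example, and none of this is Aurora’s code.

```python
# A stripped-down Kalman-style measurement update: fuse a prior estimate with
# a noisy reading, and carry the uncertainty through so the output comes with
# a precise error estimate. Illustrative only; the numbers are made up.

def kalman_update(est, est_var, measurement, meas_var):
    """Blend prior estimate and new measurement, weighted by their variances."""
    gain = est_var / (est_var + meas_var)       # trust the less uncertain source more
    new_est = est + gain * (measurement - est)  # corrected estimate
    new_var = (1 - gain) * est_var              # uncertainty shrinks after each fusion
    return new_est, new_var

# Track a pedestrian's distance (meters) from noisy range readings.
est, est_var = 0.0, 100.0              # vague initial guess, large uncertainty
for z in (10.2, 10.4, 10.1, 10.3):     # simulated readings, sensor variance ~0.25 m^2
    est, est_var = kalman_update(est, est_var, z, meas_var=0.25)
    print(f"estimate: {est:5.2f} m  variance: {est_var:.3f}")
```

Each pass tightens the variance, which is exactly the “really nice, precise output” Urmson is after.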
Measuring Progress
Today, Aurora’s cars are driving around Palo Alto and Pittsburgh (the company has offices in each city, as well as one in San Francisco). In the next few months, Urmson says they should be nearly “feature complete”—capable of doing everything a human driver can, if with less skill. After that, he says, it’s a matter of improving each ability.
Urmson’s no fan of the two standard ways of measuring progress: how many miles the cars have driven, and how often their human safety drivers have to take control. “How good are we at seeing traffic lights, or left turn arrows? That’s what we’re looking at for measurement,” he says. “We care about how close we are on each of those features.”
Proving Safety
One of many looming questions in this space is how to prove to wary regulators that self-driving cars are safe enough to deploy en masse. There’s no real mechanism for doing this—and the particulars will change from city to state to country—but Aurora has a plan in mind.
Urmson breaks the problem into two parts. The first is what happens when something breaks. You enumerate potential failure cases—sensors that can fail, computers that may crash—then lay out a fix, or response, for each: the car will pull over, it will activate backup systems, it will tell an adult, and so on.
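In practice that first part amounts to a lookup: every enumerated failure gets a planned response. A toy Python sketch of the idea follows; the failure modes and responses are illustrative, loosely echoing the examples above, not Aurora’s actual fault taxonomy.

```python
# Toy sketch of "enumerate failures, map each to a response."
# Failure modes and responses here are illustrative, not Aurora's.
from enum import Enum, auto

class Failure(Enum):
    SENSOR_DROPOUT = auto()
    COMPUTE_CRASH = auto()
    LOCALIZATION_LOST = auto()

RESPONSES = {
    Failure.SENSOR_DROPOUT: "pull over safely",
    Failure.COMPUTE_CRASH: "switch to backup system",
    Failure.LOCALIZATION_LOST: "alert a remote operator",  # i.e., tell an adult
}

def handle(failure: Failure) -> str:
    # Every enumerated failure should have a planned response; anything that
    # falls through is itself a finding to add to the enumeration.
    return RESPONSES.get(failure, "stop and request help")

print(handle(Failure.SENSOR_DROPOUT))
```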
The second bit is ensuring that when everything’s working, it’s working well enough. “That starts to look like a statistical argument,” Urmson says. Something like, We’ve driven by a million pedestrians, and we saw a million of them, or We’ve nailed 2,347,861 left-hand turns. Combined, these form an estimate of how often the car will fail. “Then we package that up into a document, and we have a conversation with a regulator, and we say, ‘This is why we believe we’re safe. What do you think?’”
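To make that statistical argument concrete: if the car has handled n events with zero observed failures, the standard “rule of three” puts a 95 percent upper bound on the true failure rate at roughly 3/n. A short Python sketch, reusing the article’s illustrative counts (they aren’t real data):

```python
# Back-of-the-envelope version of the statistical argument Urmson describes.
# With n events and zero observed failures, solve (1 - p)^n = 0.05 for p to get
# the exact 95% upper bound on the failure rate (~3/n for large n).

def failure_rate_upper_bound(events_without_failure: int, confidence: float = 0.95) -> float:
    """Upper bound on the per-event failure rate when zero failures are seen."""
    n = events_without_failure
    return 1 - (1 - confidence) ** (1 / n)

for n in (1_000_000, 2_347_861):   # pedestrians seen, left turns nailed (illustrative)
    print(f"{n:>9,} events, 0 failures -> rate < {failure_rate_upper_bound(n):.2e} per event")
```

That kind of bound, per feature, is the sort of number you could put in front of a regulator.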
That sort of technical and political savvy is key, but it may not be what sets Aurora apart from the rest of the field, at least not solely. It’s that sense of humility, the appreciation for just how hard the problem is to crack. So for now, Aurora is focused on completing those features, then perfecting them, knowing from experience that it will take a long time and a lot of hard work. Or, as another supergroup put it:
Well, it’s all right // We’re going to the end of the line.