@Ford: Protecting Autonomous Vehicles Against Cyber Attacks with Summer Craze Fowler

By: Alex Roy via the No Parking Podcast

Understanding the landscape of people, processes, information, technology, facilities, and external dependencies is fundamental to any security program, according to Argo AI CIO Summer Craze Fowler. She says that the complete picture allows resources to be prioritized in a way that reflects company leadership's goals and the organization's overall appetite and tolerance for risk. (Argo AI)

Is there any word more overhyped than hacking? I’ve been hearing apocalyptic hacking predictions ever since the “WarGames” movie came out in 1983. That’s the one where the character played by Matthew Broderick accidentally backdoors into WOPR, the U.S. military’s mainframe, and almost triggers a nuclear war. The next thing I knew, my school sent a letter to all the parents warning them to keep an eye on their kids’ activities late at night. In 1983. I was 12. I was one of only two kids whose parents had a computer at home. The other kid had a modem.

Looking back, the most interesting thing about the film is that WOPR had a backdoor. In other words, there was no hack, if your definition of “hack” is “break into” a computer. Broderick backdoored in because WOPR’s programmer explicitly made it possible. That’s what a backdoor is. I don’t remember why he left one, but I do remember thinking that the problem wasn’t WOPR, and it wasn’t even the programmer. It was the folks who hired the programmer, didn’t keep an eye on him, and didn’t know the vulnerability existed.

The same has been true of many big or complicated things throughout history. From buildings to bridges, planes, trains, ships, computers, smartphones, safes, and locks — things are only as good, safe, or reliable as they’re designed to be. That’s why some designs stand the test of time, and others don’t.

This week on No Parking, Bryan and I talked to Summer Craze Fowler, Chief Information Security Officer of Argo AI and former Technical Director of Cybersecurity Risk & Resilience at Carnegie Mellon University’s Software Engineering Institute, to discuss hacking, cybersecurity, and designing for safety. I should also mention that she did a Fellowship in Advanced Cyber Studies at the Center for Strategic and International Studies, which is among the many reasons she knows so much about national security and the military — all I wanted to talk about on what was then the first No Parking episode ever recorded.

Of course, the big topic was cybersecurity and self-driving cars, which seems to concern a lot more people than basic things like choosing a password stronger than “123456” or not opening emails with malware attached.

“Most of the really big hacks that you’re hearing about,” said Fowler, “where 250 million people have their information stolen, it starts with something pretty simple.”

But I wanted to know the answer to what everyone has been asking me since I embraced autonomous vehicles not just as inevitable, but a good thing. What is the real-world risk of a self-driving car being hacked, and could it endanger my safety?

“The bottom line,” said Fowler, “is you have to think about, no matter what could occur, [if an actor tries] to take over that car or stop that car, it needs to handle itself in a graceful manner. Right? Even if you think it’s a hack or you think it’s a piece of software that’s gone bad or a hardware failure, you want the people in and around that vehicle to be safe. A graceful degradation of capability where it pulls over and everyone’s safe. That’s No. 1.”

In other words, no system is foolproof, but “hacking” a system doesn’t have to lead to system failure if the machine and what powers it are designed correctly. Cybersecurity is like any other safety-critical system: you never want to have a single point of failure.
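To make the idea concrete, here is a minimal sketch of the “graceful degradation” principle Fowler describes — purely illustrative, not Argo AI’s actual logic. The function names and the redundancy check are assumptions for the example; the point is that the vehicle’s response is the same safe fallback whether the anomaly is an attack, a software bug, or a hardware fault:

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue driving"
    PULL_OVER = "pull over safely"
    STOP = "controlled stop"

def degrade_gracefully(anomaly_detected: bool, backup_system_ok: bool) -> Action:
    """Choose the safest action; no single component is a single point of failure."""
    if not anomaly_detected:
        return Action.CONTINUE
    # An anomaly was detected. Whether it's a hack, bad software, or a
    # hardware failure, the response is the same: reach a safe state.
    if backup_system_ok:
        # A redundant system is still healthy, so use it to exit traffic.
        return Action.PULL_OVER
    # Worst case: all layers are suspect, so come to a controlled stop.
    return Action.STOP
```

Note that the caller never needs to know *why* the anomaly occurred — that is the design property that keeps people in and around the vehicle safe even when the diagnosis is uncertain.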

“Security is like an onion,” said Fowler. “You want to have all those multiple layers to account for it.”

Like and subscribe for more from the No Parking Podcast.
