Whether you run IT for a massive organization or simply own a smartphone, you’re intimately familiar with the unending stream of software updates that constantly need to be installed because of bugs and security vulnerabilities. People make mistakes, so code is inevitably going to contain mistakes—you get it. But a movement to write software in a language called Rust is gaining momentum because the code is goof-proof in an important way. By design, developers can’t accidentally create the most common types of exploitable security vulnerabilities when they’re coding in Rust, a distinction that could make a huge difference in the daily patch parade and ultimately the world’s baseline cybersecurity.
There are fads in programming languages, and new ones come and go, often without lasting impact. Now 12 years old, Rust took time to mature from the side project of a Mozilla researcher into a robust ecosystem. Meanwhile, the predecessor language C, which is still widely used today, turned 50 this year. But because Rust produces more secure code and, crucially, doesn’t worsen performance to do it, the language has been steadily gaining adherents and is now at a turning point. Microsoft, Google, and Amazon Web Services have all been using Rust since 2019, and the three companies formed the nonprofit Rust Foundation with Mozilla and Huawei in 2020 to sustain and grow the language. And after a couple of years of intensive work, the Linux kernel took its first steps last month toward implementing Rust support.
“It’s going viral as a language,” says Dave Kleidermacher, vice president of engineering for Android security and privacy. “We’ve been investing in Rust on Android and across Google, and so many engineers are like, ‘how do I start doing this? This is great.’ And Rust just landed for the first time as an officially recognized and accepted language in Linux, so this is not just Android, it’s any system based on Linux now can start to incorporate Rust components.”
Rust is what’s known as a “memory safe” language because it’s designed to make it impossible for a program to pull unintended data from a computer’s memory accidentally. When programmers use stalwart languages that don’t have this property, including C and C++, they have to carefully check the parameters of what data their program is going to be requesting and how—a task that even the most skilled and experienced developers will occasionally botch. By writing new software in Rust instead, even amateur programmers can be confident that they haven’t introduced any memory safety bugs into their code.
A program’s memory is a shared resource used by all of its features and libraries. Imagine a calendar program written in a language that isn’t memory safe. You open your calendar and request entries for November 2, 2022, and the program fetches all the information from the area of your computer’s memory assigned to store that date’s data. All good. But if the program isn’t designed with the right constraints and you request entries for November 42, 2022, the software, instead of producing an error or other failure, may dutifully return information from a part of memory that’s housing different data, maybe the password you use to protect your calendar or the credit card number you keep on file for premium calendar features. And if you add a birthday party to your calendar on November 42, it may overwrite unrelated data in memory instead of telling you it can’t complete the task. These are known as “out-of-bounds” read and write bugs, and you can see how they could be exploited to give an attacker improper access to data or even expanded system control.
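To make the idea concrete, here is a minimal Rust sketch of that calendar lookup. Nothing in it comes from a real calendar app; the `november` vector and its contents are invented for illustration. It shows how safe Rust handles both the out-of-bounds read and the out-of-bounds write described above: the language either reports that the day doesn’t exist or stops the program, but it never hands back, or scribbles over, neighboring memory.

```rust
fn main() {
    // Hypothetical toy calendar: one String of entries per day of November.
    let mut november: Vec<String> = vec![String::from("(no entries)"); 30];
    november[1] = String::from("Nov 2: dentist, 9 am"); // day 2 lives at index 1

    // Reading a valid day works as expected.
    println!("{}", november[1]);

    // Reading "November 42": .get() returns None instead of whatever bytes
    // happen to sit past the end of the buffer.
    match november.get(41) {
        Some(entries) => println!("{entries}"),
        None => println!("No such day in November"),
    }

    // Writing to "November 42": get_mut() also refuses, so nearby memory
    // (a stored password, say) can never be silently overwritten.
    if let Some(slot) = november.get_mut(41) {
        slot.push_str("birthday party");
    } else {
        println!("Can't add an event to a day that doesn't exist");
    }

    // Even the careless form, november[41], doesn't read out of bounds:
    // Rust checks the index at runtime and panics instead.
}
```

In C or C++, the equivalent array access compiles without complaint and simply reads or writes whatever sits at that offset; in safe Rust, the worst case for a plain `november[41]` is a bounds-check panic that halts the program rather than leaking or corrupting data.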
Another common type of memory safety bug, known as “use-after-free,” involves a situation where a program has given up its claim to a portion of memory (maybe you deleted all your calendar entries for October 2022) but mistakenly retains access. If you later request data from October 17, the program may be able to grab whatever data has ended up in that freed space. The existence of memory safety vulnerabilities in code also introduces the possibility that a hacker could craft, say, a malicious calendar invitation with a strategically chosen date or set of event details designed to manipulate memory in a way that grants the attacker remote access.
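The use-after-free case is where Rust’s ownership rules do the heavy lifting. The sketch below is again invented for illustration (the `october` vector and `entry` reference aren’t from any real program): once the memory holding October’s entries is released, the compiler refuses to build any code that still tries to read it.

```rust
fn main() {
    // Hypothetical data: October's entries, which the user is about to clear.
    let october: Vec<String> = vec![String::from("Oct 17: flu shot, 10 am")];
    let entry = &october[0]; // borrow one entry from the vector

    println!("{entry}"); // fine: October's data is still alive here

    drop(october); // the memory backing October's entries is freed

    // println!("{entry}"); // use-after-free attempt: uncommenting this line
    // makes the program fail to compile, because the borrow checker won't
    // let `october` be freed while `entry` still refers into it.
}
```

In a language without these checks, the same sequence compiles cleanly, and the dangling reference reads whatever has since been written into that freed memory, which is exactly the opening an attacker needs.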