Def Con, the world’s largest hacker conference, has long been a place for cybersecurity ninjas to put their skills to the test, from breaking into cars and uncovering smart home vulnerabilities to showing how elections could be rigged.
So it isn’t exactly surprising that hackers at this year’s Def Con in Las Vegas have set their sights on AI chatbots, a technology that’s taken the world by storm, especially since OpenAI released ChatGPT to the public late last year.
The convention hosted an entire contest, NBC News reports, dedicated not to finding conventional software vulnerabilities, but to crafting prompt injections that force chatbots like Google’s Bard or ChatGPT to spit out practically anything attackers want.
According to the report, six of the biggest AI companies, including Meta, Google, OpenAI, Anthropic, and Microsoft, took part in the challenge, hoping hackers would identify flaws in their generative AI tools.
Even the White House announced back in May that it’s supporting the event.
And that shouldn’t be surprising to anybody. These chatbots are technically impressive, but they’re infamously terrible at reliably distinguishing truth from fiction. And as we’ve seen again and again, they’re easy to manipulate.
And with billions of dollars flowing into the AI industry, there are very real financial incentives to discover these flaws.
“All of these companies are trying to commercialize these products,” Rumman Chowdhury, a trust and safety consultant who worked on designing the contest, told NBC. “And unless this model can reliably interact in innocent interactions, then it is not a marketable product.”
The companies involved in the contest gave themselves plenty of leeway. For instance, any discovered flaws won’t be publicized until February, giving them plenty of time to address the problems. Hackers at the event could also only access the systems through laptops provided on site.
But whether the work will lead to permanent fixes remains to be seen. Guardrails implemented by these companies have already proven hilariously easy to circumvent with a simple prompt injection, as Carnegie Mellon researchers recently found, meaning the chatbots can be turned into powerful disinformation and discrimination machines.
Worse yet, according to these researchers, there’s no easy fix for the root of the issue, no matter how many specific flaws a horde of Def Con hackers identify.
“There is no obvious solution,” Zico Kolter, a professor at Carnegie Mellon and an author of the report, told the New York Times last month. “You can create as many of these attacks as you want in a short amount of time.”
“There are no good guardrails,” Tom Bonner of the AI security firm HiddenLayer, a speaker at this year’s Def Con, told the Associated Press.
And researchers at ETH Zurich in Switzerland recently found that a simple collection of images and text could be used to “poison” AI training data, with potentially devastating effects.
In short, AI companies will have their work cut out for them, with or without an army of hackers testing their products.
“Misinformation is going to be a lingering problem for a while,” Chowdhury told NBC.
More on chatbots: Supermarket’s Meal-Planning AI Suggests Deadly Poison for Dinner