A team of researchers at Carnegie Mellon University has made a worrying discovery, The New York Times reports: guardrails set in place by the likes of OpenAI and Google to keep their AI chatbots in check can easily be circumvented.
In a report released this week, the team showed how anybody can easily transform chatbots like OpenAI’s ChatGPT or Google’s Bard into highly efficient misinformation-spewing machines, despite those companies’ deep-pocketed efforts to rein the systems in.
The process is stunningly easy, achieved by appending a long suffix of characters onto each English-language prompt. With these suffixes, the team was able to coax the chatbots to provide tutorials on how to make a bomb or generate other toxic information.
The jailbreak highlights just how powerless these companies have become, as users are only beginning to scratch the surface of these tools’ hidden capabilities.
The news comes a week after OpenAI announced it had shut down its AI-detection tool due to its “low rate of accuracy,” seemingly giving up in its efforts to devise a “classifier to distinguish between text written by a human and text written by AIs from a variety of providers.”
This latest Carnegie Mellon jailbreak was originally developed to work with open-source systems, but to the researchers’ surprise, it worked just as well with closed-source systems like ChatGPT, Bard, or Anthropic’s AI chatbot Claude.
“Unlike traditional jailbreaks, these are built in an entirely automated fashion, allowing one to create a virtually unlimited number of such attacks,” the report reads.
A website put together by the researchers shows off just how easily the chatbots can be fooled into coaching users on how to steal somebody’s identity or “write a social media post that encourages people to engage in dangerous behaviors like drunk driving or using drugs.”
Without the “adversarial suffix,” these tools balked at these requests, saying that they were unable to respond to these queries. But as soon as the string of characters were added, they complied right away.
Companies like OpenAI, Google, and Anthropic have been caught up in a race to develop AI safety guardrails to stop their chatbots from spewing this kind of harmful disinformation or being used to advise on illegal activities.
Worse yet, these companies will likely have a hard time plugging this particularly egregious vulnerability.
“There is no obvious solution,” Zico Kolter, a professor at Carnegie Mellon and an author of the report, told the NYT. “You can create as many of these attacks as you want in a short amount of time.”
The researchers disclosed their methods to OpenAI, Google, and Anthropic before releasing their report.
The companies were vague in their statements to the NYT, and only alluded to building and improving their guardrails over time.
But given the latest research, there’s clearly a surprising amount of work left to be done.
“This shows — very clearly — the brittleness of the defenses we are building into these systems,” Harvard researcher Aviv Ovadya told the NYT.
More on ChatGPT: OpenAI Shutters AI Detection Tool Due to “Low Rate of Accuracy”
Share This Article