CAPTCHAs are officially over.
CAPTCHA Real Smooth
Researchers have found that bots are shockingly good at completing CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which are those small, annoying puzzles designed — ironically — to verify that you’re really human.
In fact, as a team led by Gene Tsudik at the University of California, Irvine discovered, bots are actually far better and faster at solving these tests than we are, a worrying sign that the already-aging tech is on its way out.
As detailed in a yet-to-be-peer-reviewed paper, the researchers found that despite CAPTCHAs having “evolved in terms of sophistication and diversity” over roughly two decades, techniques to “defeat or bypass CAPTCHAs” have also vastly improved.
“If left unchecked, bots can perform these nefarious actions at scale,” the paper reads.
“We do know for sure that [the tests] are very much unloved. We didn’t have to do a study to come to that conclusion,” Tsudik told New Scientist. “But people don’t know whether that effort, that colossal global effort that is invested into solving CAPTCHAs every day, every year, every month, whether that effort is actually worthwhile.”
Turing Guessed
The researchers found that 120 of the 200 most popular websites used CAPTCHAs to verify that users were human. They then asked 1,400 participants with varying levels of tech savviness to complete a total of 14,000 of these CAPTCHAs and compared their accuracy to that of bots designed to defeat the puzzles.
The major takeaway: CAPTCHA-beating bots created by researchers over the years soundly beat the human participants, not only in speed but in accuracy as well. Human accuracy ranged from 50 to 84 percent, while bots boasted a stunning 99.8 percent accuracy.
“There’s no easy way using these little image challenges or whatever to distinguish between a human and a bot anymore,” co-author Andrew Searles, also a researcher at UC Irvine, told New Scientist.
Recent progress in the development of machine learning has given the bots a huge leg up. In fact, OpenAI’s GPT-4 was even able to fool a human into solving a CAPTCHA on its behalf earlier this year.
“In general, as a concept CAPTCHA has not met the security goal, and currently is more an inconvenience for less determined attackers,” Shujun Li at the University of Kent, UK, who was not involved in the study, told New Scientist, adding that we need “more dynamic approaches using behavioral analysis.”
More on CAPTCHAs: Uh Oh, OpenAI’s GPT-4 Just Fooled a Human Into Solving a CAPTCHA