In a high-tech collegiate nightmare, a student was falsely accused of using artificial intelligence to cheat on a paper and then forced to defend her name during a school investigation.
As Rolling Stone reports, Louise Stivers, a graduating senior at the University of California Davis, had a paper flagged by plagiarism-checking software Turnitin, which landed her in hot water with the school’s administration.
“I was, like, freaking out,” she told the magazine.
With no outside support, Stivers was forced to defend herself and prove that she hadn't used an AI chatbot to write her paper, adding to the stress of finishing her last semester of school. She said her grades even began to slip as a result.
The incident highlights the significant flaws of tools like Turnitin, and how they are increasingly being used to falsely accuse students of using AI to cheat in the classroom.
“It was definitely very demotivating,” the 21-year-old political science student told Rolling Stone, calling the entire debacle a “huge waste of time” that could have been spent “doing homework and studying for midterms” — and working on her applications to law school to boot.
Stivers later learned, as Rolling Stone notes, that she wasn’t even the only UC Davis student wrongfully accused of cheating based on AI-detection software.
Just a few days before Stivers was subjected to her academic integrity review earlier this year, USA Today published a similar story centering on senior history major William Quarterman, whose professor failed him and wrongfully accused him of plagiarism after running his work through the AI-detection tool GPTZero. As a result, Quarterman had to go through the same academic integrity process as Stivers.
The proximity and similarity of their cases ended up being a boon for Stivers, however, as Quarterman and his father gave her a “lot of advice” and helped her navigate Davis’ confusing academic review process.
“When you’re applying to law school, it’s a lot of pressure to keep up your GPA,” she said. “It’s just not fun to have to figure out the school’s complicated academic integrity policies while doing classes.”
The reality is that tools like Turnitin and GPTZero aren’t very good at what they were designed to do. In our own testing earlier this year, we found that GPTZero fell far short of the mark. Even ChatGPT maker OpenAI’s own detection app failed to reliably tell human-written from AI-generated text.
Eventually, school administrators admitted to Stivers that Turnitin's AI detection tool was in beta testing and that the school had gotten "early access" to it. The software company acknowledges on its website that although the tool is purportedly 98 percent accurate, it does produce "false positives" — and a company representative asked the magazine to have Stivers report her experience as feedback.
Similarly, OpenAI notes on its website that its own tool "isn't always accurate" and "should not be the sole piece of evidence" when deciding whether a document was generated with AI.
Although she was eventually cleared by the UC Davis administration, Stivers said that the investigation remains on her record and that she’ll have to self-report it to law schools and state bar associations.
The school, for its part, has not apologized, according to Stivers. In short, she's left on the hook for the software's mistake — and it could affect her career.
It’s a worrying reality. With academics making use of deeply flawed AI plagiarism-detecting apps, more students will likely be wrongfully accused in the near future — which could have devastating consequences for them.
More on AI in academia: Professor Falsely Accuses Students of Cheating Because ChatGPT Told Him To