They tried.
Crisis Mode
It’s no secret that OpenAI’s gangbusters chatbot ChatGPT has become the bane of educators trying to get their pupils to turn in some honest work. Cheating will never go away, but chatbots make it easier and more tempting than ever.
The thing about being a cheat, though, is that a good one has to be careful to cover their tracks, something that some lazy students, happy to let an AI do all the work for them, don’t appear to be bothering with.
“I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,'” Timothy Main, a writing professor at Conestoga College in Canada, told The Associated Press.
“I’ve caught dozens,” he said. “We’re in full-on crisis mode.”
No Easy Answers
So far, a one-size-fits-all deterrent has eluded educators, and a game of cat and mouse has ensued.
One option that gained traction in the months following the chatbot boom was AI detectors. But the more people used them, the clearer it became that they weren’t reliable, in many cases flagging human-written prose as AI-generated. OpenAI released its own detection tool in February, which performed so dismally that it was scrapped months later.
That more or less leaves the matter to each educator’s own judgment. Last semester, Main logged 57 cases of students cheating, about half of which involved AI. But AI plagiarism can be harder to weed out, he says, since the text a chatbot spits out is unique and won’t match anything in a plagiarism checker’s database.
Another approach: asking questions that ChatGPT can’t answer. According to Bill Hart-Davidson, an associate dean at Michigan State University’s College of Arts and Letters, educators could instead give students error-filled descriptions of a topic and ask them to correct the mistakes.
“Asking students questions like, ‘Tell me in three sentences what is the Krebs cycle in chemistry?’ That’s not going to work anymore, because ChatGPT will spit out a perfectly fine answer to that question,” he told the AP.
The Paper Route
Going analog could also cut down on AI-facilitated cheating, at least for work completed in class, like tests.
“There is going to be a big shift back to paper-based tests,” Bonnie MacKellar, a computer science professor at St. John’s University, told the AP.
“I hear colleagues in humanities courses saying the same thing: It’s back to the blue books,” she added.
MacKellar, fearing plagiarized programming, requires students in her intro courses to write their code on paper, which seems like a pretty draconian measure given the subject matter.
It’s not a great situation for students, either, who will have to endure their professors being insufferable about this stuff. One complained to the AP about having to repeatedly rewrite papers so they wouldn’t get flagged as AI-generated. And that, we have to say, does suck.
More on AI: Author Annoyed to Find Amazon Selling AI-Generated Books Under Her Name