This is a disgrace.
AIvory Tower
Between those AI-generated images of a supersized rat dick and now papers blatantly written by ChatGPT or other chatbots, the academic community clearly has a huge AI problem.
As 404 Media reports, AI-generated papers are being passed off not only in low-quality academic journals, but in some reputable ones as well — highlighting not just the increasing prevalence of AI, but also longstanding issues with quality, admissions standards, and pay-to-play business structures in academia.
Over the past few weeks, researchers have taken to X-formerly-Twitter to showcase the problem, which is easily revealed by searching Google Scholar, the search giant’s academic search engine, for giveaway phrases like “As of my last knowledge update” and “I don’t have access to real-time data.”
In one post by the account Life After My Ph.D., numerous papers containing the phrases were on full display.
It gets worse. Apparently if you search “as of my last knowledge update” or “i don’t have access to real-time data” on Google Scholar, tons of AI generated papers pop up. This is truly the worst timeline. pic.twitter.com/YXZziarUSm
— Life After My Ph.D. (@LifeAfterMyPhD) March 18, 2024
Mixed Bag
Some of the journals highlighted in the post appear to be predatory, as folks in academia call journals that will publish just about anything for the right amount of money — but others, it seems, aren’t as clearly full of crap.
As 404 notes, at least one paper seems to be so blatantly copy-pasted from a chatbot that the people who submitted it to the respected chemistry journal Surfaces and Interfaces, which published the article after peer review, didn’t even take out the chatbot’s introduction.
A screenshot of the paper, posted by Bellingcat researcher Kolina Koltai, shows that the article — titled “The three-dimensional porous mesh structure of Cu-based metal-organic-framework – aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries” — includes in its introduction the phrase “Certainly, here is a possible introduction for your topic,” which sounds a whole lot like an algorithmically polite response to an AI prompt if there ever was one.
In an interview with 404‘s Emanuel Maiberg, journal editor and Boston College physics professor Michael J. Naughton said that the publication is “addressing” the paper, which as of press time was still up on ScienceDirect, the Elsevier research database where Surfaces and Interfaces lives.
When Futurism ran a similar search on Google Scholar, some of the results seemed primarily to be either using the OpenAI chatbot as a “co-author” or otherwise demonstrating its failings as a research and writing tool — but plenty of other results made it seem almost certain that academics had lazily used AI to cook up verbiage.
The presence of these papers, especially at respectable journals, seems to signal that AI has infiltrated academia even more fully than we already thought — and until journals start enforcing their own standards, it will keep muddying the waters of what are supposed to be our culture’s bastions of intellect.
More on chatbots: Lazy High School Teachers Using ChatGPT for Grading