Scientist Horrified as ChatGPT Deletes All His Research

ChatGPT may be an excellent tool when your strongly-worded email to your landlord about that ceiling leak needs a second pair of eyes. It also excels at producing a rough first draft of non-mission-critical writing, which you can then carefully pick apart and refine.

But like all of its competitors, ChatGPT is plagued by well-documented shortcomings, from rampant hallucinations to a sycophantic tone that can easily lull users into gravely mistaken beliefs.

In other words, it’s not exactly a tool anybody should rely on to get important work done — and that’s a lesson University of Cologne professor of plant sciences Marcel Bucher learned the hard way.

In a column for Nature, Bucher admitted he’d “lost” two years’ worth of “carefully structured academic work” — including grant applications, publication revisions, lectures, and exams — after turning off ChatGPT’s “data consent” option.

He disabled the feature because he “wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data.”

But to his dismay, the chats disappeared without a trace in an instant.

“No warning appeared,” Bucher wrote. “There was no undo option. Just a blank page.”

The column was met with an outpouring of schadenfreude on social media, with users questioning how Bucher had gone two years without making any local backups. Others were enraged, calling on the university to fire him for relying so heavily on AI for academic work.

Some, however, did take pity.

“Well, kudos to Marcel Bucher for sharing a story about a deeply flawed workflow and a stupid mistake,” Heidelberg University teaching coordinator Roland Gromes wrote in a post on Bluesky. “A lot of academics believe they can see the pitfalls but all of us can be naive and run into this kind of problems!”

Bucher is the first to admit that ChatGPT can “produce seemingly confident but sometimes incorrect statements,” arguing that he never “equated its reliability with factual accuracy.” Nonetheless, he “relied on the continuity and apparent stability of the workspace,” using ChatGPT Plus as his “assistant every day.”

The use of generative AI in the scientific world has proven highly controversial.

Scientific journals are being flooded with poorly sourced AI slop, turning the process of peer review into a horror show, as The Atlantic reported this week. Entire fraudulent journals are popping up to capitalize on authors trying to get their AI slop published. The result? AI slop being peer-reviewed by AI models, further entrenching the pollution of the scientific literature.

Scientists, for their part, are constantly being notified that their work has been cited in new papers, only to discover that the referenced material was entirely hallucinated.

To be clear, there’s zero evidence that Bucher was in any way trying to pass off AI slop to his students or get dubious, AI-generated research published.

Nonetheless, his unfortunate experience with the platform should serve as a warning sign to others.

In his column, Bucher accused OpenAI of selling subscriptions to its ChatGPT Plus service without ensuring “basic protective measures” to stop years of his work from vanishing in an instant.

In a statement to Nature, OpenAI clarified that chats “cannot be recovered” after being deleted, and challenged Bucher’s claim that there was “no warning,” saying that “we do provide a confirmation prompt before a user permanently deletes a chat.”

The company also helpfully recommended that “users maintain personal backups for professional work.”

More on scientific AI slop: The More Scientists Work With AI, the Less They Trust It
