Hack Job
If you’re going to use AI to write for you, and if you also happen to answer to an editor, don’t. It’s not just lazy, it’s plain stupid, because a keen reader can always sniff out the BS.
Case in point: Aaron Pelczar, a green reporter at Wyoming newspaper the Cody Enterprise, tried to get away with using AI to do his job — and now he’s out of one after a competitor exposed his fraud, The New York Times reports.
Pelczar resigned on August 2, just two months after he started at the newspaper. Investigators discovered that he wasn’t just using a large language model to write the body of his articles, but to fabricate entire direct quotes, too, an egregious lapse of journalistic integrity in the age of AI.
“There were some weird patterns and phrases that were in his reporting,” CJ Baker, a staff writer at rival local paper the Powell Tribune who broke the story, told the NYT.
On the Scent
Maybe a more conniving fraudster could’ve kept the charade up for longer. The fabricated quotes — including some attributed to government agencies and even the state’s governor — read stiffly, and according to Baker, sounded more like the stuff put in news releases than what a person would say aloud.
And the thing about quotes is that they’re usually attributed to people. So Baker “went back and started checking on quotes that appeared in this reporter’s stories that had not appeared in other publications or in press releases or elsewhere on the web,” he told the NYT — and found seven people who said they had never spoken to Pelczar.
After Baker shared his findings with the Cody Enterprise, the paper launched an investigation, leading to Pelczar’s resignation.
Cody Enterprise editor Chris Bacon issued an apology in the paper’s editorial on Monday.
“I apologize, reader, that AI was allowed to put words that were never spoken into stories,” Bacon wrote.
Automated Fakery
Fraud in journalism predates AI, but the technology’s capabilities potentially make fake reporting easier and more tempting than ever. Because if there’s anything chatbots are good at, it’s quickly churning out large amounts of text and confidently making up facts.
But the temptation doesn’t just apply to individual reporters who are in over their heads: entire publications have deceptively leveraged large language models, too. Last year, for example, we caught Sports Illustrated publishing entire AI-generated product reviews under fake bylines.
Understandably, AI’s place in the newsroom remains a fraught topic. Beyond the existential threat it poses to the industry, its use could also undermine the reputation of publications.
“There’s just no way that these tools can replace journalists,” Alex Mahadevan at the Poynter Institute, a journalism think tank, told the NYT. “But in terms of maintaining the trust with the audience, it’s about transparency.”
More on AI: The Atlantic’s Staff Is Furious About Its Deal With OpenAI