Insider has given its reporters the green light to use OpenAI’s ChatGPT for its reporting, as long as they don’t plagiarize or misconstrue any facts in the process — while acknowledging, strangely, that ChatGPT has been known to both plagiarize and fabricate.
“AI at Insider: We can use it to make us faster and better,” reads the subject line of an internal email to employees from Insider’s global editor-in-chief Nich Carlson, screenshots of which were shared to Twitter by Semafor media reporter Max Tani. “It can be our ‘bicycle of the mind.'”
Per his email, Carlson is clearly a big supporter of the tech, telling his employees that the bot can be used for tasks ranging from background research to generating SEO-friendly metadata, headlines, and article outlines to defeating writer’s block — his “bicycle of the mind” metaphor presumably arguing that ChatGPT is really just a helpful tool for getting from point A to point B faster.
“I’ve spent many hours working with ChatGPT, and I can already tell having access to it is going to make me a better global editor-in-chief for Insider,” Carlson wrote in the email. “My takeaway after a fair amount of experimentation with ChatGPT is that generative AI can make all of you better editors, reporters, and producers, too.”
And yet, despite the editor’s apparent enthusiasm, the green light to incorporate AI-generated text into day-to-day workflow was drenched with warnings and caveats about AI-generated pitfalls.
While urging his employees to use the bot for background research, for example, Carlson noted that “generative AI can introduce falsehoods into the copy it produces,” warning further that due to hallucinations as well as bias, journalists “cannot trust generative AI as a source of truth.”
“ChatGPT is not a journalist,” he added elsewhere. “ChatGPT can be helpful for research and brainstorming, but it often gets facts wrong.”
So, in other words: use the bot for research, but comb every inch of that research for fabrications and bias. Forgive us, but we’re not sure that process makes research any more efficient. At least with a search engine, you get source attribution. With a chatbot, you generally don’t, and if the bot does provide citations, they’re often made up. (Interestingly, Carlson suggested elsewhere that his writers should “ask AI to explain tricky, unfamiliar concepts,” which, based on these specific warnings about fabrications, feels a little contradictory.)
Carlson also urged his staff to avoid cribbing any text from previously published content, correctly writing that “generative AI may lift passages from other people’s work and present it as original text. Do not plagiarize!”
A fair warning, given that CNET’s AI-generated articles ran into that exact problem. Lessons learned for everyone, it seems.
“Please do not use [ChatGPT] to write for you. We know it can help you solve writing problems. But your stories must be completely written by you,” the Insider EIC continued in the email. “You are responsible for the accuracy, fairness, originality, and quality of every word in your stories.”
If anything, Carlson’s email is yet another reminder that AI-assisted journalism is here. Time, of course, is money; if AI makes workflows faster, it’s probably safe to expect more publications to introduce text-generating AI.
And on that note, Carlson certainly seems a little nervous about snoozing — and, as a result, potentially losing out — on AI integration.
“A tsunami is coming,” the EIC told Axios. “We can either ride it or get wiped out by it.”
And while it’s good, in a way, to see Carlson acknowledge the AI’s potential hazards, it’s pretty hard to ignore just how many caveats the editor made sure to attach. In some situations, ChatGPT may well prove to be a “bicycle of the mind” — but would you ride a bike with that many warning labels?
More on AI-generated journalism: BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes