“It was shocking and frustrating and disappointing.”
Switcheroo
Writers at the Democrat & Chronicle, a Rochester, New York daily newspaper owned by the American publishing giant Gannett, are outraged after discovering that their owner quietly updated contract language to allow seemingly unlimited use of AI in “news content,” Digiday reports.
In early April, according to Digiday, Chronicle writers received a version of the employment contract that included precise language spelling out where and how AI could be used at the newspaper.
AI “may be used to generate news content that is supplementary to local news reporting,” read the original clause, which the Newspaper Guild of Rochester has since claimed was settled language following contract negotiations, “and is not a replacement for it.”
But later that month, when Chronicle writers received an updated draft, they discovered that this highly specific language had been whittled down to the bone. (Per Digiday, Chronicle journalists say they received no explicit notice from Gannett of the surprise change.)
AI “may be used to generate news content,” read the new version.
This new language, of course, is far broader, and would seemingly give Gannett the leeway to generate entire articles with AI rather than confining the technology to the assistive, supplementary use cases the initial contract language stipulated.
Put simply, it’s a massive change to make. And rest assured, the human journalists at the Chronicle are not happy.
“It was shocking and frustrating and disappointing,” Justin Murphy, an education reporter at the Chronicle and a Newspaper Guild of Rochester member, told Digiday.
We’ve been alluding to how @Gannett has changed settled language in our contract throughout negotiations. Today we’ve got an example to share, relating to AI.
At left is what the company proposed 4/11, and we accepted.
At right is what they snuck in without notice last week. pic.twitter.com/JrKfG3h7dA
— Newspaper Guild of Rochester (@rocnewsguild) May 2, 2024
Track Record
As Gannett is no stranger to AI scandal, the Chronicle staff’s frustrations aren’t exactly surprising.
Back in August 2023, it emerged that Gannett was using AI to churn out embarrassingly bad writeups of regional high school sports scores, a scandal that ultimately ended with Gannett pausing the effort and issuing massive corrections. Then in October, writers at USA Today, which Gannett also owns, accused the company of publishing AI-generated content under fake bylines, an accusation further corroborated by a Futurism investigation into the media contractor that supplied that content.
In short, Gannett doesn't have a great track record on responsible AI use. And because AI scandals like Gannett's have been known to cause reputational harm to publications, it makes sense that the media giant's writers, to say nothing of its readers, want a say in how AI is used in their newsrooms.
“Even though our unit is small, the kind of terms we get in our contract are extremely important,” Murphy told Digiday, “not just for other newsrooms across Gannett, but for journalists across America.”
More on AI and media: During Huge Demo, Google’s AI Provides Idiotic Advice about Photography That Would Destroy Film