Remember last year, when we reported that the Red Ventures-owned CNET had been quietly publishing dozens of AI-generated articles that turned out to be filled with errors and plagiarism?
The revelation kicked off a fiery debate about the future of the media in the era of AI — as well as an equally passionate discussion among editors of Wikipedia, who needed to figure out how to treat CNET content going forward.
“CNET, usually regarded as an ordinary tech [reliable source], has started experimentally running AI-generated articles, which are riddled with errors,” a Wikipedia editor named David Gerard wrote to kick off a January 2023 discussion thread in Wikipedia’s Reliable Sources forum, where editors convene to decide whether a given source is trustworthy enough for editors to cite.
“So far the experiment is not going down well, as it shouldn’t,” Gerard continued, warning that “any of these articles that make it into a Wikipedia article need to be removed.”
Gerard’s admonition was posted on January 18, 2023, just a few days after our initial story about CNET’s use of AI. The comment launched a discussion that would ultimately result in CNET’s demotion from its once-strong Wikipedia rating of “generally reliable.” It was a grim fall that one former Red Ventures employee told us could “put a huge dent in their SEO efforts,” and also a cautionary tale about the wide-ranging reputational effects that publishers should consider before moving into AI-generated content.
“Let’s take a step back and consider what we’ve witnessed here,” a Wikipedia editor who goes by the name “bloodofox” chimed in. “CNET generated a bunch of content with AI, listed some of it as written by people (!), claimed it was all edited and vetted by people, and then, after getting caught, issued some ‘corrections’ followed by attacks on the journalists that reported on it,” they added, alluding to the time that CNET’s then-Editor-in-Chief Connie Guglielmo — who now serves as Red Ventures’ “Senior Vice President of AI Edit Strategy” — disparagingly referred to journalists who covered CNET’s AI debacle as “some writers… I won’t call them reporters.”
But CNET wasn’t the only outlet caught up in the scandal, which became another point of concern for the Wikipedia editors. Futurism discovered similar content over at Red Ventures’ other sites Bankrate and CreditCards.com, and one former employee alleged that disclosure-free AI content was being published across Red Ventures’ vast portfolio of higher education-focused websites.
This wasn’t the first time that a Red Ventures site had faced reliability concerns on Wikipedia. The Red Ventures-owned sites Healthline and The Points Guy both currently sit on Wikipedia’s spam blacklist, the former due to publishing misinformation and the latter for questionable relationships with the credit card companies it covers. That didn’t go unnoticed by the Wikipedia volunteers.
“According to the reporting we’ve seen so far,” bloodofox’s passionate diatribe continued, Red Ventures “evidently implemented these tools and approaches throughout their portfolio but won’t say exactly where or how. And why should we believe anything this company says? Red Ventures has not been remotely transparent about any of this — the company could at best be described as deceitful — and the company runs a big stable of SEO-focused content mills across its ecosystem just like what we’re seeing on post-acquisition CNET.”
“It’s worth looking into how we’re using properties that they own as sources,” the editor added.
The revelations kept coming. In early February, The Verge alleged that Red Ventures personnel had violated CNET’s editorial ethics by pushing the publication’s staff to be more favorable to its advertisers — another blow to the embattled publication’s already-fractured public image.
By then, CNET and Bankrate had both “paused” their AI efforts and issued extensive corrections. But the damage was already done, at least in the Wikipedia editors’ eyes. By mid-February, the editors had concluded that anything published by CNET after its 2020 sale to Red Ventures could no longer be considered “generally reliable,” and thus should be taken with a hefty grain of salt.
Further, they concluded, anything published on CNET between November 2022 and January 2023 should be considered “generally unreliable.” CNET’s human journalists were surely still doing high-quality work during that window, but in the Wikipedia editors’ view, the egregiousness of the AI effort negated the publication’s credibility as a whole.
“In November 2022, CNET began deploying an experimental AI tool to rapidly generate articles riddled with factual inaccuracies and affiliate links, with the purpose of increasing SEO rankings,” reads the notice that now accompanies CNET’s entry in Wikipedia’s source guidelines. “More than 70 finance-related articles written by the AI tool were published under the byline ‘CNET Money Staff’, and Red Ventures issued corrections to over half of them amidst mounting pressure. CNET has since announced it would pause the use of its AI tool ‘for now’, but concerns over its advertiser-driven editorial content remain unresolved.”
Those guidelines also provide a striking table that sums up the site’s view of CNET: that it was reliable until it was acquired by Red Ventures, unreliable for the period it was caught using AI, and that since 2020 it has suffered a “deterioration in editorial standards.”
It wouldn’t be the last time that Wikipedia editors would address CNET or Red Ventures’ reliability. This year, following a discussion about the state of the Red Ventures-owned consumer tech site ZDNET, an editor known as “Chess” opened a new thread to address the reliability of Red Ventures’ overall portfolio. Citing the AI saga at CNET and Bankrate, the alleged editorial ethics breaches, and more, Chess argued that Wikipedia should consider knocking every Red Ventures-owned website down a trustworthiness peg.
“We shouldn’t repeatedly put the onus on editors to prove that Red Ventures ruined a site before we can start removing it; they can easily buy or start another,” Chess argued in their opener, published on January 24, reasoning that the quality of content seems secondary to Red Ventures’ SEO-focused business model. “I think we should look at the common denominator here, which is Red Ventures, and target the problem (a spam network) at its source.”
Some editors quickly took Chess’ side. As one cosigner, an editor who goes by “The Kip,” wrote a few hours later: “Between the AI-generated and often blatantly inaccurate content, as well as the SEO/sales/marketing-oriented output, and the decisions previously made regarding CNET and The Points Guy, a fairly easy blanket [change to generally unreliable].”
“It’s high time for it. Enough is enough,” a familiar voice, bloodofox, concurred, adding that “if it’s owned by Red Ventures, we need to go ahead and identify it as a hard [Reliable Sources] fail.”
Others, however, weren’t convinced.
“Frankly, categorizing an entire outlet as unreliable because one writer or one editor craps the bed is an overreaction,” replied an editor dubbed “JPxG,” adding that “doing so because someone at a different outlet owned by the same parent company crapped the bed is medieval.” JPxG later argued that such a system would impart a “guilty until proven innocent” standard of preemptive punishment.
It’s true that ownership changes, however frustrating or unfortunate, are a reality of the media world, and staff typically have no input in them. Red Ventures purchased CNET for $500 million in 2020, following the merger of CBS, which had bought CNET for $1.8 billion back in 2008, with Viacom. That a given publication’s reputation for producing reliable information could be compromised solely by a sale to a new owner would be a sweeping condemnation.
Still, Red Ventures is far from a passive overlord. Its executives have touted the power of AI with a near-fanatical zeal.
“From here on out,” CEO Ric Elias told company employees in a July 2023 all-hands meeting that Futurism obtained audio of, “we are going to become AI.”
At least at CNET, that commitment is now sounding pretty wilted.
“CNET is the world’s largest provider of unbiased tech-focused news and advice,” a CNET spokesperson said in an emailed statement about its demotion on Wikipedia. “We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy.”
“Additionally, previous reporting regarding pressure to write favorably about advertisers is false and unfairly affects our staff’s work and reputations,” the spokesperson added. “We stand by the work that we do, the quality of our content and the editorial integrity of our staff. CNET functions as an independent entity within Red Ventures, led by an independent leadership team.”
In the wake of CNET’s AI drama, its staff unionized, citing AI and the threat it poses to their “jobs and reputations.” And to that end, it’s worth reiterating that the Wikipedia editors chose not to demote CNET’s pre-Red Ventures journalism; in a testament to CNET’s legacy as a trustworthy publication, its archives remain “generally reliable” by Wikipedia editors’ standards.
But that tension illustrates the depths of CNET’s more recent wounds. Red Ventures, based on several insider accounts, seems to overwhelmingly value quantity over quality. Ultimately, its attempts to squeeze SEO juice out of CNET snowballed into a disaster that compromised the foundations of a brand that took decades of quality journalism to build.
“It’s infuriating that Red Ventures’ decisions have undermined the quality work done by CNET’s writers, editors and producers,” the CNET Media Workers Union told us in a statement, noting that they were unaware of the changes to the outlet’s reliability rating on Wikipedia. “That’s why we’re fighting for a union contract with specific language to protect our bylines, codify editorial standards and implement specific guardrails around AI.”
Of course, Wikipedia editors aren’t the ultimate authority on what qualifies as good journalism. But their demotion of CNET is a cautionary tale for other media owners looking to roll out AI in newsrooms.
Hopefully, CNET can claw its way back into the Wikipedia editors’ good graces. Whether that’s possible under Red Ventures’ umbrella, though, remains to be seen. Adding a further wrinkle, Red Ventures is now reportedly exploring a sale of CNET, but is having trouble finding a buyer due to concerns around the AI debacle.
“Our staff is dedicated to repairing the damage caused by management,” the CNET union’s statement continued, “and to restoring CNET’s reputation as a trustworthy and reliable site.”
More on CNET: CNET’s Publisher Having Trouble Selling It Due to AI Scandal