Fake bylines. Content farming. Affiliate fees. What happens when private equity takes over a storied news site and milks it for clicks?
Every morning around 9AM ET, CNET publishes two stories listing the day’s mortgage rates and refinance rates. The story templates are the same every day. Affiliate links for loans pepper the page. Average rates float up and down day by day, and sentences are rephrased slightly, but the tone — and content — of each article is as consistent as clockwork. They are perfectly suited to being generated by AI.
The byline on the mortgage stories is Justin Jaffe, the managing editor of CNET Money, but the stories aren’t listed on Jaffe’s actual author page. Instead, they appear on a different author page that only contains his mortgage rate stories. His actual author page lists a much wider scope of stories, along with a proper headshot and bio.
CNET is the subject of a swirling controversy around the use of AI in publishing, and it’s Jaffe’s team that’s been at the center of it all. Last week, Futurism reported that the website had been quietly publishing articles written using artificial intelligence tools. Over 70 articles have appeared with the byline “CNET Money Staff” since November, but an editorial note about a robot generating those stories was only visible if readers did a little clicking around.
It wasn’t just readers who were confused about which stories on CNET involved the use of AI. Beyond the small CNET Money team, few people at the outlet know specific details about the AI tools — or the human workflow around them — that outraged readers last week, according to current and former staffers who spoke to The Verge on the condition that they remain anonymous. Under the two-year-old management of a private equity company called Red Ventures, CNET’s editorial staff has often been left wondering: was this story written by AI or a co-worker? Even today, they’re still not sure.
Daily mortgage rate stories might seem out of place on CNET, slotted between MacBook reviews and tech news. But for CNET parent company Red Ventures, this SEO-friendly content is the point.
CNET was once a high-flying powerhouse of tech reporting that commanded a $1.8 billion purchase price when it was acquired by CBS in 2008. Since then, it has fallen victim to the same disruptions and business model shifts as the rest of the media industry, resulting in CBS flipping the property to Red Ventures for just $500 million in 2020.
Red Ventures’ business model is straightforward and explicit: it publishes content designed to rank highly in Google search for “high-intent” queries and then monetizes that traffic with lucrative affiliate links. Specifically, Red Ventures has found a major niche in credit cards and other finance products. In addition to CNET, Red Ventures owns The Points Guy, Bankrate, and CreditCards.com, all of which monetize through credit card affiliate fees. The CNET AI stories at the center of the controversy are straightforward examples of this strategy: “Can You Buy a Gift Card With a Credit Card?” and “What Is Zelle and How Does It Work?” are obviously designed to rank highly in searches for those topics. Like CNET, Bankrate and CreditCards.com have also published AI-written articles about credit cards with ads for opening cards nestled within. Both Bankrate and CreditCards.com directed questions about the use of AI to Lance Davis, the vice president of content at Red Ventures; CNET’s disclosure also included Davis as a point of contact until last week.
This type of SEO farming can be massively lucrative. Digital marketers have built an entire industry on top of credit card affiliate links, from which they then earn a generous profit. Various affiliate industry sites estimate the bounty for a credit card signup to be around $250 each. A 2021 New York Times story on Red Ventures pegged it even higher, at up to $900 per card.
Viewed cynically, it makes perfect sense for Red Ventures to deploy AI: it is flooding the Google search algorithm with content, attempting to rank highly for various valuable searches, and then collecting fees when visitors click through to a credit card or mortgage application. AI lowers the cost of content creation, increasing the profit for each click. There is not a private equity company in the world that can resist this temptation.
The problem is that there’s no real reason to fund actual tech news once you’ve started down that path.
On CNET senior editor Rae Hodge’s last day, she sent a goodbye email to hundreds of her co-workers imploring them to look more skeptically at their AI co-workers. Her email began with a screenshot of a ChatGPT-generated resignation letter. “I am writing this letter using AI-generated content,” the note reads. “While I may not have personally composed these words, I hope they convey the sincere appreciation I have for my colleagues and the work we have done together.”
In the email, obtained by The Verge, Hodge goes on to direct colleagues to ask pointed questions of several Red Ventures executives, saying that unattributed AI-written content was being sent to subscribers of a cybersecurity email newsletter. What’s worse, the newsletters had errors in them that “could cause direct harm to readers,” Hodge wrote in the email.
A former CNET employee says that Red Ventures was using automated technology for content long before the AI byline began cropping up in November. They say a tool called Wordsmith — nicknamed “Mortgotron” internally because of its use in mortgage stories — has been used for at least a year and a half.
But the siloed nature of the teams across CNET and Red Ventures has made it difficult for journalists at the site to understand the chain of command — in particular, who’s using what tools and when. Those who knew of the AI tool and its uses say that the workflow was so unclear, they sometimes couldn’t distinguish between AI-written stories and articles written by colleagues.
“It’s used most often in relation to relaying updated mortgage and refinance rates,” said one source who was familiar with the tool. “I was told that it was always effectively a bot writing these stories.”
But even though some on staff knew automation tools were part of the workflow, the scope of their use was unclear to colleagues whose bylines were appearing on the same site. The former staffer says that by the time stories were published on the site, they didn’t always know if AI tools were involved in the production.
“Sometimes the Money writers write like they’re bots, too, and they’re regular humans,” a former employee says. “The quality of writing is nearly indistinguishable. That does not make it good.”
But the robot articles published on CNET don’t need to be “good” — they need to rank highly in Google searches so lots of people open them and click the lucrative affiliate marketing links they contain.
CNET staff was notified last fall that some articles would be written by AI, but by the time they found out, several stories had already been published on other Red Ventures websites, according to one staffer. Those sites also lack clarity around what exactly AI is being used for. On Bankrate, one article originally published in May was bylined by a human writer; it’s now been updated to list an AI author. The content of the story, though, is the same.
“I don’t know that it was announced in any kind of grand way,” a CNET staffer told The Verge. “It just sort of showed up.”
The justification for the tool given to staff, multiple people say, was that it was a way to generate content that would take human writers longer — handling the “dull SEO-friendly topics” or making sure that legal requirements for writing about finance are met. It was sold as a way to free up staff time so they could do more thoughtful work. Instead, several staffers have departed since November, and morale is low at the outlet after several rounds of layoffs, according to former employees.
Reached for comment, Red Ventures refused to answer any questions about the AI tools it uses, the types of content it generates, or how it disclosed the practices to readers. Instead, an unnamed spokesperson directed The Verge to CNET editor-in-chief Connie Guglielmo’s note defending the use of AI tools at the outlet.
The use of AI in journalism has a much longer history than CNET’s recent experiments. The Associated Press was an early adopter, announcing in 2014 that it would start using Wordsmith to produce short articles about companies’ earnings reports. In 2016, it expanded its automated coverage to include sports reporting, and it now partners with another AI writing firm called Data Skrive for this content.
As with CNET, The Associated Press frames its use of AI as a way to “free journalists to do more journalism and less data processing.” The stories it’s automated are high-volume and formulaic, work that bores journalists but is a necessary backbone for wire services like the AP. The AP labels some stories as automated with a footer noting the use of automated tools to create the story, but a reader may not understand what it means that the story was created “using technology.”
Red Ventures’ experiments with AI content reflect improvements in the world of AI since 2014. A new breed of AI language models is able to easily generate text that is more coherent and covers a wide array of subjects. Studies have found that humans are unable to consistently distinguish between text written by humans and the latest AI systems, leading to exactly the sort of confusion CNET sources have described. Although Red Ventures has refused to offer more details about the tools it’s using, job descriptions of the company’s staff suggest it is indeed tapping the latest generation of technology and applying it widely.
The company’s “content head for all AI and automated solutions” for educational websites, Kevin Hughes, says on LinkedIn that he uses not only Wordsmith but also OpenAI’s GPT series “to generate programmatic SEO and bespoke AI content across a dozen websites, generating millions of dollars in revenue.” Hughes lists a number of Red Ventures websites that he’s worked on, including bestcolleges.com, nursejournal.org, and cyberdegrees.org.
With improvements in AI language models over the past few years, experts have warned about potential malicious use cases. Some of the more exotic include automated propaganda and influence operations, but the more prosaic include mass-produced spam and marketing copy. In 2021, Fabian Langer, the founder of an AI writing startup named AI Writer, told The Verge how his tools were already being used to fill “SEO farms” with content. Said Langer: “For these [SEO] farms, I do not expect that people really read it. As soon as you get the click, you can show your advertisement, and that’s good enough.”
The cheapness and ease with which these tools can generate content has led some to predict that this writing could slowly take over the web, polluting search results and social media with text designed only to push someone to a specific product or website.
Red Ventures has shown interest in AI products beyond using them at the news sites it owns. Last year, Red Ventures led a $10.6 million fundraising round for Rephrase.ai, a generative AI company that produces sets of customized videos based on one original clip of a person speaking.
Internally, there has been unease among CNET staff at their corporate owners’ use of artificial intelligence — though staff was assured the current test is limited in scope. But layoffs and restructuring, coupled with the lack of clarity on the use of new tools, are causing some to worry about what the creep of AI signals for the venerated site so many journalists were drawn to.
“I don’t lay any blame at CNET’s or its masthead’s feet,” one former staffer says. “This is all due to the machinations of the greater Red Ventures machine, and its desire to squeeze blood from a stone.”
After multiple rounds of layoffs last year, dozens of people lost their jobs, from audience and copy teams to CNET cars staff. Entire teams were decimated, one former staffer says, and people continue to leave “in droves,” fearing more layoffs are around the corner.
The departure email sent by Hodge acknowledges the good work done by her CNET co-workers and warns of the road ahead with regard to journalistic integrity and editorial standards.
“It pains me to leave this outlet with the knowledge that those colleagues’ battle to maintain and strengthen the editorial credibility of CNET will continue to be one fought uphill,” she wrote.
The vision of a dark future where robots swallow up jobs is a common refrain in journalism. But a former staffer says more familiar tactics to boost margins — like the layoffs that have gutted teams at CNET — are top of mind for remaining employees.
“They do not fear AI more than they fear the numerous layoffs Red Ventures has insisted upon,” a former employee says. “Everyone at CNET is more afraid of Red Ventures than they are of AI.”