CNET Was Treating Staff Like Robots Long Before Publishing AI-Generated Articles

When I returned to work from short-term disability leave, my editor at the CNET Money team asked if I had any new ideas. 

I did. It was January 2022, and the US Department of Labor had just published its Employment Situation Report. Among its findings: millions of people were out of work because their employers had closed or lost business due to the pandemic. I wanted to interview some of those people and write a timely, incisive piece about the human and financial toll of COVID.

Over Zoom, my editor shifted in his chair, and I could see the wheels turning in his head. Great idea and initiative, he said finally, but had I heard that Bank of America had cut its overdraft fees? Great for its customers, right? Could I cover that instead? 

I had anticipated the deflection. I poured myself three fingers of gin and churned out the article. It appeared the next day on CNET’s website as a stenographic retelling of a corporate press release — grafted onto a cornucopia of affiliate links to banks, including Bank of America, each of which pays CNET a lucrative kickback if a reader opens an account.

After months of persistent doubt, I’d had enough. I resigned three days later.

As a former member of the CNET Money team, I’ve found it harrowing to watch the site get busted using a little-tested and poorly disclosed AI system to churn out error-ridden and plagiarized content. As the scandal unfolded, CNET’s editor-in-chief Connie Guglielmo rationalized that “AI engines, like humans, make mistakes.” But she failed to address what those mistakes might cost in reader trust, or how exactly the bot’s relentless and egregious errors slipped past the human editors she insists were reviewing the AI-generated content.

The reality is that readers’ ability to stomach AI’s presence in journalism will likely depend on their definition of quality. 

“Whenever anyone says anything about AI in journalism, listen carefully because they’re probably telling you what they think AI is and what they think journalism should be,” said Mike Ananny, a professor of communication and journalism at the University of Southern California, adding that “if you think that journalism is what lets us create politics, solve injustices, and drive public life, then you need to hold journalistic AI to incredibly high standards that go beyond merely analyzing or producing information.”

Red Ventures seemingly confirmed its stance shortly after Guglielmo’s statement, when CNET’s sister site, Bankrate, ran another AI-generated article riddled with errors. When Futurism reached out with questions, the piece quickly disappeared from the site.

CNET never told me I’d be cranking out affiliate bait. 

When I interviewed for the job in the spring of 2021, the listing was for a “versatile writer who can cover the intersection of personal technology and personal finance” by demystifying complicated financial topics, interviewing industry experts, and contributing to videos and podcasts. I’d spent years breaking into the journalism industry, picking up bylines at the New York Times and the Washington Post, and the job seemed like a perfect opportunity to do ambitious reporting about both the human and technical sides of the complex and ever-shifting finance industry.

Leadership at the company encouraged that idea, sharing a coherent plan for CNET Money that would closely align with its News team’s reporting standards. While basic explainers would be a small part of my job, I was told the site’s finance coverage would aim to rival CNBC’s, with substance reminiscent of The Atlantic’s acclaimed Technology section. I accepted the position in May and thought I’d found my professional home.

Alas, the foundational cracks appeared early. My first assignment was an affiliate-heavy explainer on how to use Amazon’s zero-interest payment options for Prime Day. I had never used the credit card or the payment options I was tasked with recommending, yet the card’s disclosure statement was deemed sufficient sourcing. When I suggested we add language about how to use payment plans responsibly, my editor told me not to worry; we’d cover that topic in a different article.

Mortgage and credit card explainers filled my days in the months that followed, and the grand vision for ambitious journalism never materialized. I amassed over 100 pitches in a file — the effects of COVID-related unemployment, childcare costs, rising poverty rates in underserved communities — and occasionally trotted them out to be shot down by my editor. 

I was also living in a kind of personal hell. By the time I took disability leave in late September, my leg had been broken for over a year, a fact CNET executives were aware of when they offered me the position. I shuffled in and out of operating rooms as doctors attempted to coax my femur back together. Red Ventures HR was initially supportive and encouraged me to file a short-term disability claim with its insurance partner. But when I extended my leave in October, the insurer started maneuvering to avoid paying my claim. HR reassured me they would look into it, but seemed uninterested in getting involved beyond a cursory email.

By January, the nature of the company’s business model had revealed itself. After years of being gaslit amid my health struggles, I recognized the red flags as every journalistically important pitch took a backseat to affiliate-laced content.

As I downed the last sip of gin and filed the Bank of America article, I wondered: what was the point of staying? My disability benefits still hadn’t been paid, and my experience trying to secure them through Red Ventures’ insurer had been horrible.

Plus, I wasn’t confident in CNET’s editorial goals or standards, and I felt misled about the job I’d been hired for in the first place. Where was the drive to surpass CNBC’s coverage or The Atlantic’s deep reporting? Had those goals been real, or were they a bait-and-switch narrative designed to lure qualified writers onto a profitable assembly line on CNET’s factory floor?

I had no interest in finding out. I sent my resignation to HR, Guglielmo, and my editor. 

Life improved after leaving CNET

My leg healed, and I spent the summer relearning to walk and playing with my son in Washington’s North Cascades. With the help of a lawyer, I eventually got Red Ventures’ insurer to pay out what I was owed.

By the beginning of 2023, I had signed freelance contracts with the Seattle Times, Insider, and a corporate consulting firm. I was outlining a story one afternoon when the first emails came in.

SUBJECT: CNET AI CONTENT—time to talk?

SUBJECT: CNET plagiarism

SUBJECT: Still employed at CNET? Interview request

Sorry, what?

I tried to figure out what was going on, and quickly landed on Futurism’s story exposing CNET’s use of AI to churn out the types of finance stories I used to write. The Verge’s damning coverage of Red Ventures’ financial model soon followed, and I found myself staring at my old CNET bio page.

I hadn’t visited it since quitting, and I was surprised to find I was still listed as a current employee. Below my bio was a contact form that encouraged readers to email me with questions.

The journalists who reached out had pointed questions. How much did I know about CNET’s AI use? Had my colleagues or I used AI to draft or edit articles? Was my work even original? 

I forwarded the journalists’ emails to my lawyer and spent the next several hours engaged in a mortifying task: plugging my own CNET work into plagiarism detection software and comparing it to the original drafts on my hard drive. 

Further reporting worried me even more: CNET had allegedly been using AI to rewrite existing work, and some staffers said they weren’t even sure which material published on the site had been written by AI and which by humans.

I soon realized that as long as my byline remained on the website, it was likely only a matter of time before CNET started butchering my work with AI, further threatening my credibility. My lawyer agreed and encouraged me to “apply pressure.” 

I emailed Guglielmo and Red Ventures HR and asked them to remove all mention of my name from the site. A familiar silence stretched into the next day, so I drafted another message and included a cross-section of the company’s leadership. No response, but “Sarah Szczypinski, Staff Writer” transformed to “Sarah Szczypinski, Former Staff Sriter.” I emailed again; “Sriter” reverted to “Writer” shortly after. Still, the bio page and contact form remained. 

I pondered Red Ventures’ carelessness, arrogance, and apparent willingness to scrap nearly three decades’ worth of credibility in exchange for short-term profit. I thought about the journalists who still worked there, many of whom would have plenty to say about the AI debacle, but were also limited by their non-competes if they resigned in protest. Based on my experience, I knew Red Ventures would never publicly acknowledge the professional damage its reckless AI was doing to the reputation of its current and former human reporters. 

I called my lawyer and told her to prepare for the next step. Then I sent a final email to CNET:

Sriter to writer! Progress. Now, contact form. Get my name off the articles tainted by the suspicion they were written and edited by AI. 

Do you know what it took for me to become a journalist? The student loans? Did you read my cover letter about leaving poverty, my resignation letter about medical debt, or the 49 ideas I brought to my interview? Do you feel the rage at the audacity of throwing my name into this cesspool you’ve created for profit? My career is my identity, and I’ll protect it like my own child. You’re the ones who put up the ‘trust’ disclaimer; I will make that hypocrisy hurt. Tick fucking tock. 

That finally did it. The bio page and contact form disappeared overnight, and the byline on my work now reads simply “CNET Staff.” HR emailed the next morning to say “thank you for reaching out.”

Less than an hour later, Guglielmo’s statement defending the AI appeared online. 

“Expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we’re known for,” she wrote. “The process may not always be easy or pretty, but we’re going to continue embracing it — and any new tech that we believe makes life better.”

More on CNET: CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors
