An AI Site Ripped Off Our Reporting About AI Ripoffs

Today in AI ouroboros vertigo: our reporting about AI rip-offs was ripped off by a spammy AI site.

In November, we published a story revealing that the legacy American magazine Sports Illustrated had published articles bylined by fake, AI-generated authors. Afterward, on a platform dubbed “Toolify.ai,” an outfit called “Curiosity” published two separate AI-generated articles — one titled “The Controversy of AI-Generated Authors: Sports Illustrated’s Shocking Revelation,” the other titled “The Scandal Unveiled: AI-Generated Authors Exposed in Sports Illustrated” — rehashing our report’s findings without credit.

“This article delves into the controversy surrounding AI-generated authors and explores the case of Sports Illustrated, a renowned magazine,” reads one of the posts, “where AI-generated authors were published without disclosure or acknowledgment.”

The creators of these articles even threw in an accompanying YouTube video in which a tinny, seemingly AI-generated voice reads our reporting word-for-word. (The YouTube account is just a plagiarism engine that has verbatim-copied many Futurism articles.) And speaking of publishing things without disclosure or acknowledgment: Futurism is neither named nor linked in either article. We are mentioned at the very end of the YouTube video, but only because the AI narrator read a disclaimer we'd included at the foot of our report.

Of course, this sort of plagiarized spam is far from uncommon. NewsGuard has been closely tracking the rise of AI content farms like this, which are already raking in ad revenue; in other concerning news, a recent, not-yet-peer-reviewed paper from researchers at Amazon Web Services found that lower-resource language areas of the internet are already overrun by low-quality AI content.

And while much of this AI-spawned sludge likely won't gain much traction in search and news algorithms, that's not always the case. A recent 404 Media report found that Google News is struggling to keep AI-generated swill out of its results.

In this case, according to Ahrefs, Toolify has a domain authority score of about 57. That’s pretty good as far as SEO goes — and as mass-produced AI-generated spam continues to fill the web, it’s exactly this kind of decent-enough-domain-authority junk that stands to overwhelm search engines and generally erode the quality of the web.

The AI-spun versions of our Sports Illustrated reporting are also factually questionable. For example, one of the posts claims that an "internal investigation" found that AI was used to generate the magazine's fake-author-bylined content. An internal investigation would suggest that either Sports Illustrated or its owner, The Arena Group, conducted an inquiry of its own and found wrongdoing. But that's not what happened: our story came out first, and in response The Arena Group quickly denied that AI was used to generate the fake-author-bylined content, citing a questionable "initial investigation."

That this particular AI-swindled article happens to be about AI trickery adds an extra layer of absurdity to the garbled mess the burgeoning AI era threatens to create. And to that end, it feels like a prescient example of one of the many reasons why publishers are seeking new protections and compensation systems as AI models continue to gobble up the work of journalists and artists — not to mention the musings and updates of pretty much anyone who's ever been online — and regurgitate it.

What's more, these are just two of many thousands of articles published under Toolify's "AI News" tab, and other posts wade much further into misinformation territory. This article about Britney Spears' 2023 wedding, for example, makes many deeply speculative claims supporting the TikTok conspiracy theory that the pop star's nuptials were a sham.

But despite making claims such as “Britney Spears’ wedding remains shrouded in controversy and uncertainty” and “the involvement of celebrities like Selena Gomez and Donatella Versace has added to the intrigue, with some suggesting hidden agendas or ulterior motives,” the AI-spun article never offers any legitimate evidence. Rather, its claims appear to be drawn from a YouTube channel that hawks oft-conspiratorial celebrity tabloid tales — so, in other words, not exactly a reliable source. (Indeed, Toolify’s AI strongly appears to be scraping YouTube for content and spinning it into explainer articles en masse.)

That all said, it's not like any of this "news" looks particularly trustworthy. It mostly reads pretty clickbait-y, and we would hope that no one takes anything that Toolify platforms at face value. Still, it's hard to miss the irony — and rest assured, we'll be keeping an eye out for Toolify's very meta remix of this blog. Catch you at the end of the dying web as we know it!

More on AI and the internet: SEO Spammers Are Absolutely Thrilled Google Isn’t Cracking Down on CNET’s AI-Generated Articles
