Last year, OpenAI boasted about a seismic change to its flagship ChatGPT: the chatbot could “now browse the internet to provide you with current and authoritative information, complete with direct links to sources.”
In theory, this is a good idea: AI systems like ChatGPT are notorious for making stuff up and ripping off original authors without giving credit, so it makes sense to show where the AI is pulling its information from.
But in reality, ChatGPT’s sources are often abysmal. When we quizzed ChatGPT, running OpenAI’s GPT-4o model, about current events, for instance, it repeatedly cited a sloppy scam news site that deluges the user with fake software updates and virus warnings.
Asked about the life and death of the late William Goines — a recipient of the Bronze Star and the Navy Commendation Medal who in the early 1960s became the first Black member of the modern-era Navy SEALs — ChatGPT ignored obituaries published by The New York Times and the Washington Post to instead promote an unknown site called County Local News.
If you actually visit the County Local News story recommended by ChatGPT — though we strongly recommend that you don’t — it’ll bring up malicious popups impersonating updates for Adobe Flash Player and other software.
Click the fake update and things get even worse, with the site going full-screen and showing a storm of phony virus notifications using the branding of the antivirus company McAfee.
And if you’re foolish enough to allow notifications from the site, it’ll even start harassing you on your desktop.
In other words, we tried to use ChatGPT as a web-searching news tool — and were sent directly to a scam-ridden, AI-generated slop farm that showed us fake software updates and virus notifications. (ChatGPT also recommended County Local News when asked for information on topics as diverse as the Rodney Vicknair trial and the actress Diane Keaton.)
Mark Stockley, a cybersecurity expert at the anti-malware company Malwarebytes, reviewed County Local News and said the worst-case scenario from these types of malicious notifications is that users could be tricked into downloading a “Potentially Unwanted Program (PUP), a type of software that they probably don’t want, that might be annoying or hard to remove,” adding that the “PUP might be what they meant to download, a download that’s different from the one they were expecting, or additional unwanted downloads alongside the one they were expecting.”
“In the last 18 months, we have seen a huge surge in malicious advertising (malvertising) as a vector for spreading malware,” he told Futurism. “Criminals take out ads on legitimate ad networks to lure people to fake websites and trick them into downloading malware, thinking it’s a legitimate program.”
“Malvertising mimics well-known brands and is extremely hard to spot,” Stockley continued, “and the criminals who do it are able to abuse the ad networks’ sophisticated targeting controls to make sure that people see fake ads for things they are likely to want.”
To a skeptical human reader, County Local News is obviously covered in red flags. Its design is amateurish, and its articles are a word soup of pink slime journalism, sometimes still including chunks of clearly AI-generated responses, like “Norfolk Shooting Update : Please provide more context or clarification for the term ‘identified in’ so that I can generate a relevant response.” It’s even been flagged multiple times by the misinformation watchdog group NewsGuard, which previously discovered OpenAI’s GPT-4 citing a County Local News article pushing false, AI-spun claims related to Israeli Prime Minister Benjamin Netanyahu amid Israel’s ongoing war in Gaza.
But to ChatGPT, this AI-generated chum is apparently a preferable source to the New York Times or the Washington Post — lending County Local News an air of legitimacy among ChatGPT users who trust the bot’s judgment.
And if those users ask ChatGPT to evaluate County Local News’ credibility, it won’t be much help. Ask it to assess the site’s trustworthiness and it sometimes gives a milquetoast answer about how its “reliability can be better assessed by cross-referencing its reports with more established sources and considering the transparency of its editorial practices.” Other times, it warns of the site’s “publication of misleading information and failure to provide proper sourcing for its claims.”
A spokesperson for McAfee, the antivirus company impersonated by the ads, excoriated ChatGPT for pointing users toward scams.
“Scammers are early adopters to trending technology. ChatGPT and similar technologies are seeing massive growth creating opportunities for scammers to profit,” the spokesperson said. “Early indicators seem to suggest that users trust the output of these systems… Without further education, users may find themselves susceptible to misinformation, including scams, that find their way into these systems.”
In response to questions about this story, OpenAI provided a familiar excuse: it’ll fix ChatGPT’s citation problems in the future.
“We are committed to a thriving ecosystem of publishers and creators by making it easier for people to find their content through our tools,” the company said via a spokesperson. “Together with our news publisher partners, we’re building an experience that blends conversational capabilities with their latest news content, ensuring proper attribution — an enhanced experience still in development and not yet available in ChatGPT.”
A week later, ChatGPT was still citing County Local News, which was still showing users fake software updates and fraudulent virus scans.
More on AI search tools: There’s Something Deeply Wrong With Perplexity