Once again, Google is failing to keep fake, AI-generated imagery out of its top search results.
It’s been more than three decades since China’s infamous Tiananmen Square Massacre, and the “Tank Man” photograph, the striking image of a lone protester standing in front of a column of Type 59 tanks, remains one of the most iconic and enduring artifacts of the government’s brutal crackdown on civilians. But already, it appears that AI is rewriting the photograph’s decades-long history: google “Tank Man” lately, and you might be met with a fake, AI-generated “Tank Man Selfie” as the platform’s top, featured search result.
To be clear, it doesn’t seem like the image was created as an intentional piece of misinformation. It would be one thing if whoever made it had included the AI-generated photo in, say, a blog post about Tiananmen Square. But they didn’t. As 404 Media, which first reported the story, notes, the AI-generated fake was originally posted to the subreddit r/midjourney about six months ago. The user who posted it, who goes by the fitting handle Ouroboros696969, never pretended that the image was real; after all, they posted the photo to a community dedicated specifically to sharing AI-generated imagery, a point that Ouroboros696969 reiterated earlier today after another Redditor accused them of “spreading misinformation” on Google.
“Bro take it up with Google,” Ouroboros696969 wrote in response to the accusation. “I posted this image to a SUBREDDIT SPECIFICALLY FOR AI ART YOU DUNCE.”
Gotta say: Ouroboros696969 makes a good case. Intentions aside, though, when an image like this makes its way into a coveted top spot in Google’s search overview, which functions as a de facto TL;DR for the web’s vast search results, it can fairly be argued that it becomes misinformation.
Google isn’t just a passive information reservoir. Through its algorithms, search overviews, and featured search snippets, it also functions as an information organizer. When there’s a flaw in the way that information is organized (say, when Google Search somehow decides that content posted to a forum where people exclusively share fake, AI-generated images is useful, quality material), the context around that information can get badly muddled.
It’s worth noting that the search giant hasn’t banned synthetic content from its search rankings entirely, stating in its AI policy, as 404 points out, that “Google’s ranking systems aim to reward original, high-quality content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness.” This, Google says, is part of its “focus on the quality of content, rather than how content is produced,” adding that E-E-A-T “is a useful guide that has helped us deliver reliable, high-quality results to users for years.”
“However content is produced, our systems look to surface high-quality information from reliable sources, and not information that contradicts well-established consensus on important topics,” Google states elsewhere in its ranking system policy. “On topics where information quality is critically important — like health, civic, or financial information — our systems place an even greater emphasis on signals of reliability.”
Forgive us, but “Tank Man Selfie,” pulled from a repository of fake images that just about anyone can post to, doesn’t exactly seem to qualify as E-E-A-T-level material. And to that end, the sheer lack of nuance that Google’s algorithms displayed here doesn’t inspire much confidence that the search giant’s systems will be able to parse more complex cases of phony AI material.
Google does appear to be fixing the problem. In our testing, the selfie only showed up on certain browsers and devices, as if the algorithms were grappling with a search demotion in real time. When we reached out, a spokesperson for Google confirmed that action had been taken to “remove the image from the Search feature,” as its “policies for this feature don’t allow inaccurate content on public interest topics like this.”
“On Google Search, we build our systems to show helpful and high-quality information, while giving users the tools that they need to make sense of what they find online,” the spokesperson added. “Given the scale of the open web, however, it’s possible that our systems might not always select the best images regardless of how those images are produced, AI-generated or not.”
Even so, the game of cat-and-mouse that Google seems to be playing with AI-generated material in its search results feels dysfunctional. We’re still on the cusp of generative AI’s impact on our digital world and the information ecosystems that run through it, and if Google’s algorithms can’t keep up, the usability of the entire web will surely suffer.
More on AI-generated content: Microsoft Publishes Garbled AI Article Calling Tragically Deceased NBA Player “Useless”