Photography by Chris Welch / The Verge; AI-altered images made with Google’s Magic Editor.
An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely fucking fake.
Anyone who buys a Pixel 9 — the latest model of Google’s flagship phone, available starting this week — will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out on others in the near future. When a smartphone “just works,” it’s usually a good thing; here, it’s the entire problem.
Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin’s photographic purges of IRL-purged comrades.) But it would be disingenuous to say that photographs have never been considered reliable evidence. Everyone who is reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital photo manipulation, or more recently, a deepfake — these were potential deceptions to take into account, but they were outliers in the realm of possibility. It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph. Fake was the exception, not the rule.
If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully articulate. There was never any need to spell out why these photos matter, why they are so pivotal, why we put so much value in them. Our trust in photography ran so deep that when we did spend time discussing the veracity of images, the point that needed belaboring was that photographs could sometimes be fake.
This is all about to flip — the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside?
And up until now, the onus has largely been on those denying the truth of a photo to prove their claims. The flat-earther is out of step with the social consensus not because they do not understand astrophysics — how many of us actually understand astrophysics, after all? — but because they must engage in a series of increasingly elaborate justifications for why certain photographs and videos are not real. They must invent a vast state conspiracy to explain the steady output of satellite photographs that capture the curvature of the Earth. They must create a soundstage for the 1969 Moon landing.
We have taken for granted that the burden of proof is upon them. In the age of the Pixel 9, it might be best to start brushing up on our astrophysics.
For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless — an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider, for instance, that the last decade has seen extraordinary social upheaval in the United States sparked by grainy videos of police brutality. Where the authorities obscured or concealed reality, these videos told the truth.
The persistent cry of “Fake News!” from Trumpist quarters presaged the beginning of this era of unmitigated bullshit, in which the impact of the truth will be deadened by the firehose of lies. The next Abu Ghraib will be buried under a sea of AI-generated war crime snuff. The next George Floyd will go unnoticed and unvindicated.
You can already see the shape of what’s to come. In the Kyle Rittenhouse trial, the defense claimed that Apple’s pinch-to-zoom manipulates photos, successfully persuading the judge to put the burden of proof on the prosecution to show that zoomed-in iPhone footage was not AI-manipulated. More recently, Donald Trump falsely claimed that a photo of a well-attended Kamala Harris rally was AI-generated — a claim it was only possible to make because people were able to believe it.
Even before AI, those of us in the media had been working in a defensive crouch, scrutinizing the details and provenance of every image, vetting for misleading context or photo manipulation. After all, every major news event comes with an onslaught of misinformation. But the incoming paradigm shift implicates something much more fundamental than the constant grind of suspicion that is sometimes called digital literacy.
Google understands perfectly well what it is doing to the photograph as an institution — in an interview with Wired, the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection and becomes a mirror of it. And as photographs become little more than hallucinations made manifest, the dumbest shit will devolve into a courtroom battle over the reputation of the witnesses and the existence of corroborating evidence.
This erosion of the social consensus began before the Pixel 9, and it will not be carried forth by the Pixel 9 alone. Still, the phone’s new AI capabilities are of note not just because the barrier to entry is so low, but because the safeguards we ran into were astonishingly anemic. The industry’s proposed AI image watermarking standard is mired in the usual standards slog, and Google’s own much-vaunted AI watermarking system was nowhere in sight when The Verge tried out the Pixel 9’s Magic Editor. The photos that are modified with the Reimagine tool simply have a line of removable metadata added to them. (The inherent fragility of this kind of metadata was supposed to be addressed by Google’s invention of the theoretically unremovable SynthID watermark.) Google told us that the outputs of Pixel Studio — a pure prompt generator that is closer to DALL-E — will be tagged with a SynthID watermark; ironically, we found the capabilities of the Magic Editor’s Reimagine tool, which modifies existing photos, were much more alarming.
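For a sense of how little protection file metadata offers, here is a minimal sketch in Python using the Pillow imaging library. The filenames are hypothetical, and it assumes the edit tag lives in the file’s metadata rather than in the image content itself:

```python
from PIL import Image  # pip install pillow

# Hypothetical filename for a photo modified with Reimagine.
img = Image.open("reimagine_edit.jpg")

# Copy only the pixel data into a fresh image. EXIF, XMP, and any other
# file-level tags (including an AI-edit marker) do not come along.
pixels_only = Image.new(img.mode, img.size)
pixels_only.putdata(list(img.getdata()))
pixels_only.save("no_metadata.jpg")
```

A watermark like SynthID, by contrast, is embedded in the pixels themselves and would survive a rewrite like this, which is exactly why its absence from Reimagine’s outputs matters.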
Google claims the Pixel 9 will not be an unfettered bullshit factory but is thin on substantive assurances. “We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so,” Alex Moriconi, Google communications manager, told The Verge in an email. “That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”
The policies are what you would expect — for example, you can’t use Google services to facilitate crimes or incite violence. Some attempted prompts returned the generic error message, “Magic Editor can’t complete this edit. Try typing something else.” (You can see throughout this story, however, several worrisome prompts that did work.) But when it comes down to it, standard-fare content moderation will not save the photograph from its incipient demise as a signal of truth.
We briefly lived in an era in which the photograph was a shortcut to reality, to knowing things, to having a smoking gun. It was an extraordinarily useful tool for navigating the world around us. We are now leaping headfirst into a future in which reality is simply less knowable. The lost Library of Alexandria could have fit onto the microSD card in my Nintendo Switch, and yet the cutting edge of technology is a handheld telephone that spews lies as a fun little bonus feature.
We are fucked.