The Pixel 8 and the what-is-a-photo apocalypse

With the Pixel 8, Google has turned the question of ‘what is a photo’ right on its head.

A bay blue Pixel 8 Pro in hand. Photo by Allison Johnson / The Verge

One of the first known photo fakes, a portrait of Abraham Lincoln, was made just decades after the dawn of photography itself. Since then, photographers have found themselves in endless arguments about what truly constitutes a photo — what’s real, what’s fake, and when is editing too much? Now, as we head into an era where AI-powered tools are everywhere and easily accessible, the discussion is going to be messier than ever. And with the Pixel 8, Google has turned the question of “what is a photo” right on its head.

Google has been leading smartphone photography down this path for many years now. The company pioneered the concept of computational photography, where smartphone cameras do a huge amount of behind-the-scenes processing to spit out a photo that contains more detail than the camera sensor can detect in a single snap. Most modern smartphones use a system like Google’s HDR Plus technology to take a burst of images and combine them into one computationally created picture, merging highlights, shadows, details, and other data to deliver a more pristine photo. It’s accepted practice at this point, but it also means that a baseline smartphone photo is already more than just “a photo” — it’s many of them, with their best parts combined.
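Google’s exact HDR Plus pipeline isn’t public, but the core idea is easy to sketch. The Python toy below is purely illustrative (the synthetic scene, noise levels, and tone curve are my assumptions, not Google’s algorithm); it shows how averaging an aligned burst suppresses noise before a tone map compresses the result into one viewable image:

```python
# A toy illustration of burst merging, the idea behind HDR Plus-style
# computational photography. This is NOT Google's pipeline (real systems
# align frames, reject motion, and merge raw sensor data), but it shows
# how several noisy captures become one cleaner "photo."
import numpy as np

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Average an aligned burst of exposures to suppress sensor noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)  # noise falls off roughly as 1/sqrt(N)

def tone_map(image: np.ndarray) -> np.ndarray:
    """Simple global tone map: compress highlights, lift shadows."""
    normalized = image / image.max()
    compressed = normalized / (1.0 + normalized)  # Reinhard-style curve
    scaled = compressed * 255 / compressed.max()
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Simulate a burst: one underlying scene, fresh sensor noise per frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1024, size=(120, 160))
burst = [scene + rng.normal(0, 50, scene.shape) for _ in range(8)]

photo = tone_map(merge_burst(burst))
```

Real pipelines do far more, but the principle stands: the finished photo is a composite of many exposures, not a single capture.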

The Pixel 8 lineup complicates things further by transforming how much a photo can be changed after the picture is snapped. It offers easy-to-use editing tools powerful enough to create a completely different image from the one you recorded when you hit the shutter button, and those tools are marketed as integral parts of the phone and camera. Photo editing tools have existed since the beginning of photography, but the Pixel 8 blurs the line between capture and editing in new and important ways.

This starts with Magic Eraser, a two-year-old feature that Google has overhauled with generative AI for the Pixel 8 Pro. The original version could remove unwanted items from images by “blending the surrounding pixels” — that is, taking what’s already there and smudging it to hide small objects and imperfections. This upgraded version “generates completely new pixels” using generative AI, according to Google hardware leader Rick Osterloh; the result is no longer simply your photo but your photo plus some AI-assisted painting. In one example, Google showed how the tool could seamlessly remove an entire car and fill in details like wooden slats behind it. In another image, Google used the new Magic Eraser to basically Thanos snap two people into oblivion and fill in the horizon behind them.
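For a sense of what “blending the surrounding pixels” means in practice, classical inpainting algorithms do exactly that. Here’s a minimal sketch using OpenCV’s Telea inpainting (the filename and mask coordinates are hypothetical); it fills a masked region by propagating nearby colors inward, which is roughly the old Magic Eraser’s category of trick:

```python
# A rough analogue of the original Magic Eraser's approach: filling a
# masked region from the pixels around it using classical inpainting.
# Google's actual model is proprietary; this is only an illustration.
import cv2
import numpy as np

image = cv2.imread("beach.jpg")             # hypothetical photo
mask = np.zeros(image.shape[:2], np.uint8)
mask[200:260, 340:420] = 255                # region covering the stray object

# Telea's method propagates nearby colors and gradients into the hole:
# "blending the surrounding pixels," not inventing new content.
cleaned = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("beach_cleaned.jpg", cleaned)
```

The generative version replaces that propagation step with a model that synthesizes plausible new content, which is why it can fill in structures, like wooden slats behind a removed car, that were never in the frame at all.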

The Pixel 8 also debuts a reality-defying tool called Best Take, which tries to solve the problem of somebody blinking in a photo by letting you swap in their face from another recent image. It looks like it might work well: based on what I saw in our tests at Google’s event, it can pull off some seamless face swaps.

And then there’s the big one: Magic Editor. First announced at Google I/O in May, Magic Editor uses generative AI to help you adjust entire parts of the photo in some dramatic ways. You can move a person so that they are in a better position just by tapping and dragging them around. You can resize that person with a pinch. You can even use Magic Editor to change the color of the sky.

Where Magic Eraser and Best Take are more about “correcting” photos — fixing blinks and strangers wandering through — Magic Editor fully goes down the road of “altering” a photo: transforming reality from an imperfect version into a much cooler one. Take two examples from a Google video. In one, somebody edits a photo of a dad tossing a baby in the air to move the baby higher. Another shows somebody leaping for a slam dunk at a basketball hoop, then erasing the bench they used to get the height for the jump.

There’s nothing inherently wrong with manipulating your own photos. People have done it for a very long time. But Google’s tools put powerful photo manipulation features — the kinds of edits that were previously only available with some Photoshop knowledge and hours of work — into everyone’s hands and encourage them to be used on a wide scale, without any particular guardrails or consideration for what that might mean. Suddenly, almost any photo you take can be instantly turned into a fake.

There are ways for others to tell when Pixel photos have been manipulated, but they’ll have to go looking for it. “Photos that have been edited with Magic Editor will include metadata,” Google spokesperson Michael Marconi tells The Verge. Marconi adds that “the metadata is built upon technical standards from [International Press Telecommunications Council]” and that “we are following its guidance for tagging images edited using generative AI.”

In theory, that all means that if you see a Pixel picture where the baby seems to be too high in the air, you’ll be able to check some metadata to see if AI helped create that illusion. (Marconi did not answer questions about where this metadata would be stored or if it would be alterable or removable, as standard EXIF data is.) Google also adds metadata for photos edited with Magic Eraser, Marconi says, and this applies to older Pixels that can use Magic Eraser, too.
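Google hasn’t said exactly which fields it writes, but the IPTC standard Marconi cites defines a “digital source type” property with dedicated values for generative AI. Assuming Pixel photos use that property (an assumption on my part, not something Google has confirmed), checking for it could look something like this sketch, which shells out to the widely used exiftool:

```python
# A sketch of how one might check a photo for the IPTC generative-AI tag.
# Whether the Pixel 8 writes exactly this field is an assumption; the IPTC
# "Digital Source Type" property itself is real, and exiftool exposes it
# as XMP-iptcExt:DigitalSourceType. Requires exiftool on PATH.
import json
import subprocess

# IPTC NewsCodes that indicate generative-AI involvement.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def looks_ai_edited(path: str) -> bool:
    """Return True if the image carries an IPTC digital source type
    associated with generative AI."""
    out = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(out.stdout)[0]
    return tags.get("DigitalSourceType") in AI_SOURCE_TYPES

print(looks_ai_edited("baby_toss.jpg"))  # hypothetical file
```

Of course, a check like this only helps if the metadata is still attached to the file.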

Using Best Take does not add metadata to photos, Marconi says, but there are some restrictions on the feature that could prevent it from being used nefariously. Best Take does not generate new facial expressions, and it “uses an on-device face detection algorithm to match up a face across six photos taken within seconds of each other,” according to Marconi. It also can’t pull expressions from photos outside that timeframe; Marconi says Best Take’s source images must carry “metadata that shows they were taken within a 10-second window.”
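That timestamp restriction is simple to picture in code. A toy version of the 10-second-window check (the structure here is illustrative, not Google’s implementation) might look like:

```python
# A toy version of the constraint Marconi describes: Best Take may only
# swap faces between shots taken within a 10-second window. Illustrative
# only; not Google's implementation.
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=10)

def eligible_sources(candidates: list[datetime], anchor: datetime) -> list[datetime]:
    """Keep only shots whose capture time falls within 10 seconds of the
    photo being edited."""
    return [t for t in candidates if abs(t - anchor) <= WINDOW]

anchor = datetime(2023, 10, 4, 12, 0, 0)
shots = [anchor + timedelta(seconds=s) for s in (-30, -4, 2, 8, 45)]
print(eligible_sources(shots, anchor))  # keeps only the -4s, 2s, and 8s shots
```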

Small alterations can unambiguously improve a photo and better define what you’re trying to capture. And groups that care a lot about photo accuracy have already figured out very specific rules about what kinds of changes are okay. The Associated Press, for example, is fine with “minor adjustments” like cropping and removing dust on camera sensors but doesn’t allow red-eye correction. Getty Images’ policy for editorial coverage is “strict avoidance of any modifications to the image,” CEO Craig Peters tells The Verge. Organizations like the Content Authenticity Initiative are working on cross-industry solutions for content provenance, which could make it easier to spot AI-generated content. Google, on the other hand, is making its tools dead simple to use, and while it does have principles for how it develops its AI tools, it doesn’t have guidelines on how people should use them.

Generative AI’s ease of use can be a problem, Peters argued last month in a conversation with The Verge’s editor-in-chief, Nilay Patel. “In a world where generative AI can produce content at scale and you can disseminate that content on a breadth and reach and on a timescale that is immense, ultimately, authenticity gets crowded out,” Peters said. And Peters believes companies need to look beyond metadata as the answer. “The generative tools should be investing in order to create the right solutions around that,” he said. “In the current view, it’s largely in the metadata, which is easily stripped.”
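Peters’ “easily stripped” point is not hypothetical: most image-handling code discards metadata unless it is explicitly carried over. A minimal demonstration with the Pillow library (filenames are hypothetical):

```python
# Demonstrates why metadata-only provenance is fragile: re-saving an
# image's pixels without explicitly passing its EXIF along writes a
# clean file. Filenames are hypothetical.
from PIL import Image

with Image.open("edited_with_ai.jpg") as im:
    print("has EXIF:", bool(im.info.get("exif")))  # original tags present
    laundered = im.copy()

# No exif= argument, so Pillow writes the JPEG without the original tags:
# same picture, no record that AI touched it.
laundered.save("laundered.jpg", quality=95)
```

Same pixels, no provenance, which is exactly the gap Peters is pointing at.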

We’re at the beginning of the AI photography age, and we’re starting off with tools whose edits are simple to make and simple to hide. Google’s latest updates make photo manipulation easier than ever, and I’d guess that companies like Apple and Samsung will follow suit with similar tools, fundamentally changing the question of “what is a photo?” Increasingly, the question will become: is anything a photo?