ChatGPT made generative AI mainstream, and plenty of similar products have launched since OpenAI released it late last year. Generative AI isn’t just about conversing with artificial intelligence to get answers to complex questions in a few lines of dialogue. AI can also generate incredible images that look too good to be true. They might even look so real that we question everything we see online, and deepfakes are only going to improve.
Now that we can create incredible images with AI, we also need protections built into photos that make it harder for someone to use them to create fakes. The first such innovation is here — a software solution from MIT called PhotoGuard. The feature can stop AI from editing your photos in a believable way, and I think such features should be standard on iPhone and Android.
Researchers from MIT CSAIL detailed their innovation in a research paper (via Engadget).
PhotoGuard subtly alters certain pixels in an image in a way that disrupts an AI model’s ability to interpret it. The changes are invisible to humans; the photo looks exactly the same. But the AI can no longer understand what it’s looking at.
When tasked with creating fakes using elements from these protected images, the AI stumbles over the pixel perturbations. As a result, the AI-generated fakes contain obvious artifacts that tell human viewers the image has been altered.
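To get a feel for what “invisible to humans” means here, consider a minimal sketch of the constraint involved, written in PyTorch. The epsilon budget and function name below are my own illustrative assumptions, not PhotoGuard’s actual code, and PhotoGuard optimizes its perturbation adversarially rather than using random noise:

```python
# Minimal sketch (assumed PyTorch): an L-infinity "budget" keeps every
# pixel within a tiny range of its original value, so the change stays
# invisible to the human eye. Random noise here is just a placeholder to
# show the constraint; PhotoGuard crafts the noise adversarially.
import torch

def imperceptible_perturbation(image: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    """Shift each pixel of `image` (values in [0, 1]) by at most epsilon."""
    noise = torch.empty_like(image).uniform_(-epsilon, epsilon)
    return (image + noise).clamp(0.0, 1.0)
```

At an epsilon of a few units out of 255, each pixel moves so little that the eye can’t register the difference, which is why a protected photo looks untouched.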
The video below offers examples of generative AI fakes created from photos of celebrities. With the pixel protections in place, the resulting images are far from perfect; anyone looking at them would be able to tell they aren’t real.
The researchers came up with two protection methods that can thwart the efforts of AI. The “encoder” method tricks the model into perceiving the protected image as an entirely different image, such as a flat gray one. The “diffusion” method goes further, forcing the model to make its edits toward an unrelated target image. In either case, the AI won’t be able to produce a seamless fake.
“The encoder attack makes the model think that the input image (to be edited) is some other image (e.g. a gray image),” Hadi Salman, an MIT doctoral student and the paper’s lead author, told Engadget. “Whereas the diffusion attack forces the diffusion model to make edits towards some target image (which can also be some grey or random image).”
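For the technically curious, here is what the encoder attack might look like in code. This is a minimal sketch, assuming PyTorch and a differentiable image encoder (such as a diffusion model’s VAE); the function names, epsilon budget, and step settings are illustrative assumptions on my part, not the paper’s actual implementation:

```python
# Sketch of an encoder-style attack via projected gradient descent (PGD),
# assuming a differentiable `encoder` (e.g., a diffusion model's VAE).
# Settings are illustrative, not taken from the PhotoGuard paper.
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, target_latent,
                   epsilon=8 / 255, steps=100, step_size=2 / 255):
    """Nudge `image` (pixels in [0, 1]) so the encoder maps it near
    `target_latent` (e.g., the latent of a plain gray image), while
    keeping every pixel within `epsilon` of the original."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encoder(image + delta)
        loss = F.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            # Step toward the target latent, then project the perturbation
            # back into the imperceptibility budget.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return (image + delta).clamp(0.0, 1.0).detach()
```

The key idea is the projection step: the optimization is free to chase the target latent, but every pixel must stay within the imperceptibility budget, which is why the protection remains invisible to humans while derailing the model.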
These sorts of protections aren’t perfect. Taking a screenshot of the image might eliminate the invisible perturbations. Still, this is the kind of feature that Apple and Google should consider adding to the stock camera apps on iPhone and Android, respectively.
Both iPhone and Android let you edit photos after you’ve taken them to achieve the desired effect. Recently, I criticized Google Photos for using AI to essentially let you create fake images.
It’s one thing to edit your own photos to make them look better, and quite another for someone to steal your face from publicly available photos for malicious endeavors involving AI.
For example, future camera and photos apps might include anti-AI modes that you could enable for everything you post on social media.
That’s not to say that iPhone or Android will ever employ this particular PhotoGuard invention from MIT. But this innovation underscores the importance of developing anti-generative AI tools as fast as possible in a world where software can manipulate photos, videos, and voice and deliver believable fakes in just a matter of minutes. Apple and Google have to consider similar protections.
Meanwhile, you can test PhotoGuard yourself at this link.