MIT researchers develop PhotoGuard to protect images from AI manipulation

MIT researchers have developed a new technique called PhotoGuard that can protect images from malicious AI manipulation.

PhotoGuard works by adding imperceptible perturbations to images that disrupt the ability of AI models to edit them. The result is an image that is visually unchanged for human observers but is protected from unauthorized editing by AI models.

PhotoGuard uses “adversarial perturbations” to safeguard images from unauthorized manipulation by generative models such as DALL-E and Midjourney. These subtle changes in pixel values are imperceptible to the human eye but disrupt the computer models that process the image, thwarting the AI’s ability to edit it effectively. The tool offers two attack methods: the “encoder” attack perturbs the AI model’s latent representation of an image so that edits produce irrelevant or unrealistic results, while the stronger “diffusion” attack optimizes the perturbation so that the model’s output resembles a chosen target image, disrupting even the text-prompt conditioning process.
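The encoder attack described above can be sketched in a few lines. The snippet below is a minimal illustration, not PhotoGuard's implementation: it stands in a toy random linear map for the real deep VAE encoder, and all names (`encode`, `encoder_attack`, the dimensions) are hypothetical. It runs projected gradient descent (PGD) under an L-infinity budget to nudge pixels so the "encoder's" latent representation drifts toward a decoy, while keeping every pixel change imperceptibly small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "encoder": a fixed random linear map from pixels to a latent
# vector. Real diffusion models use a deep VAE encoder; this only illustrates
# the mechanics of the attack.
D_PIX, D_LAT = 64, 16
W = rng.normal(size=(D_LAT, D_PIX)) / np.sqrt(D_PIX)

def encode(x):
    return W @ x

def encoder_attack(x0, target_latent, eps=8 / 255, step=1 / 255, iters=200):
    """PGD under an L-infinity budget: push encode(x) toward target_latent
    while keeping |x - x0| <= eps per pixel (visually imperceptible)."""
    x = x0.copy()
    for _ in range(iters):
        residual = encode(x) - target_latent       # latent-space error
        grad = 2.0 * W.T @ residual                # gradient of ||Wx - t||^2
        x = x - step * np.sign(grad)               # signed gradient step
        x = np.clip(x, x0 - eps, x0 + eps)         # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)                   # stay a valid image
    return x

x0 = rng.uniform(0.2, 0.8, size=D_PIX)             # "clean" image
target = rng.normal(size=D_LAT)                    # decoy latent representation
x_adv = encoder_attack(x0, target)

# The perturbation stays within eps per pixel, yet the latent moved
# measurably toward the decoy, so downstream edits work on the wrong content.
print(np.max(np.abs(x_adv - x0)))
print(np.linalg.norm(encode(x0) - target), np.linalg.norm(encode(x_adv) - target))
```

The diffusion attack works the same way in spirit, but back-propagates through the full diffusion process so the *edited output* resembles a target image, which is why the researchers describe it as more powerful and more computationally expensive.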

Hadi Salman, lead author of the paper and a PhD student at MIT, explains that PhotoGuard adds a layer of protection that makes images resistant to manipulation by diffusion models. By introducing imperceptible pixel modifications before uploading an image, users can “immunize” it against modification and potential misuse. In the researchers’ tests, PhotoGuard was effective at preventing diffusion models from editing images and did not significantly degrade image quality.

The work was supported by the U.S. Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF).

The sources for this piece include an article in AnalyticsIndiaMagazine.
