Microsoft, Adobe, and other big names have pledged to add metadata to their AI-generated images so that future compatible apps will flag them up as machine-made using a special symbol.
The goal is to let people see whether a picture was made by a model or a human, and how. The symbol, described as an “icon of transparency,” shows the lowercase letters “cr” inside a speech-bubble-like shape. It was created by the Coalition for Content Provenance and Authenticity (C2PA).
C2PA’s Content Credentials metadata can be attached to any picture, but it is particularly useful for AI-generated images: it records the source of the image, the AI model used to generate it, and the time and date of creation.
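To make the mechanism concrete: in JPEG files, Content Credentials are carried in APP11 segments as JUMBF boxes. The sketch below is an illustrative detector only, assuming that embedding scheme; it walks the JPEG segment list and reports whether any APP11 payload mentions the “c2pa” label. It does not parse or cryptographically verify a manifest, which requires a full C2PA implementation.

```python
# Minimal sketch: detect a C2PA-style APP11 segment in a JPEG byte stream.
# Assumption: Content Credentials are embedded as JUMBF boxes in APP11
# (marker 0xFFEB) segments, labeled "c2pa". Presence of the label is NOT
# verification -- signatures and manifest contents are not checked here.
import struct

def has_c2pa_segment(data: bytes) -> bool:
    """Return True if the JPEG appears to contain a C2PA APP11 segment."""
    if not data.startswith(b"\xff\xd8"):            # must begin with SOI
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                         # lost marker sync
            break
        marker = data[i + 1]
        if marker == 0xD9:                          # EOI: end of image
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 with C2PA label
            return True
        i += 2 + length                             # skip to next marker
    return False
```

A compatible viewer would go further, parsing the JUMBF box to extract the manifest and validate its signature before showing the “cr” badge.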
Microsoft and Adobe have promised to include Content Credentials metadata in the output of their AI image generators at some point in the future. Users of compatible apps will then be able to tell a picture was generated by AI simply by looking for the “cr” symbol.
This is an important step in the fight against deepfakes: AI-generated videos or images designed to pass as real, which can be used to spread misinformation or damage someone’s reputation.
The sources for this piece include an article in The Register.