Apple announced it would introduce a system that scans photos on iPhones in the U.S. for known images of child sexual abuse before the photos are uploaded to its iCloud storage service.
Apple’s new system aims to answer requests from authorities to curb child sexual abuse while maintaining the privacy and security practices that are core principles of the company. However, several privacy advocates said the system could open the door to monitoring political speech and other content on iPhones.
Most other major technology companies, including Google, Facebook and Microsoft, already scan uploaded images against a database of known child sexual abuse images.
Law enforcement officials maintain a database of identified child sexual abuse images and translate them into “hashes” – numerical codes that positively identify an image but cannot be used to reconstruct it.
Apple will use this database with a technology called “NeuralHash,” which is designed to also catch edited images that are very similar to the originals. The database of hashes is stored on iPhones. When a user uploads an image to Apple’s iCloud storage, the iPhone creates a hash of the image and compares it against the database.
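The matching step described above can be sketched in a few lines of Python. This is a greatly simplified illustration, not Apple's actual NeuralHash (which uses a neural network): here a basic "average hash" over an 8x8 grayscale grid stands in for the perceptual hash, and `matches_known_database` stands in for the on-device comparison. All names and the hash scheme are hypothetical.

```python
# Simplified sketch of perceptual-hash matching. Apple's real
# NeuralHash is neural-network based; this "average hash" is only
# meant to show the hash-then-compare flow described in the article.

def average_hash(pixels):
    """Hash an 8x8 grid of grayscale values (0-255) into a 64-bit int.

    Each bit is 1 if that pixel is brighter than the grid's mean, so
    small brightness or compression changes still yield the same hash,
    while the hash cannot be used to reconstruct the image.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches_known_database(image_pixels, known_hashes):
    """Return True if the image's hash appears in the known-hash set."""
    return average_hash(image_pixels) in known_hashes

# Hypothetical usage: at upload time the device hashes the image and
# checks it against the locally stored database of known hashes.
known = {average_hash([[10 * r + c for c in range(8)] for r in range(8)])}
candidate = [[10 * r + c for c in range(8)] for r in range(8)]
print(matches_known_database(candidate, known))
```

In this sketch only the set of hashes ships to the device, mirroring the article's point that the codes identify images without containing them.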
Photos stored only on the phone are not checked. Before an account is brought to the attention of authorities, human reviewers verify that the matches are legitimate; only then is the account suspended.
iPhone users who believe their account has been wrongly blocked can appeal to have it reinstated.
One feature that distinguishes Apple’s system is that it checks photos on the device before they are uploaded, rather than scanning them on the company’s servers afterward.
Several privacy and security experts raised concerns that the system could eventually be extended to scanning phones for political content, among other things.
For more information, read the original story in Reuters.