
Apple finally responds to criticism of controversial iPhone photo-scanning feature

Published Aug 9th, 2021 7:31AM EDT
Image: Christian de Looper for BGR


A few days ago, Apple announced a series of child safety initiatives meant to protect children from online predators. The new features will roll out to iMessage and iCloud Photos soon. One feature should help parents protect their children from abuse and warn younger iOS device users about sensitive content they might receive via iMessage. Separately, Apple will scan photos stored in iCloud Photos to see whether they match known Child Sexual Abuse Material (CSAM). Apple says neither feature breaks end-to-end encryption or user privacy. Still, some people have questioned Apple's new photo-scanning abilities, wondering whether the technologies could be abused to spy on content stored on an iPhone or in the cloud. Apple has now addressed those concerns in a FAQ document that covers both new features.

Apple explained in the document that the two technologies it announced last week are distinct. One concerns the iPhone's ability to scan iMessage content for sensitive material sent to children. That technology is different from the tech Apple developed to compare images stored in iCloud Photos against hashes of known CSAM content.

How iMessage scanning works

The new communication safety feature in Messages will not break the strong end-to-end encryption. The feature is only available for accounts set up as families in iCloud, and parents must turn it on for their family group to protect their children. Parental notifications, meanwhile, work only for child accounts age 12 or younger.

The scanning of iMessage photos happens on-device. Neither Apple, the National Center for Missing and Exploited Children (NCMEC), nor law enforcement agencies are notified. Scanning iMessage content is not the same thing as Apple scanning iCloud Photos.

If the system detects a sexually explicit image in iMessage, parents can be notified, but only if a child age 12 or younger confirms the warning and then sends or views the photo. That exchange, and the steps involved in it, never reach Apple.

Children over the age of 12 will still see warnings asking whether they really want to send or view such images, but their parents will not be notified automatically.
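For illustration only, here is a minimal Swift sketch of the age-based decision flow described above. The ChildAccount type, the MessageSafetyAction enum, and the actionForExplicitImage function are hypothetical names invented for this example, not part of any real iOS API; the rules simply restate what Apple's FAQ describes.

// Illustrative sketch only: these types and rules mirror the behavior described
// in the article, not Apple's actual implementation or any real iOS API.
struct ChildAccount {
    let age: Int
    let parentalNotificationsEnabled: Bool  // opt-in by parents in the iCloud family group
}

enum MessageSafetyAction {
    case warnChildOnly              // blur the image and warn; no parental notification
    case warnChildAndNotifyParents  // possible only for accounts age 12 or younger
}

// Decide what happens when an explicit image is detected on-device and the
// child confirms they want to view or send it anyway.
func actionForExplicitImage(account: ChildAccount) -> MessageSafetyAction {
    if account.age <= 12 && account.parentalNotificationsEnabled {
        return .warnChildAndNotifyParents
    }
    return .warnChildOnly
}

The key point the sketch makes is that the only possible outcomes are a warning to the child, or a warning plus a parental notification; nothing in this flow sends data to Apple.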

Which iCloud photos can Apple scan?

The scanning of iCloud Photos is the second anti-CSAM feature Apple explained in the new document. Apple says it doesn't scan photos stored locally on an iPhone or iPad. Instead, the algorithm only looks at images stored in iCloud Photos, which are the images the user chooses to upload there.

Apple doesn’t look for anything other than photos that might match known CSAM material. Apple won’t compare user photos to CSAM photos. Instead, it’ll compare the user images against unreadable hashes created for known CSAM photos — Apple explains:

These hashes are strings of numbers that represent known CSAM images, but it isn’t possible to read or convert those hashes into the CSAM images they are based on. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. Using new applications of cryptography, Apple is able to use these hashes to learn only about iCloud Photos accounts that are storing collections of photos that match to these known CSAM images, and is then only able to learn about photos that are known CSAM, without learning about or seeing any other photos.

If any iCloud photos match those hashes, Apple will inspect the flagged images to see whether they're actually CSAM content. If they are, Apple will report them to the NCMEC.
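To make that matching step concrete, here is a rough Swift sketch, with the caveat that Apple's real system relies on its NeuralHash algorithm and cryptographic matching techniques that a plain set lookup does not capture; CSAMHashMatcher and its methods are illustrative names only, not anything Apple ships.

// Illustrative sketch only: the matching described above is approximated here
// with a plain set lookup; Apple's actual system uses NeuralHash and
// cryptographic techniques this example does not attempt to reproduce.
struct CSAMHashMatcher {
    // Opaque hashes of known CSAM images, supplied by child safety organizations.
    let knownHashes: Set<String>

    // Returns the hashes of uploaded photos that appear in the known list.
    func matchingHashes(of uploadedPhotoHashes: [String]) -> [String] {
        uploadedPhotoHashes.filter { knownHashes.contains($0) }
    }

    // Per the article, matches go to a human reviewer before any report is
    // made to NCMEC; photos that don't match are never inspected.
    func shouldEscalateForHumanReview(uploadedPhotoHashes: [String]) -> Bool {
        !matchingHashes(of: uploadedPhotoHashes).isEmpty
    }
}

In this simplified picture, only photos whose hashes appear in the known list are ever surfaced for human review; everything else stays invisible, which mirrors Apple's claim that it learns nothing about non-matching photos.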

Apple won’t spy on iPhone users

The company makes it clear that the new technologies won’t be used to spy on iPhone and iPad users. Apple is only looking for CSAM content, just like other companies. And it’s doing it in a way that’s secure and private.

Apple also explains that it won’t cave to governments asking the company to add non-CSAM images to the hash list:

Apple will refuse any such demands. Apple's CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

The company added that non-CSAM images couldn’t be “injected” into an iPhone or iPad.

The full document is available at this link, and it's worth a read.

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.