TLDR/Key Points

1. Apple announced two new features: Communication Safety in Messages and CSAM detection in iCloud Photos
2. Apple’s response to critics: there are no substantial risks to children, and end-to-end encryption is preserved
3. Apple should have anticipated the fallout and proactively addressed likely concerns
4. Mitigation measures are available for the potential exploits of concern
5. These mitigation measures should be tested and refined during the initial limited rollout

The Electronic Frontier Foundation (EFF) recently issued a statement arguing that Apple was reversing its stance against creating an encryption back door. The EFF acknowledged that Apple was addressing the admittedly serious problem of child exploitation, but argued that its approach was ‘opening the door to broader abuse’: abuse of the very vulnerable children Apple purported to protect, and abuse by malign actors who could exploit the system to defeat encryption at will.

Apple Shrugged

It took a weekend, but then Apple spoke. It appears that Orwellian dystopia was not rampaging across the quiet countryside after all, and that individual privacy, and indeed reason (trolling both Shelley and Rand), might still breathe. In fact, Apple’s FAQ substantially changed the focus of this piece. Apart from TMO’s own Andrew Orr’s coverage, there have been many deep dives into what this technology is and is not, how it works, what Apple has said in response to queries, and what plausible potential exploits remain. Given the volume of authoritative explanation, we will not rehash it here, except to highlight the key issues and Apple’s rebuttal, before discussing the Apple community’s response and options for legitimate threat mitigation.

Briefly, there are two distinct technologies being rolled out, initially in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey (and later releases), and only in the USA. It is less clear if or when these will become available in other countries, but a limited initial rollout conforms to best practices of due diligence prior to wider distribution.

Communication Safety

The first is a new feature in Messages, Communication Safety, which ‘…is only available for family accounts set up in iCloud.’ A parent or guardian account must opt in to the feature, and the parental-notification component applies only to children aged 12 or younger. The feature notifies the child when they receive or send adult content, which initially appears blurred, and warns them that if they proceed, their parents will be notified. It does not notify the parents if the child is older than 12. Apple is not, and does not wish to be, aware of these communications, nor does it notify law enforcement. The feature does not break end-to-end encryption (see Apple’s FAQ), and it provides abuse victims, and those who know victims, with information on how to seek help.
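
To make that flow concrete, here is a minimal Swift sketch of the decision logic as described above. Apple has not published an API for Communication Safety, so every type and name here (ChildAccount, MessageAction, the age check) is an illustrative assumption, not Apple’s implementation.

```swift
import Foundation

// Hypothetical model of the opt-in and notification rules described above.
struct ChildAccount {
    let age: Int
    let parentOptedIn: Bool   // the parent/guardian enabled the feature
}

enum MessageAction {
    case deliverNormally
    case blurAndWarn(notifyParentsIfViewed: Bool)
}

func handleIncomingImage(isFlaggedExplicit: Bool, account: ChildAccount) -> MessageAction {
    // Content not flagged as explicit is delivered untouched.
    guard isFlaggedExplicit else { return .deliverNormally }

    // Flagged content is blurred and the child is warned. Per Apple's
    // description, parents are notified only for children aged 12 or
    // younger, and only if the parent opted in.
    let notifyParents = account.parentOptedIn && account.age <= 12
    return .blurAndWarn(notifyParentsIfViewed: notifyParents)
}

// Example: a 10-year-old on an opted-in family account.
let action = handleIncomingImage(
    isFlaggedExplicit: true,
    account: ChildAccount(age: 10, parentOptedIn: true)
)
print(action)   // blurAndWarn(notifyParentsIfViewed: true)
```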

This was the less controversial of the two announcements. Some have noted that gender-non-conforming children might be outed by this feature; however, this would apply only to those on an enabled family account, and even then, only in Messages and only for children 12 years and younger. Others have already pointed out that there are plenty of third-party apps, commonly used by their peers, where such children can and do freely communicate without parental oversight.

CSAM, Photos and iCloud

The second feature is Apple’s approach to Child Sexual Abuse Material (CSAM), which is a separate feature set and technology. It applies only to photos that a user attempts to upload to iCloud Photos, Apple’s proprietary cloud storage service, for which Apple bears responsibility (and liability).

Apple states, ‘The system does not work for users who have iCloud Photos disabled. This feature does not work on your private iPhone photo library on the device.’ Importantly, Apple is not ‘scanning’ ANY actual photos on your device. Rather, the company uses unreadable neural hashes (strings of numbers derived mathematically from known CSAM images) that are stored on the device, so that such images can be detected when a user attempts to upload them to iCloud. In its FAQ, the company said, ‘Apple will not learn anything about other data stored solely on device.’ Thus, ‘scanning,’ as it is commonly understood, is not being done.
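
As a rough illustration of on-device hash matching, the sketch below shows the general shape of deriving a hash from image data and comparing it against a stored set of known hashes, with no image content leaving the device. This is not Apple’s NeuralHash: the real perceptual hash and its blinded database are proprietary, and every name in this example is an assumption.

```swift
import Foundation
import CryptoKit

// Toy stand-ins only. Apple's real NeuralHash is a perceptual hash that
// tolerates resizing and re-encoding, and the on-device database of known
// CSAM hashes is blinded so it cannot be read; SHA-256 is used here purely
// to keep the example self-contained.
typealias ImageHash = Data

// Stand-in for the unreadable, on-device database of known-image hashes.
let knownImageHashes: Set<ImageHash> = []

// Stand-in for the hash-derivation step.
func deriveHash(from imageData: Data) -> ImageHash {
    Data(SHA256.hash(data: imageData))
}

// The comparison happens locally; in Apple's published design the result is
// not surfaced on the device but encrypted into a safety voucher that only
// becomes meaningful server-side (see below).
func matchesKnownImage(_ imageData: Data) -> Bool {
    knownImageHashes.contains(deriveHash(from: imageData))
}
```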

Craig Federighi explained that the match result gets wrapped into a ‘safety voucher,’ which is what gets uploaded to iCloud. Further math is done in iCloud on the voucher contents, and only if a ‘threshold’ quantity of matching content is reached does ‘anything get learned,’ at which point a human reviewer gets involved, and only with that content.
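
Below is a minimal sketch of that server-side threshold gate, using made-up types. In the real design, threshold secret sharing means Apple mathematically cannot decrypt any voucher payload before the threshold is met; this toy version models only the control flow, not the cryptography, and the threshold value is a placeholder.

```swift
import Foundation

// Illustrative types only; real vouchers keep the match result and
// payload encrypted until enough matches exist.
struct SafetyVoucher {
    let isMatch: Bool
    let encryptedPayload: Data
}

// Placeholder value; Apple has described the threshold only as being
// on the order of 30 matches.
let reviewThreshold = 30

func vouchersEligibleForHumanReview(_ vouchers: [SafetyVoucher]) -> [SafetyVoucher] {
    let matches = vouchers.filter { $0.isMatch }
    // Below the threshold, nothing is learned and nothing is reviewable.
    guard matches.count >= reviewThreshold else { return [] }
    // At or above it, only the matching content can be examined by a human
    // reviewer, who must confirm CSAM before any report goes to NCMEC.
    return matches
}
```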

Further, ‘There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC’ (the National Center for Missing and Exploited Children). Apple stated that non-CSAM images will not be flagged, nor can they be added to the system, and that even if a country tried to include other imagery, human review would screen it out and it would not be reported. There is more about what happens in iCloud at the links to Andrew’s and Rene Ritchie’s discussions.

Next: Context is King


3 Comments
Lee Dronick

“Unfinished business.” Well, it is a start.

Jeff Butts

To paraphrase a line from Enemy of the State, who’s going to shepherd the shepherds?

Great analysis, Dr. Brooks. As I mentioned earlier, my own internal jury is still out on this subject, but I’m certainly not impressed by or pleased with Cupertino’s response to the discussion, queries, and concerns.