In early August 2021, Apple announced a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). Cupertino’s announcement set off a wave of outcry against the privacy invasion. Even Apple’s own employees expressed concern. Cupertino insists that these concerns are rooted in “misunderstanding,” but one team of researchers disagrees. You see, these researchers from Princeton already know firsthand the dangers of CSAM scanning.

They’ve Already Built a CSAM Scanning System Like Apple’s, and Didn’t Like It

Cupertino insists that the people concerned about its scanning technology don’t understand how it works. These researchers can honestly say that’s not true. Two years ago, Jonathan Mayer and Anunay Kulshrestha began researching and developing a system to scan photographs for child sexual abuse material. Mayer is an assistant professor of computer science and public affairs at Princeton University. Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.

The team wrote the only peer-reviewed publication (so far, anyway) on how to build a system like the one Apple is using, and their conclusion was that the technology is dangerous. Mayer and Kulshrestha aren’t worried about Cupertino’s plans because of a misunderstanding. They are worried because they’ve already done this and fully understand how the system works. They don’t just think there are dangers in CSAM scanning; they know.

The Reason to Build CSAM Scanning Tech

These two researchers are deeply involved in computer security, and they understand the value of end-to-end encryption. They are also horrified that child sexual abuse material has become so widespread on encrypted platforms. They worry that online services are “reluctant to use encryption without additional tools to combat CSAM.”

With those worries in mind, they tried to find a middle ground. They attempted to develop a way that online services could identify harmful content. At the same time, they sought to preserve the encryption’s security. They had a very straightforward concept: if someone shared material that matched a database of known harmful content, the service would find out about it.

On the other hand, if the content was innocent, nobody would know it even existed. Nobody could read the database or learn whether the content matched, because that information could reveal law enforcement methods and help criminals evade detection.
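For readers who want to picture the moving parts, here is a deliberately simplified sketch in Python. Everything in it is illustrative: the blocklist hash is made up, a real system would use perceptual hashing rather than SHA-256, and both the Princeton prototype and Apple’s design wrap the comparison in cryptography (private set membership) so that neither side learns anything about non-matching photos or the contents of the database.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known harmful images. The hash
# value here is made up. Real systems use perceptual hashes (Apple calls its
# algorithm NeuralHash) so resized or re-encoded copies still match.
KNOWN_HARMFUL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint; a production system would use a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def check_upload(image_bytes: bytes, blocklist: set) -> bool:
    """Return True if the photo matches the database of known content.

    In the real designs, this comparison happens inside a cryptographic
    private-membership protocol, so the client never sees the blocklist and
    the server learns nothing about photos that don't match.
    """
    return fingerprint(image_bytes) in blocklist


if __name__ == "__main__":
    photo = b"...user photo bytes..."
    if check_upload(photo, KNOWN_HARMFUL_HASHES):
        print("Match: flag for human review.")
    else:
        print("No match: the service learns nothing about this photo.")
```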

The Inherent Dangers of CSAM Scanning Technology

Other researchers and academics said this kind of system was not feasible. Even so, the team pressed on. Eventually, they had a working prototype. In the process, they encountered a glaring problem: third parties, they found, could use their technology for other, more nefarious purposes. There were dangers in CSAM scanning technology that couldn’t be easily avoided.

Mayer and Kulshrestha realized that others could easily repurpose their technology for other tasks. The design isn’t restricted to just one type of content. That means anyone with the right knowledge can swap out the content-matching database. People using the service would be none the wiser, and boom: snoopers would have their own version of the scanning system for other forms of surveillance and censorship.
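A second toy sketch makes the point concrete. The matcher below has no idea what its database represents; whoever supplies the database decides what gets flagged. (The hash strings are invented placeholders.)

```python
def scan(upload_fingerprints: set, target_database: set) -> list:
    """Generic matcher: nothing here is specific to child-safety material.
    Whoever controls target_database controls what gets flagged."""
    return [fp for fp in upload_fingerprints if fp in target_database]


# Same code path, two very different uses (all hash values are made up):
csam_db = {"hash_of_known_abuse_image"}
dissent_db = {"hash_of_banned_protest_poster"}

uploads = {"hash_of_banned_protest_poster", "hash_of_vacation_photo"}
print(scan(uploads, csam_db))     # [] -- nothing flagged
print(scan(uploads, dissent_db))  # ['hash_of_banned_protest_poster']
```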

A government could use this technology to uncover people sharing political speech it did not agree with. In fact, the Chinese government is already doing just that on the social media app WeChat. India has recently enacted rules that could make use of this technology. Russia has recently fined Google, Facebook, and Twitter for not removing pro-democracy protest content.

A Probable First in Computer Science Literature: Warning Against Their Own Design

Mayer and Kulshrestha did something likely unheard of in computer science research. They actually warned against using their own system design, insisting that further research was needed on how to alleviate the dangers of CSAM scanning.

The two had planned to discuss further options at an academic conference in August, but Cupertino beat them to the punch. Apple announced it would deploy a system on iCloud Photos nearly identical to the one the Princeton researchers had already developed. Unfortunately, they found that Apple did so without answering the difficult questions their earlier research had uncovered.

By Ignoring the Dangers of CSAM Scanning, Has Apple Reversed Its Own Privacy Stance?

In an editorial in The Washington Post, the Princeton team raises the very question that has been on my mind since day one: Is Apple really reversing course on its privacy stance?

After the 2015 terrorist attack in San Bernardino, California, the Justice Department tried to compel Apple to help it access a perpetrator’s encrypted iPhone. Apple refused, citing concerns about how others might use (or abuse) that capability. At the time, Cupertino explained, “It’s something we believe is too dangerous to do. The only way to guarantee that such a powerful tool isn’t abused… is to never create it.”

So, in light of these more recent events and quotes from Apple, I am left wondering: Has Apple’s stance on privacy changed? Is Cupertino no longer concerned about how others might abuse its technology? Apple insists our concerns are based on misunderstanding. I can’t help but think Cupertino is deliberately gaslighting us.

6 Comments
W. Abdullah Brooks, MD

Jeff: Very well written piece. And you’ve asked the right question; ‘Has Apple’s stance on privacy changed?’  The WAPO editorial written by Jonathan Mayer and Anunay Kulshrestha concludes what the prior piece written by the WAPO editorial board concludes, namely that ‘Apple’s new child safety tool comes with security trade-offs, just like all the others’. This does not answer your question.  And neither does this. In their peer-reviewed paper, Mayer and Kulshrestha conclude  ‘…we emphasize that the goal of this project is to contribute technical analysis about a possible path forward that maintains the benefits of E2EE (end to end encryption)… Read more »

John Kheit

Great article Jeff. One thing everyone is missing here that is way more important than what the researchers caution over. They caution that the hashes can be changed to look for other things. Like China could make hashes that look for political speech or even dissidents. True, and awful. But there is a far bigger gaping hole problem here. THERE IS A PROCESS RUNNING ON YOUR PHONE THAT READS YOUR FILES to create these hashes. Right now that process is limited to just generating hashes for matching, but it could be changed at any time to OUTRIGHT read your files… Read more »

geoduck

I agree. Cupertino KNOWS what the dangers are. They KNOW how it will be used by Russia, China; heck, the US would like to have this tool at their fingertips. It will be abused. There will be false positives. There will be planted images to set off alarms and smear people’s reputations. It is not IF but WHEN. The really sad part is this will do NOTHING to stop CSAM material. Once encrypted with commonly available tools, anything is just a file and can be sent without the carrier being any the wiser. This will severely damage Apple’s reputation,… Read more »