Dangers of CSAM Scanning: Princeton Researchers Already Warned Us


In early August 2021, Apple announced a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). Cupertino’s announcement set off a wave of outcry over the privacy invasion. Even Apple’s own employees expressed concern. Cupertino insists that these concerns are rooted in “misunderstanding,” but one team of researchers disagrees. You see, these folks from Princeton already know firsthand the dangers of CSAM scanning.

They’ve Already Built a CSAM Scanning System Like Apple’s, and Didn’t Like It

Cupertino insists that the people concerned about its scanning technology don’t understand how it works. These researchers can honestly say that’s not true. Two years ago, Jonathan Mayer and Anunay Kulshrestha began researching and developing a system to scan photographs for child sexual abuse material. Mayer is an assistant professor of computer science and public affairs at Princeton University. Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.

The team wrote the only peer-reviewed publication (so far, anyway) on how to build a system like the one Apple is using, and their conclusion was that the technology is dangerous. Mayer and Kulshrestha aren’t worried about Cupertino’s plans because of a misunderstanding. They are worried because they’ve already done this and fully understand how the system works. They don’t just think there are dangers in CSAM scanning; they know.

The Reason to Build CSAM Scanning Tech

These two researchers are deeply involved in computer security, and they understand the value of end-to-end encryption. They are also horrified that child sexual abuse material has become so prevalent on encrypted platforms. They worry that online services are “reluctant to use encryption without additional tools to combat CSAM”.

With those worries in mind, they tried to find a middle ground. They attempted to develop a way for online services to identify harmful content while preserving the security of end-to-end encryption. The concept was straightforward: if someone shared material that matched a database of known harmful content, the service would find out about it.

On the other hand, if the content was innocent, nobody would know it even existed. Nobody could read the database or learn whether the content matched, because that information could reveal law enforcement methods and help criminals evade detection.
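To make the concept concrete, here is a minimal, purely illustrative sketch of the matching step such a system is built around. It is not the researchers’ protocol and not Apple’s implementation; the real designs wrap this step in cryptography so the client never sees the database and the service learns nothing about photos that don’t match. The hash values, distance threshold, and function names below are hypothetical placeholders.

    from typing import Iterable

    def hamming_distance(a: int, b: int) -> int:
        """Count the differing bits between two fixed-length perceptual hashes."""
        return bin(a ^ b).count("1")

    def matches_known_content(photo_hash: int,
                              known_hashes: Iterable[int],
                              max_distance: int = 4) -> bool:
        """True if a photo's perceptual hash lands 'close enough' to any entry
        in a database of known-content hashes. Perceptual hashes tolerate small
        edits (resizing, re-encoding), so matching is by distance, not exact
        equality. Note that the database is just a bag of opaque numbers;
        nothing here knows, or cares, what those hashes depict."""
        return any(hamming_distance(photo_hash, h) <= max_distance
                   for h in known_hashes)

    # Tiny demo with made-up hash values.
    known = {0b1011_0110_0010_1101, 0b0110_1001_1100_0011}
    print(matches_known_content(0b1111_0000_1111_0000, known))  # False
    print(matches_known_content(0b1011_0110_0010_1100, known))  # True (1 bit off)

The matching itself is the easy part. The hard part, and the reason the Princeton team spent two years on it, is performing that comparison without letting anyone read the database or learn the result for innocent photos.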

The Inherent Dangers of CSAM Scanning Technology

Other researchers and academics said this kind of system was not feasible. Even so, the team pressed on, and eventually they had a working prototype. In the process, though, they encountered a glaring problem: third parties could use their technology for other, more nefarious purposes. There were dangers in CSAM scanning technology that couldn’t easily be avoided.

Mayer and Kulshrestha realized that others could easily repurpose their technology for other tasks. The design isn’t restricted to just one type of content, which means anyone with the right knowledge could swap out the content-matching database. People using the service would be none the wiser, and just like that, snoopers would have their own version of the scanning system for other forms of surveillance and censorship.
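Continuing the hypothetical sketch above, the repurposing risk is easy to see: the matcher only ever handles opaque hash values, so changing what gets flagged is nothing more than a data swap.

    # Purely illustrative, with made-up placeholder values: the same matcher,
    # pointed at a different database, becomes a different surveillance tool.
    database_of_known_csam   = {0x3F21A9C4, 0x7B09E1D2}   # hypothetical entries
    database_of_banned_memes = {0x11C0FFEE, 0x5EEDF00D}   # same format, new purpose

    # Nothing else in the pipeline changes.
    flagged = matches_known_content(photo_hash=0x11C0FFEF,
                                    known_hashes=database_of_banned_memes)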

A government could use this technology to uncover people sharing political speech it disagrees with. In fact, the Chinese government is already doing just that in the social media app WeChat. India has recently enacted rules that could make use of this technology, and Russia has recently fined Google, Facebook, and Twitter for not removing pro-democracy protest content.

A Probable First in Computer Science Literature: Warning Against Their Own Design

Mayer and Kulshrestha did something likely unheard of in computer science research. They actually warned against using their system design, insisting that there needed to be further research on how to alleviate the dangers of CSAM scanning.

The two had planned to discuss further options at an academic conference in August, but Cupertino beat them to the punch. Apple announced it would deploy, for iCloud photos, a system nearly identical to the one the Princeton researchers had already developed. Unfortunately, they found that Apple did so without answering the difficult questions their earlier research had uncovered.

By Ignoring the Dangers of CSAM Scanning, Has Apple Reversed Its Own Privacy Stance?

In an editorial in The Washington Post, the Princeton team raises the very question that has been on my mind since day one. Is Apple really reversing course on its privacy stance?

After the 2015 terrorist attack in San Bernardino, California, the Justice Department tried to compel Apple to help it access a perpetrator’s encrypted iPhone. Apple refused, citing concerns about how others might use (or abuse) that capability. At the time, Cupertino explained, “It’s something we believe is too dangerous to do. The only way to guarantee that such a powerful tool isn’t abused… is to never create it”.

So, in light of the more recent events and quotes from Apple, I am left wondering one thing. Has Apple’s stance on privacy changed? Is Cupertino no longer concerned about how others might abuse its technology? Apple insists our concerns are based on misunderstanding. I can’t help but think Cupertino is deliberately gaslighting us.

6 thoughts on “Dangers of CSAM Scanning: Princeton Researchers Already Warned Us”

  • Jeff:

    Very well written piece. And you’ve asked the right question: ‘Has Apple’s stance on privacy changed?’

    The WAPO editorial written by Jonathan Mayer and Anunay Kulshrestha concludes what the prior piece written by the WAPO editorial board concludes, namely that ‘Apple’s new child safety tool comes with security trade-offs, just like all the others’. This does not answer your question. 

    And neither does this. In their peer-reviewed paper, Mayer and Kulshrestha conclude 

    ‘…we emphasize that the goal of this project is to contribute technical analysis about a possible path forward that maintains the benefits of E2EE (end to end encryption) communications while addressing the serious societal challenges posed by harmful media. We do not take a position on whether E2EE services should implement the protocols that we propose, and we have both technical and non-technical reservations ourselves. But we encourage the information security community to continue its earnest exploration of potential middle ground designs for storage and communications encryption that address tensions with longstanding societal priorities. There is value in understanding the space of possible designs and associated trade-offs, even if the best option is to maintain the status quo.’

    No position on whether E2EE services should implement their proposed protocols, but value in understanding possible designs and trade-offs. Sounds like, ‘We need more data’. 

    Their peer-reviewed paper, in addition to their editorial, is worth a read. It sheds light, for example, on the interplay between the sensitivity and specificity of perceptual hash function predictive performance, and why that requires Apple to apply a ‘threshold’ to their ‘safety vouchers’. Importantly, it makes clear that there is not one but multiple technologies, with distinct implementations, vulnerabilities and risk-mitigation solutions. The point being that the peer-reviewed paper is more agnostic on the merits of deploying the technology than it is on addressing its limitations.
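    To see why that threshold matters, here is a rough back-of-the-envelope sketch. Every number in it is a hypothetical placeholder, not a figure from Apple or from the paper, and it assumes per-photo errors are independent. The point is only that, even if individual perceptual-hash comparisons occasionally misfire, requiring many independent matches before an account is flagged drives the account-level false-positive rate toward zero.

        # Back-of-the-envelope only; every number here is a made-up placeholder,
        # not a figure from Apple or from the Mayer/Kulshrestha paper.
        def false_flag_probability(n_photos: int, p_false_match: float,
                                   threshold: int) -> float:
            """P(an innocent library of n_photos produces at least `threshold`
            false matches), assuming independent per-photo errors. Computed
            term by term to avoid enormous binomial coefficients."""
            pmf = (1.0 - p_false_match) ** n_photos      # P(exactly 0 matches)
            tail = pmf if threshold == 0 else 0.0
            for k in range(n_photos):
                # P(X = k + 1) from P(X = k)
                pmf *= (n_photos - k) / (k + 1) * p_false_match / (1.0 - p_false_match)
                if k + 1 >= threshold:
                    tail += pmf
                    if pmf == 0.0:                       # remaining terms underflow
                        break
            return tail

        # Hypothetical: 10,000 photos, one-in-a-million false match per photo.
        print(false_flag_probability(10_000, 1e-6, threshold=1))    # roughly 1 in 100
        print(false_flag_probability(10_000, 1e-6, threshold=30))   # astronomically small

    The independence assumption is doing a lot of work there; planted or adversarially crafted images, which another comment below raises, would undermine it.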

    What is also noteworthy in the editorial by the Editorial Board is that, of the 800+ logged objections/concerns in Slack by Apple employees, none were from employees involved in security and encryption protocols – the people involved in developing these technologies.

    In their editorial, the researchers acknowledge that, although the concepts are the same, and face the same general threats, Apple’s system is more sophisticated than anything they devised. That is the sort of intellectual honesty we expect in academia. Indeed, their point is that, because Apple are not talking, it is not clear which of these technologies and approaches they have adopted. Thus, the community cannot assess the risk, the potential exploits, or for that matter, what mitigation measures they may/may not have designed. 

    Their editorial concludes, 

    ‘Apple is making a bet that it can limit its system to certain content in certain countries, despite immense government pressures. We hope it succeeds in both protecting children and affirming incentives for broader adoption of encryption. But make no mistake that Apple is gambling with security, privacy and free speech worldwide.’ 

    We are not going to find, in any of the written material to date, an answer to your question; certainly not one that critics and sceptics will accept. 

    Apple are being opaque at a time when people have genuine fears and want answers. Apple’s culture of secrecy, in this instance, is working against them. While they may not want to alert the bad guys to which technologies, and therefore which potential vulnerabilities, their CSAM detection technology will utilise, they appear to be applying the same broad secrecy brush to a community feeling betrayed, vulnerable and crying out for clarity. In other words, Apple’s culture of secrecy might well create not one but two crises, depending on implementation and outcomes.

    In the end, it matters less what Apple’s intent and commitments are; opinions will continue to run the gamut. Rather, because Apple have already damaged the trust of the community, they have redirected your question from whether or not their stance has changed to what this technology’s impact on user privacy will be, apart from its intended impact on child exploitation. Ultimately, that is what matters.

    That question will not be answered by speculation, but by verification with hard data; precisely what these authors call for. 

  • Great article, Jeff.

    One thing everyone is missing here is way more important than what the researchers caution over. They caution that the hashes can be changed to look for other things: China, for example, could make hashes that look for political speech or even dissidents. True, and awful.

    But there is a far bigger, gaping hole of a problem here. THERE IS A PROCESS RUNNING ON YOUR PHONE THAT READS YOUR FILES to create these hashes. Right now that process is limited to just generating hashes for matching, but it could be changed at any time to OUTRIGHT read your files directly. China could demand Apple make the process read anything it wants and ignore the entire hashing measure.

    This is the equivalent of spyware or a virus on your machine. And it not only theoretically can be co-opted, it absolutely will be co-opted. First by hackers, who can find what that process is, find a bug, and inject their own code to look for whatever they want. Second, by government actors like China demanding that Apple comply, and Apple has complied and will comply, as it has countless times in the past.

    I do not know why people are not seeing how giant a security breach the ‘always on backdoor process that reads your files to make hashes’ is.

    Let me state this again: Apple’s hashing process is always running on your phone, always reading your files, and it does so without your knowledge and without your permission.

    What could possibly go wrong with a process that is always reading your files without your permission? 🙄

  • I agree. Cupertino KNOWS what the dangers are. They KNOW how it will be used by Russia and China; heck, the US would like to have this tool at its fingertips. It will be abused. There will be false positives. There will be planted images to set off alarms and smear people’s reputations. It is not IF but WHEN.

    The really sad part is this will do NOTHING to stop CSAM. Once encrypted with commonly available tools, anything is just a file and can be sent without the carrier being any the wiser.
    This will severely damage Apple’s reputation, probably permanently.
    This will NOT do a damn thing to stop CSAM on the web or in iCloud.

    1. You’re absolutely right. I struggled forming my opinion on this matter, because I am fiercely protective of children, in particular. In the end, though, I realized that this technology is just one more step towards a real-life Orwellian society.
