Apple’s Proposed Child Protective Security Features: An Unfinished Business

[Image caption: A radioactive Chernobyl]

Context is King

An essential question, whenever an unexpected action is taken, is: why is this happening now? This case has a one-word answer: legislation. Apple and other big tech companies are not taking these actions in a vacuum. Law enforcement in particular has been asking for an ‘encryption back door’ for years, and legislators have threatened action unless Apple complies. Apple has steadfastly refused. Apple argues, even now, that this is not an encryption back door: the hash-matching occurs in situ on the device rather than in the cloud, where practically all competitors perform their photo scans, and human intervention happens at the cloud level only to confirm that flagged content is in fact CSAM, leaving end-to-end encryption intact.
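To make the in-situ claim concrete, here is a deliberately simplified sketch of on-device hash matching. It is an illustration, not Apple’s implementation: the real system uses a perceptual ‘NeuralHash’ plus private set intersection, and the hash value, names, and database below are all hypothetical stand-ins.

```python
# Illustrative sketch only. Apple's real pipeline uses a perceptual
# "NeuralHash" plus private set intersection; the cryptographic hash
# and the database entry below are hypothetical stand-ins.
import hashlib

# Hypothetical stand-in for the NCMEC-derived hash database shipped on-device.
KNOWN_CSAM_HASHES = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00f1e2d3c4b5a69788796a5b4c3d2e1f0",
}

def device_hash(image_bytes: bytes) -> str:
    """Fingerprint a photo on the device itself, before any upload.
    A real system uses a perceptual hash that survives resizing and
    re-encoding; SHA-256 here is purely for illustration."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_database(image_bytes: bytes) -> bool:
    """The comparison happens in situ; the photo itself never leaves
    the device as part of this check."""
    return device_hash(image_bytes) in KNOWN_CSAM_HASHES

print(matches_database(b"an ordinary holiday photo"))  # False
```

The point of the design is visible even in this toy version: the cloud never sees a photo as part of the check, only the eventual fact of a match.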

The Human Factor(s)

As explained above, a human reviewer determines what a flagged image actually is and whether it is part of the CSAM database, which only NCMEC controls. Any attempt by any party to alter that database, including the insertion of non-CSAM images, will be detected and rejected. Whether or not Apple takes any action, legislators are already threatening action to compel encryption back doors, and are using child protection as a cudgel. Taking no action is not an option.
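As a rough illustration of how that human review is gated, consider the sketch below. The threshold value, class names, and data structures are assumptions for illustration; Apple’s published design was reported to unseal match ‘vouchers’ for human review only after roughly 30 matches accumulate on an account.

```python
# Simplified sketch of threshold-gated human review. MATCH_THRESHOLD and
# the data structures are assumptions; Apple's reported design unseals
# match "vouchers" for human review only past roughly 30 matches.
from dataclasses import dataclass, field
from typing import List

MATCH_THRESHOLD = 30  # assumed value, for illustration

@dataclass
class Account:
    sealed_vouchers: List[bytes] = field(default_factory=list)

    def record_match(self, voucher: bytes) -> None:
        # Each voucher stays cryptographically sealed when stored.
        self.sealed_vouchers.append(voucher)

    def eligible_for_human_review(self) -> bool:
        # Below the threshold, nothing is inspectable by anyone, which
        # is what keeps a single false positive harmless.
        return len(self.sealed_vouchers) >= MATCH_THRESHOLD

acct = Account()
acct.record_match(b"sealed-voucher-1")
print(acct.eligible_for_human_review())  # False: one match triggers nothing
```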

Apple, and others, need to be proactive in order to credibly argue that they take this matter seriously, and by so doing, remove this piece from the chessboard. More important than self-preservation and self-interest, Apple needs to be seen as sincerely and demonstrably protecting children whilst upholding its stated commitment to privacy. More broadly, this is a growing problem in need of a solution, and addressing it is the necessary and right thing to do. We have discussed before how consistency is essential to credibility. With these actions, Apple ticks that box.

Critics might rightly quibble with Apple’s solution, but what alternatives, apart from inaction, do they propose? They should publish those alternative solutions and let qualified professional opinion publicly assess their merits.

Apple Community-upon-Chernobyl 

The strong reaction, indeed near-meltdown, of privacy advocates and members of the Apple user community deserves reflection. The ferocity of the response, given the paucity of information at the time of Apple’s initial announcement, was fed by critics’ counter-narrative of plausible exploits and vulnerabilities. This could easily have been predicted by any professional trained in hazard communication: any unavoidable threat comprising many potential unknown vectors, with which a population has no prior experience, and over which they have no control, will cause fear, dread, and panic. That is precisely what happened.

The community is not at fault here. This was an Apple own-goal. The company should treat it as a teachable moment and conduct its own post-mortem to ensure it is never repeated. No company as interdependent with its user community as Apple is can afford to violate that community’s trust; this was a recipe for a non-recoverable breach of it. Apple need to hire someone skilled in hazard communication and run future messages of this kind through them, never letting the marketing people overrule the final message, although the legal team might need to finesse it. Everyone will sleep better.

Challenges and Alternatives

Little has been said about mitigation measures: what can both Apple and the user community do to ensure that these measures do not result in the dystopian exploits voiced by critics? There are several options.

First, because on-device AI can be preconfigured for malfeasant exploits, with seemingly harmless instruction sets delivered as ‘updates’ acting as triggers, Apple should continue, indeed double down on, disallowing post-factory third-party hardware modification. We have argued before that this, rather than simple greed, might be why Apple have been increasingly making it impossible to modify their hardware post-shipment. This has become a 21st Century security feature and should remain standard practice.

Second, in any system in which risk is more than minimal, systematic monitoring and evaluation (M&E) is essential. Apple should conduct its own monitoring and performance evaluations of all devices on which these security features are enabled, employing random sampling, which could also involve opt-in participation from users of devices in the wild (see the sketch below). Additionally, Apple should permit, indeed insist on, third-party independent M&E, involving stakeholders with the professional skill sets to conduct state-of-the-art monitoring and performance evaluation of these systems post-rollout. This is standard practice for products and services that could affect public safety, a category for which Apple’s proposed system qualifies. It needs to happen.
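Here is a minimal sketch of what such random-sample selection might look like, assuming a hypothetical pool of opted-in device identifiers; a published seed would let independent evaluators verify that the sample was not cherry-picked.

```python
# Minimal sketch of random-sample selection for M&E audits. The device
# pool, fraction, and seed handling are all assumptions for illustration.
import random
from typing import List, Optional

def sample_for_audit(opted_in_devices: List[str], fraction: float = 0.01,
                     seed: Optional[int] = None) -> List[str]:
    """Select a random fraction of opted-in devices for performance review.
    Publishing the seed lets third-party evaluators reproduce, and thus
    verify, the sample."""
    rng = random.Random(seed)
    k = max(1, int(len(opted_in_devices) * fraction))
    return rng.sample(opted_in_devices, k)

# Example: reproducibly audit 1% of a 10,000-device opt-in pool.
devices = [f"device-{i:05d}" for i in range(10_000)]
print(sample_for_audit(devices, fraction=0.01, seed=2021)[:3])
```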

Third, Apple should contract professional consultants, particularly those with intelligence backgrounds, to defeat or ‘break’ these new security systems, and any others, and then proactively harden those systems against the discovered exploits before they are launched in the wild. Some version of this is likely already in place.

Finally, Apple (and other big tech) should create and publish an easily accessible reporting system (a website) where anyone, including the average non-tech-savvy user, can report any anomalous or adverse behaviour observed on their device or service, separate from Apple’s own opaque feedback channels. This should be a searchable public record, for transparency’s sake, and it might well be the first place, and the most sensitive system, for identifying new and emerging actionable threats. It should have an active alert system on the backend routed directly to the respective company (Apple, Google, or any other participating member), specifically to the person tasked with the security brief. The site should contain a page for regular (e.g. weekly) synthesised public reports; a dashboard would be a bonus.
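A toy sketch of such an intake pipeline follows; the report schema, the in-memory ‘public log’, and the alert routing are assumptions for illustration, not any existing API.

```python
# Toy sketch of a public anomaly-report intake. The schema, the in-memory
# "public log", and the alert routing are assumptions for illustration.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import List

@dataclass
class AnomalyReport:
    vendor: str        # e.g. "Apple" or "Google"
    description: str   # plain-language account of the observed behaviour
    submitted_at: str = ""

PUBLIC_LOG: List[dict] = []  # stands in for the searchable public record

def submit_report(report: AnomalyReport) -> None:
    """Append to the public record and alert the vendor's security contact."""
    report.submitted_at = datetime.now(timezone.utc).isoformat()
    PUBLIC_LOG.append(asdict(report))
    # Backend alert: a real system would page the person holding the
    # vendor's security brief rather than print to stdout.
    print(f"ALERT[{report.vendor}]: {report.description[:60]}")

submit_report(AnomalyReport("Apple", "Photo flagged that never left my library"))
print(json.dumps(PUBLIC_LOG, indent=2))
```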

Conclusion

All of these risk-mitigation systems exist today. They can and should be adapted to securing our technology systems, which have long since crossed the threshold of maturity and societal impact, and they should be field-tested during this initial USA rollout phase, with any weaknesses corrected. Not only is this essential; these are proven methods for providing the community engagement, knowledge, and control that instil user confidence and reassurance, and, above all, safeguard community trust.

3 Comments
Lee Dronick

“Unfinished business.” Well, it is a start.

Jeff Butts

To paraphrase a line from Enemy of the State, who’s going to shepherd the shepherds?

Great analysis, Dr. Brooks. As I mentioned earlier, my own internal jury is still out on this subject, but I’m certainly not impressed by or pleased with Cupertino’s response to the discussion, queries, and concerns.