Apple Expands Child Safety Across Messages, Photos, and Siri

Apple is expanding its efforts to combat child sexual abuse material (CSAM) across its platforms. New protections in Messages, iCloud Photos, and Siri and Search will arrive in iOS 15, iPadOS 15, and macOS Monterey this fall.

iCloud Photos

Apple first started scanning for CSAM in iCloud in 2019, or at least that’s when the company updated its privacy policy to reflect the change. Per the policy:

We may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material.

In the next versions of its operating systems, Apple will move this scanning onto devices themselves, as it does with other machine learning tasks. Apple calls the hashing algorithm behind it “NeuralHash.”

Before a photo is uploaded to iCloud, a hash or “fingerprint” of the image will be compared against a database from the National Center for Missing and Exploited Children (NCMEC).
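
As a rough illustration of the idea (not Apple’s actual NeuralHash, which is a perceptual hash described in the technical PDF), the on-device step amounts to computing a fingerprint of each photo before upload and checking it against the set of known hashes. Every name below, and the use of SHA-256 as a stand-in fingerprint, is hypothetical:

```swift
import Foundation
import CryptoKit

// Illustrative stand-in for a photo fingerprint. Apple's real NeuralHash
// is a perceptual hash produced by a neural network, so visually similar
// images map to the same hash; a cryptographic hash like SHA-256 does not
// have that property and is used here only to keep the sketch self-contained.
func imageFingerprint(_ imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Hypothetical set of known-CSAM fingerprints. In the real system these
// ship inside the OS as blinded hashes derived from NCMEC's database.
let knownCSAMHashes: Set<String> = []

// Check run on-device before a photo is uploaded to iCloud Photos.
func matchesKnownCSAM(_ imageData: Data) -> Bool {
    knownCSAMHashes.contains(imageFingerprint(imageData))
}
```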

This database contains the hashes of known CSAM discovered in law enforcement investigations. If the number of matched photos in an account crosses a certain threshold, the account will be flagged and the matches sent to Apple for manual review to rule out false positives. Apple says there is “less than a one in one trillion chance per year of incorrectly flagging a given account.”
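
A minimal sketch of that per-account gate, with a placeholder threshold since Apple has not published the real number:

```swift
// Illustrative threshold gate: matches are only surfaced for human review
// once an account accumulates enough of them. The value 30 is a
// placeholder, not Apple's actual threshold.
struct MatchGate {
    let threshold = 30
    private(set) var matchCount = 0

    // Record one matched photo; returns true only once the account
    // should be flagged and sent to Apple for manual review.
    mutating func recordMatch() -> Bool {
        matchCount += 1
        return matchCount >= threshold
    }
}

var gate = MatchGate()
let flagForReview = gate.recordMatch()   // false until the threshold is crossed
```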

The user’s account will then be locked, and Apple will share its findings with NCMEC. If a user feels their account has been mistakenly flagged, they can file an appeal to have it reinstated. The phrase “known CSAM” is important: only photos that have previously been found in a law enforcement investigation, and whose hashes are stored in NCMEC’s database, will be detected.

Apple has more information on its Child Safety web page as well as a technical explanation in a PDF.

Messages

Another safety measure will be found in Messages. The app will add new tools to warn children and their parents when sexually explicit photos are received or sent. If a child who is part of an iCloud Family account receives or sends an image that the system deems sexually explicit, the image will be blurred and the child will be warned via on-screen explanations.

Apple shared screenshots of an example: a pop-up window appears that says “This could be sensitive to view. Are you sure?” along with easy-to-understand explanations. If the child opts to view the image anyway, their parents will receive a notification.
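
The flow Apple describes, blur first, warn, and notify parents only if the child chooses to continue, could be sketched like this. Every type and function name here is hypothetical; none of this is Apple’s actual API:

```swift
// Hypothetical sketch of the Messages child-safety flow.
struct IncomingPhoto {
    let isSexuallyExplicit: Bool   // decided by an on-device classifier
}

struct ChildAccount {
    let isInICloudFamily: Bool
}

enum PhotoPresentation {
    case showNormally
    case blurWithWarning(message: String)
}

// Decide how to present a received photo to a child account.
func present(_ photo: IncomingPhoto, to child: ChildAccount) -> PhotoPresentation {
    guard child.isInICloudFamily, photo.isSexuallyExplicit else {
        return .showNormally
    }
    return .blurWithWarning(message: "This could be sensitive to view. Are you sure?")
}

// Called if the child taps through the warning and views the image anyway.
func childChoseToView(on child: ChildAccount) {
    if child.isInICloudFamily {
        notifyParents()   // stand-in for the parental notification Apple describes
    }
}

func notifyParents() {
    print("Parent notification sent")
}
```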

Siri and Search

Finally, Apple will provide additional resources through Siri and the built-in search feature. People who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report. Siri and Search are also being updated to intervene when users search for queries related to CSAM. Apple says, “These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”
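
Conceptually this is a query-time check: if a search relates to CSAM, Siri and Search surface help and reporting resources instead of ordinary results. A hedged sketch, with a placeholder topic list and message that are not Apple’s:

```swift
// Hypothetical sketch of the Siri/Search intervention.
let sensitiveTopics = ["csam", "child exploitation"]   // placeholder terms

// Returns an intervention message for sensitive queries, or nil otherwise.
func intervention(for query: String) -> String? {
    let lowered = query.lowercased()
    guard sensitiveTopics.contains(where: { lowered.contains($0) }) else {
        return nil
    }
    return "Interest in this topic is harmful. Resources from Apple's partners and NCMEC are available."
}
```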

Along with technical details, Apple’s web page shares assessments of the CSAM detection system from cryptography and machine learning experts.
