Craig Federighi Acknowledges Confusion Around Apple Child Safety Features and Explains New Details About Safeguards

MacRumors.com

Apple's senior vice president of software engineering, Craig Federighi, has today defended the company's controversial planned child safety features in a significant interview with The Wall Street Journal, revealing a number of new details about the safeguards built into Apple's system for scanning users' photo libraries for Child Sexual Abuse Material (CSAM).
Federighi admitted that Apple had poorly handled last week's announcement of the two new features, one relating to explicit content in Messages for children and the other to CSAM content stored in iCloud Photos libraries, and acknowledged the widespread confusion around the tools:

It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood. We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing.

[...]

In hindsight, introducing these two features at the same time was a recipe for this kind of confusion. By releasing them at the same time, people technically connected them and got very scared: what's happening with my messages? The answer is...nothing is happening with your messages.

Federighi emphasized that Apple's system will be protected against exploitation by governments or other third parties through "multiple levels of auditability."
Federighi also revealed a number of new details about the system's safeguards, such as the fact that roughly 30 matches for known CSAM content must be detected in a user's Photos library before Apple is alerted, at which point the company will confirm whether those images are genuine instances of CSAM.

If and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images, and at that point, only knows about those images, not about any of your other images. This isn't doing some analysis for did you have a picture of your child in the bathtub? Or, for that matter, did you have a picture of some pornography of any other sort? This is literally only matching on the exact fingerprints of specific known child pornographic images.
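
To make Federighi's description concrete, the following is a minimal sketch of threshold-gated fingerprint matching. It is an assumption-laden illustration, not Apple's implementation: the real system uses a perceptual hash (NeuralHash) rather than the SHA-256 stand-in below, and enforces the threshold cryptographically (threshold secret sharing) rather than with a plain counter.

```swift
import Foundation
import CryptoKit

// Illustrative only: the names, the SHA-256 stand-in, and the plain counter
// are assumptions. Apple's system uses a perceptual hash (NeuralHash) and
// cryptographic threshold secret sharing, so no one can learn the match
// count while it remains below the threshold.

typealias Fingerprint = Data

/// Placeholder for the on-device database of known CSAM fingerprints,
/// which in the real system ships as part of the operating system image.
func loadKnownFingerprintDatabase() -> Set<Fingerprint> {
    return []
}

/// Stand-in fingerprint function. A cryptographic hash is used here only to
/// keep the sketch self-contained; a perceptual hash is what lets visually
/// identical images match even after re-encoding or resizing.
func fingerprint(of photoData: Data) -> Fingerprint {
    Data(SHA256.hash(data: photoData))
}

let knownFingerprints = loadKnownFingerprintDatabase()

/// "On the order of 30" matches, per Federighi.
let reportingThreshold = 30

/// Returns true only if the number of photos whose fingerprints match known
/// entries reaches the threshold; below it, nothing is learned about the account.
func thresholdReached(for photos: [Data]) -> Bool {
    let matchCount = photos
        .map { fingerprint(of: $0) }
        .filter { knownFingerprints.contains($0) }
        .count
    return matchCount >= reportingThreshold
}
```

The point of the sketch is the gating step Federighi describes: a single match, or a photo that merely resembles known material, produces nothing Apple can see or act on.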

He also pointed out the security advantage of performing the matching process directly on the iPhone, rather than on iCloud's servers.

Because it's on the [phone], security researchers are constantly able to introspect what's happening in Apple's [phone] software. So if any changes were made that were to expand the scope of this in some way, in a way that we had committed to not doing, there's verifiability, they can spot that that's happening.

When asked if the database of images used to match CSAM content on users' devices could be compromised by having other materials inserted, such as political content, Federighi explained that the database is constructed from images from multiple child safety organizations, with at least two being "in distinct jurisdictions."

These child protection organizations, as well as an independent auditor, will be able to verify that the database of images only consists of content from those entities, according to Federighi.
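
A rough sketch of that safeguard, under the assumption that the shipped database keeps only fingerprints submitted independently by organizations in at least two different jurisdictions (the construction Apple describes in its threat-model documentation); the types and names here are illustrative, not Apple's data model:

```swift
import Foundation

typealias Fingerprint = Data

// Illustrative types; not Apple's actual data model.
struct ChildSafetyOrganization {
    let name: String
    let jurisdiction: String               // e.g. an ISO country code
    let submittedFingerprints: Set<Fingerprint>
}

/// Keep only fingerprints submitted by organizations in at least two distinct
/// jurisdictions, so no single organization or government can insert an entry
/// on its own.
func buildShippableDatabase(from orgs: [ChildSafetyOrganization]) -> Set<Fingerprint> {
    let allFingerprints = orgs.reduce(into: Set<Fingerprint>()) {
        $0.formUnion($1.submittedFingerprints)
    }
    return allFingerprints.filter { fp in
        let jurisdictions = Set(
            orgs.filter { $0.submittedFingerprints.contains(fp) }
                .map(\.jurisdiction)
        )
        return jurisdictions.count >= 2
    }
}
```

The auditability Federighi refers to follows from the same construction: the participating organizations and an independent auditor can re-run it over the submitted lists and check that the result matches the database Apple ships.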
Tags: The Wall Street Journal, Craig Federighi, Apple child safety features
