Craig Federighi said in a recent interview that some of the child-protection features were ‘misunderstood’. The new features were announced last week, and Apple has received a lot of negative publicity over the new functionality. We’ve also recently reported that Apple’s own employees were not happy with the new features.
“We wish that this had come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing, and we can see that it’s been widely misunderstood,” said Craig Federighi, Senior Vice President of Software Engineering at Apple (via AppleInsider).
“I grant you, in hindsight, introducing these two features at the same time was a recipe for this kind of confusion,” he continued. “It’s really clear a lot of messages got jumbled up pretty badly. I do believe the soundbite that got out early was, ‘oh my god, Apple is scanning my phone for images.’ This is not what is happening.”
During the interview, Federighi explained the various processes and steps that Apple takes to ensure that all iCloud Photos scanning is done in the most “privacy-protecting way” possible. He also explained how the system checks images against the CSAM (Child Sexual Abuse Material) database and what iOS offers to protect a child from content that is flagged as potentially dangerous material.
Apple has received a lot of criticism for “analyzing photos on users’ iPhones”, which Federighi said is a “common but really profound misunderstanding”. He elaborated: “This is only being applied as part of a process of storing something in the cloud. This isn’t some processing running over the images you store in Messages, or Telegram… or what you’re browsing over the web. This literally is part of the pipeline for storing images in iCloud.”
The interview, hosted by The Wall Street Journal, can be watched above.
What are your thoughts about Apple’s way of scanning photos in the cloud? Let us know in the comments below!