Apple previewed the CSAM detection system for iOS, iPadOS, and macOS earlier this month. Since its announcement, the CSAM detection feature has been a topic of debate, drawing criticism not only from security researchers but also from Apple's own employees and members of the German parliament. Now a developer has reported that the NeuralHash code used for detecting CSAM photos has been found in iOS 14.3.

An independent developer, Asuhariet Ygvar, posted code for a reconstructed Python version of NeuralHash on GitHub. He says he extracted the code from an iOS version released over six months ago, even though Apple claims that current iOS versions do not include CSAM detection. Ygvar posted the code on GitHub so that other security researchers could take a deeper dive into it.

Upon examining the code, several security researchers were able to find flaws in it. For example, the system generated the same hash for two completely different pictures. Researchers then raised the alarm, because this potential for collisions could allow the CSAM system to be exploited.
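To make the flaw concrete: a collision simply means that two visually different images produce the same hash. The sketch below illustrates the idea using the open-source imagehash library's perceptual hash as a stand-in for NeuralHash; it is not Apple's algorithm, just a minimal demonstration of the concept.

```python
# Minimal illustration of a perceptual-hash collision, using the open-source
# imagehash library as a stand-in for NeuralHash (this is NOT Apple's
# algorithm, only a demonstration of the concept).
from PIL import Image
import imagehash

def hashes_collide(path_a: str, path_b: str) -> bool:
    """Return True if two images map to the same perceptual hash."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # A "collision" is exactly this: distinct inputs, identical hash output.
    return hash_a == hash_b

# The researchers' finding amounts to a pair of unrelated images for which
# a call like this would return True:
# hashes_collide("dog.png", "noise.png")
```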

However, Apple denies the claims. In a statement to Motherboard, Apple said the version of NeuralHash extracted by the developer is not the same one that will be used in the final version of the CSAM detection system. Apple says it made the code publicly available so that security researchers could find flaws in it. The company says there is also a private algorithm running on its servers that verifies all CSAM matches after the threshold is exceeded, before they are passed on for human verification.

“The NeuralHash algorithm [… is] included as part of the code of the signed operating system [and] security researchers can verify that it behaves as described,” one of Apple’s documents reads. Apple also said that after a user passes the 30-match threshold, a second, non-public algorithm that runs on Apple’s servers will check the results.

Matthew Green, a cryptography researcher at Johns Hopkins University, says that if collisions exist in the NeuralHash system found by the developer, “they’ll exist in the system Apple eventually activates.” It’s possible that Apple may “respin” the function before launch, “but as a proof of concept, this is definitely valid,” he said of the information shared on GitHub.

Nicholas Weaver, a researcher, told Motherboard that if Apple goes ahead with implementing the leaked NeuralHash code, people could “annoy Apple’s response team with garbage images until they implement a filter” to get rid of false positives.

For reference, Apple doesn’t directly scan the photos being uploaded to iCloud. Instead, the CSAM detection system derives hashes from the photos and matches them against hashes generated from photos provided by NCMEC. If more than 30 photos produce matching hashes, a review is triggered, after which an Apple team takes a look at the flagged photos. But, according to Apple, there is another check between a photo triggering the review system and the photos actually reaching the response team, which prevents the CSAM database from being exposed.
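The sketch below captures only the threshold logic described here, assuming a plain set of known hashes and the 30-photo threshold; Apple's actual design wraps this in private set intersection and threshold secret sharing so the server learns nothing until the threshold is crossed, and none of that cryptography is reproduced.

```python
# Simplified sketch of the threshold logic described above. The real system
# uses on-device NeuralHash plus cryptographic safety vouchers; this toy
# version only shows how a match count triggers a review.
MATCH_THRESHOLD = 30  # per Apple, review happens only past the 30-match threshold

def count_matches(photo_hashes: list[str], known_csam_hashes: set[str]) -> int:
    """Count how many of a user's photo hashes appear in the known-hash set."""
    return sum(1 for h in photo_hashes if h in known_csam_hashes)

def should_trigger_review(photo_hashes: list[str], known_csam_hashes: set[str]) -> bool:
    """A review is triggered only once the match count passes the threshold."""
    return count_matches(photo_hashes, known_csam_hashes) > MATCH_THRESHOLD
```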

What are your thoughts on iCloud photo scanning? Do you think CSAM detection should be optional, or do you believe Apple should be able to scan photos for child safety? Let us know your opinion in the comments section down below!