The iPhone maker expands child safety measures at the expense of user privacy
When the FBI and the U.S. government demanded that Apple unlock a dead terrorist's iPhone, Tim Cook hit back with the following response on his company's website:
“They have asked us to build a backdoor to the iPhone.
Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.”
— an excerpt from the open letter.
Notably, Apple did cooperate with the investigation by handing the FBI every byte of data it held from the iCloud backups. However, it declined the request to build software that would bypass the device's security, as doing so would undermine the security of millions of other users.
This was back in 2016, when data encryption wasn't a big deal for most smartphone users, yet Apple fought law enforcement and took a stand for its users.
Fast forward to today: Apple's relentless focus on privacy has made users more aware of how their data is tracked. Arguably, privacy has become the iPhone's biggest selling point.
In the past few years, Apple has taken huge strides on data privacy by making ad tracking opt-in, letting users opt out of email read receipts, and displaying the data apps collect through privacy nutrition labels.
But just when we thought Apple was the knight in shining armor for our data, it made a regressive move by unleashing invasive scanning technology on our devices.
A tool that, despite its laudable goal of combating child sexual abuse, hasn't gone down well with privacy advocates and security experts.
Apple Can Notify Parents if Their Child Opens a Nude on iMessage
Privacy has always been a central focus for Apple. In parental controls particularly, Apple has offered sensible ways to administer a minor's phone, such as letting parents block certain apps and restrict the website content that appears on their child's device.
With iOS 15, alongside a new Screen Time API that gives third-party apps more flexibility in monitoring a child's phone usage, Apple is introducing a feature that scans messages for nude photos.
Essentially, a parent can opt in to the communication safety feature on the Apple ID of a child under the age of 13. This lets the device scan every photo sent or received in iMessage using an on-device AI classifier.
If a child is about to view a nude photo in iMessage, they'll see a warning; if they bypass it, their parent gets notified.
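For clarity, here's a rough sketch in Swift of how that opt-in flow seems to work, based purely on Apple's public description. The names here (ChildAccount, isSensitive, notifyParent) are hypothetical stand-ins, not Apple APIs, and the real nudity classifier is a private on-device ML model:

```swift
import Foundation

// Hypothetical model of a child account under Family Sharing.
struct ChildAccount {
    let age: Int
    let communicationSafetyEnabled: Bool  // parent opted in
}

// Placeholder for the on-device nudity classifier.
func isSensitive(_ imageData: Data) -> Bool {
    false  // a real implementation would run an ML model here
}

func handleIncomingPhoto(_ imageData: Data,
                         for child: ChildAccount,
                         childChoosesToViewAnyway: () -> Bool,
                         notifyParent: () -> Void) {
    guard child.communicationSafetyEnabled,
          child.age < 13,
          isSensitive(imageData) else {
        return  // photo is shown normally; nothing is flagged or reported
    }
    // The photo is blurred and the child sees a warning first.
    if childChoosesToViewAnyway() {
        // Only if the warning is bypassed does the parent get a notification;
        // per Apple, nobody else (not even Apple) is told.
        notifyParent()
    }
}
```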
Although this upcoming feature essentially lets parents spy on their children, it doesn't pose much of a threat to privacy per se, since nobody apart from the parents gets notified, not even Apple.
However, the feature does have a few troubling implications. For one, it could harm queer kids who are curious about gender and sexuality.
Secondly, it opens the door to misuse and abuse. Consider a user slyly changing the date of birth on a partner's or family member's account to make them appear under 13. Currently, there's no way for Apple to verify its users' birth dates.
Lastly, one doesn’t know if Apple decides to use the AI classifier to strengthen its CSAM detection system in the future.
Apple Will Scan Your Photos for Child Abuse Content Before Uploading to iCloud
It's been a few weeks since Apple announced its Child Sexual Abuse Material (CSAM) scanning technology. Initially, CSAM detection and iMessage communication safety were perceived to be interconnected, which led to confusion. Thankfully, this isn't the case (though it took a lengthy FAQ from Apple to clarify it).
Contrary to how it may seem at first glance, the CSAM scanning technology doesn't rely on AI classification. Instead, it uses cryptography to flag photos that match a known set of child exploitation image hashes provided by NCMEC and other child safety organizations.
Apple has explained the underlying system design at length in its blog post.
In a nutshell, CSAM detection checks every photo for child abuse content before it's uploaded to iCloud. First, the technology uses the NeuralHash algorithm to generate a perceptual hash for every image. The device then encrypts this hash, along with other data derived from the image, into what Apple calls a safety voucher, which is uploaded to the iCloud server alongside the photo.
To ensure users' privacy isn't compromised, the system adopts two cryptographic techniques: Private Set Intersection and Threshold Secret Sharing. Together they ensure that nobody, not even Apple, can learn which image hashes have been flagged or how many images have matched until a certain threshold is met (reportedly 30).
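To get a feel for what the threshold part means in practice, below is a textbook Shamir secret-sharing demo in Swift. It is not Apple's actual construction, only an illustration of the property the design leans on: with fewer shares than the threshold, a secret is unrecoverable; at the threshold, it can be reconstructed exactly.

```swift
import Foundation

// Shamir secret sharing over the prime field GF(p).
let p = 2_147_483_647  // the Mersenne prime 2^31 - 1

func mod(_ x: Int) -> Int { ((x % p) + p) % p }

// Modular inverse via Fermat's little theorem: a^(p-2) mod p.
func inverse(_ a: Int) -> Int {
    var result = 1, base = mod(a), exp = p - 2
    while exp > 0 {
        if exp & 1 == 1 { result = mod(result * base) }
        base = mod(base * base)
        exp >>= 1
    }
    return result
}

// Split `secret` into n shares, any `threshold` of which reconstruct it.
func split(secret: Int, threshold: Int, shares n: Int) -> [(x: Int, y: Int)] {
    // Random polynomial of degree threshold-1 with the secret as constant term.
    let coeffs = [secret] + (1..<threshold).map { _ in Int.random(in: 1..<p) }
    return (1...n).map { x in
        var y = 0, xPow = 1
        for c in coeffs {
            y = mod(y + c * xPow)
            xPow = mod(xPow * x)
        }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the constant term (the secret).
func reconstruct(_ shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var num = 1, den = 1
        for (j, sj) in shares.enumerated() where i != j {
            num = mod(num * mod(-sj.x))
            den = mod(den * mod(si.x - sj.x))
        }
        secret = mod(secret + si.y * mod(num * inverse(den)))
    }
    return secret
}

let shares = split(secret: 42, threshold: 30, shares: 50)
print(reconstruct(Array(shares.prefix(30))))  // 42
print(reconstruct(Array(shares.prefix(29))))  // almost certainly not 42
```

As Apple describes it, the idea is broadly similar: each matched photo contributes something like one share, so the server can decrypt information about the matches only after enough of them have accumulated.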
The CSAM scanning system does seem sophisticated and well thought out, yet despite the noble intentions, it can easily be abused by governments for surveillance and censorship.
All it takes is a new supply of image hashes to conduct a warrantless search or weaponize the system against particular users. Worse, neither Apple nor its users would ever know whether the hashes genuinely correspond to CSAM, as the tech giant never checks the original photos behind them.
So the story is this: Apple is about to use your own device's computing power to run a black-box algorithm on you, one that will eventually call the cops once a certain number of hash matches is reached and reviewed by its moderators. To top it all off, you'd have no idea which image led to the arrest.
Basically, the iPhone maker treats everyone as guilty until proven innocent.
Another dystopian nightmare is a hacker tricking the system by generating images that collide with the CSAM hashes (the underlying perceptual hash function can produce the same hash for different images) and sending them to innocent users via AirDrop or file sharing. Unless your iCloud Photos upload is turned off, the photos would be uploaded to the server immediately, eventually triggering a false alarm.
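To see why such collisions are even plausible, here is a deliberately simplified "average hash" in Swift, a toy stand-in for a perceptual hash (nothing like NeuralHash itself), where two different images end up with identical hashes:

```swift
import Foundation

// Toy "average hash": one bit per pixel, set when the pixel is brighter than
// the image's mean. Real perceptual hashes are far more elaborate, but they
// share the same property: different inputs can map to the same hash.
func averageHash(_ pixels: [UInt8]) -> [Bool] {
    let mean = pixels.reduce(0.0) { $0 + Double($1) } / Double(pixels.count)
    return pixels.map { Double($0) > mean }
}

// Two different 4x4 grayscale "images"...
let imageA: [UInt8] = [ 10, 200,  10, 200,
                       200,  10, 200,  10,
                        10, 200,  10, 200,
                       200,  10, 200,  10]
let imageB: [UInt8] = [  0, 255,   0, 255,
                       255,   0, 255,   0,
                         0, 255,   0, 255,
                       255,   0, 255,   0]

// ...that nevertheless collide under the toy hash.
print(averageHash(imageA) == averageHash(imageB))  // true
```

Within days of the NeuralHash model reportedly being extracted from iOS, researchers demonstrated real collisions, which makes this scenario more than theoretical.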
Final Thoughts
The sexual abuse and exploitation of children is a horrific crime, and it needs to be tackled at a global level.
Given how sensitive the issue is, hardly anyone can openly argue against Apple's attempt to build a child safety system, regardless of how anti-privacy it may be. That is probably why the tech giant found it convenient to introduce this in the first place (though it likely didn't expect the storm of outrage and criticism).
Most people would agree that checking for CSAM isn't the real problem here. The issue is the prospect of the technology being expanded to other uses that violate user privacy and enable censorship.
This raises the question: why does Apple want to be the police? I'd like to believe the iPhone maker was in a tough spot before unleashing government-backed spyware on us. It may have been pressured to ramp up its scanning to match the likes of Google, Microsoft, and Facebook.
Yet, five years ago, Tim Cook took a stand against the FBI and the U.S. government by refusing to unlock a terrorist's phone. In his own words at the time:
“In a perfect world, where none of the implications would exist, we would obviously do it. But we don’t live in a perfect world.”
I doubt Apple has thought through the real-world implications of client-side scanning technology.
Ironically, Apple was supposed to stand up for its users' security even more today. Instead, it has created the very thing it once swore to destroy: a backdoor.
One can only hope that Apple is prepared for the slippery slope it has just stepped onto.
