Apple will be scanning your phone for child porn


This seems like a rather odd thing to be debating at first glance, but it’s a bit of news that is clearly not as simple as it sounds. This week, Apple announced that it would begin scanning users’ iPhones, looking for evidence of child sexual abuse, and reporting people suspected of engaging in such heinous crimes. What’s not to like about that, right? If there is a way to detect monsters who sexually abuse children and take them down, you wouldn’t expect to hear too many people complaining about it. And yet there are justifiable concerns being raised. Who is going to be responsible for ferreting out this activity, and what sort of experience do they have in this specific area of law enforcement? The answer is that it’s being done by an artificial intelligence algorithm, at least in the initial steps of the process. And then “a human” will get involved. Oh, and they’re even going to be scanning supposedly encrypted messages as well. (NPR)


Apple unveiled plans to scan U.S. iPhones for images of child sexual abuse, drawing applause from child protection groups but raising concern among some security researchers that the system could be misused, including by governments looking to surveil their citizens.

The tool designed to detect known images of child sexual abuse, called “neuralMatch,” will scan images before they are uploaded to iCloud. If it finds a match, the image will be reviewed by a human. If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children notified.

Separately, Apple plans to scan users’ encrypted messages for sexually explicit content as a child safety measure, which also alarmed privacy advocates.

Before we dig into this any further, this brief video from the Apple Insider podcast attempts to explain how the system will work.

One of the first concerns raised by digital privacy advocates is the possibility that perfectly innocent photos on someone’s phone (such as an infant being given a bath) could result in their owner being flagged as a potential child predator. Apple claims that this won’t be a concern because the AI will only be looking for digital matches of images that are already in the National Center for Missing and Exploited Children (NCMEC) database. Since the algorithm isn’t actually “looking” at the photos, but instead comparing a digital fingerprint (a hash) of each image against the fingerprints of images already in that database, it should theoretically never produce a false positive, only flagging the accounts of people trading images of known victims.
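For those curious what “matching” means here without the photos themselves being inspected, the logic can be sketched in a few lines of Python. This is a deliberately simplified illustration, not Apple’s actual system: the real pipeline reportedly uses Apple’s NeuralHash (a perceptual hash) plus cryptographic blinding on the device, whereas the sketch below just uses an ordinary SHA-256 digest and a made-up blocklist entry to show the idea of “only flag what’s already in the database.”

```python
import hashlib

# Illustrative stand-in for the NCMEC-derived blocklist. In reality the list
# holds perceptual-hash values distributed in blinded form; this hex digest
# is entirely made up for the example.
KNOWN_IMAGE_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def should_flag(image_bytes: bytes) -> bool:
    """Flag an image only if its fingerprint matches a known entry.

    The content of the photo is never examined; an image that isn't
    already in the database simply has nothing to match against.
    """
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_IMAGE_FINGERPRINTS

# A brand-new photo, whatever it depicts, produces a fingerprint that is
# not on the list, so it is never flagged.
print(should_flag(b"bytes of a brand-new bath-time photo"))  # False
```

The catch is that a byte-exact hash like the SHA-256 used above breaks on any resize or re-save, which is why systems like this rely on fuzzier perceptual hashes instead, and that distinction matters for the false-positive concern discussed below.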


But if that’s the case, two questions arise immediately. First, what about all of the other children out there who are being abused but have not yet been discovered? This system sounds like it will do nothing for them. Second, if the system will only flag the known photos from the NCMEC database, what is the reason for having “a human being” review a match any further? They’re making it sound as if that stage of the process is already essentially bulletproof.

The darker and perhaps more conspiratorial question involves how Apple is able to so easily access and examine your supposedly encrypted messages and images. And if they’re going to start doing it now for this admittedly noble goal, have they been doing it already for other purposes? A wise person once said, “If you are not paying for it, you’re not the customer; you’re the product being sold.” (Or at least your data is.) Every Google user already knows that their data is neither private nor safe from the prying eyes of the tech giants. If you mention something about buying a new BBQ grill in an email to a friend, advertisements for grills will mysteriously start showing up on every web page you open. Why would Apple be any different?


In the linked article, a cryptography researcher from Johns Hopkins University raises another worrying possibility. In tests of similar systems, researchers were able to fool the algorithm: seemingly innocuous images sent to other users still wound up being flagged. Innocent users could find themselves on a potential sex offender list as a result of such an attack. Apple is claiming that can’t happen, but I hope you’ll pardon me for being a bit skeptical about their openness and transparency on this subject.
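To see why researchers worry about engineered matches, it helps to know that systems like this rely on perceptual hashes, which are designed to tolerate resizing and recompression rather than match files byte for byte. The sketch below implements a classic “average hash,” not Apple’s NeuralHash, purely to illustrate the property being exploited: two different images whose hashes land close enough together will be treated as the same picture.

```python
from PIL import Image  # requires the Pillow package


def average_hash(path: str, hash_size: int = 8) -> int:
    """A simple perceptual hash: shrink the image to an 8x8 grayscale
    thumbnail and record whether each pixel is brighter than the mean,
    yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance is treated as a 'match'."""
    return bin(a ^ b).count("1")


# Two visually unrelated images can still end up within a small Hamming
# distance of each other, which is the property an attacker would try to
# exploit by crafting an innocuous-looking image whose hash sits near a
# blocklisted one. (File names here are placeholders.)
# d = hamming_distance(average_hash("cat.jpg"), average_hash("crafted.png"))
# print("match" if d <= 5 else "no match")
```

Whether such collisions can be engineered against Apple’s specific hash is exactly the kind of question outside researchers want to be able to test.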

There’s probably no way to stop this, however, since it’s not the government doing it. Apple is a private corporation, and they are largely able to write their own rules. If it winds up taking a few more child pornographers off the streets, I suppose we can’t say it’s entirely bad. It’s just a shame they didn’t figure out a way to catch Arizona Democratic State Senator Tony Navarrete regarding his alleged “hobbies” before it was too late.
