Closeup of a little girl using a smartphone, scrolling through the internet (photo licensed via Adobe Stock)

Drawing a Line: When Tech To Keep People Safe Seems Dangerous

A dispute at the Washington Post about tech aimed at detecting child sex abuse highlights some of the issues

Princeton computer scientists Jonathan Mayer and Anunay Kulshrestha tried to thread the needle between child protection and privacy, building a system much like Apple's, and came away alarmed:

Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple’s own employees have been expressing alarm. The company insists reservations about the system are rooted in “misunderstandings.” We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple’s — and we concluded the technology was dangerous. We’re not concerned because we misunderstand how Apple’s system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we’re also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption.

Jonathan Mayer and Anunay Kulshrestha, “Opinion: We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous” at Washington Post (August 19, 2021)

The Editorial Board at the Washington Post disagrees:

The practice of on-device flagging may sound unusually violative. Yet Apple has a strong argument that it’s actually more protective of privacy than the industry standard. The company will learn about the existence of CSAM only when the quantity of matches hits a certain threshold, indicating a collection. Otherwise, all images will stay where they’ve always been, not uploaded to the Web in a decrypted format and therefore not viewable by the company, the government or anyone else (at least not without a warrant). There’s a bonus: Demands from governments to bar encryption altogether are escalating, and dodging them should prove easier with a tool in hand that can identify a scourge such as CSAM even when encryption is in place.

The Editorial Board, “Opinion: Apple’s new child safety tool comes with privacy trade-offs — just like all the others” at Washington Post (August 13, 2021)
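
To make the Board's threshold point concrete, here is a deliberately simplified sketch, in Python, of on-device matching against a database of known-image fingerprints. The function names, the threshold value, and the use of an exact hash are illustrative assumptions only; Apple's announced system relies on a perceptual hash (NeuralHash) and cryptographic protocols (private set intersection with threshold secret sharing) so that matches below the threshold are never revealed to anyone, including Apple.

```python
# Toy illustration of threshold-based matching of a photo library against
# fingerprints of known abuse images. All names and values are hypothetical.
# Apple's actual design uses a perceptual hash (NeuralHash) plus cryptography
# so that no one learns about matches below the threshold; none of that
# machinery appears here.

import hashlib

MATCH_THRESHOLD = 30  # hypothetical: only a collection of matches triggers review


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. A real system must tolerate resizing,
    re-encoding, and small edits, which an exact hash like SHA-256 does not."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(photo_library: list[bytes], known_fingerprints: set[str]) -> int:
    """Count photos whose fingerprints appear in the known-image database."""
    return sum(1 for image in photo_library if fingerprint(image) in known_fingerprints)


def should_flag(photo_library: list[bytes], known_fingerprints: set[str]) -> bool:
    """Flag an account only when matches reach the threshold, as the Board
    describes: isolated matches below it reveal nothing."""
    return count_matches(photo_library, known_fingerprints) >= MATCH_THRESHOLD
```

One design point worth noting: nothing in such matching logic is specific to CSAM. The system flags whatever content the fingerprint database happens to list, which is one reason researchers who have built similar systems worry about it being repurposed.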

Hear both sides while you can. The key point is that virtually every reasonable person agrees child safety is fundamental. Precisely because of that consensus, technologies aimed at ensuring it are especially prone to overreach and unintended consequences: few people will push back when the stated goal is protecting children. Overreach is far less likely with, say, technologies that mainly benefit career criminals, in whom the public is hardly heavily invested.




Mind Matters News
