July 27, 2022 | Techdirt


from the offload-police-work-onto-the-private-sector dept

The fount of not-great ideas at Lawfare continues to overflow. To be fair, the frequent overflowing is due to its contributors, who include current and former members of spy agencies that have violated rights, broken laws, and gone out of their way to make internet communications less secure.

We have heard from these contributors before. Ian Levy and Crispin Robinson are both GCHQ employees. A few years ago, when companies like Facebook were starting to float the idea of end-to-end encryption, Levy and Robinson suggested a workaround that would have caused the same damage as mandated backdoors, even if the pitch was slightly different from the suggestions offered by successive FBI directors.

What was suggested then was a sort of parallel communications channel that would allow spies and law enforcement to eavesdrop on conversations. Communications would still be encrypted. It’s just that the “good guys” would have their own encrypted channel through which to listen in. Theoretically, communications would remain secure and inaccessible to criminals. But opening a side door is not much different from opening a back door. A blind CC to the government may be a little safer than undermining the encryption entirely, but it still opens up another channel of communication, one that could be left open and unattended by interceptors who would likely assume that whatever flows through it is fine, because (lol) spy agencies only target dangerous enemies of the state.
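To make the objection concrete, here is a minimal sketch of the “ghost”/blind-CC idea in Python, using PyNaCl sealed boxes. Everything here is hypothetical illustration, not anything from the GCHQ proposal: the function names are invented, and real messengers fan out a per-message key rather than encrypting the full message to every device. The point is only that the “side door” amounts to one extra public key in a directory the provider already controls.

```python
# Hypothetical sketch of the "ghost"/blind-CC idea (pip install pynacl).
# Illustration only; not code from the GCHQ proposal.
from nacl.public import PrivateKey, SealedBox

# Each legitimate device has its own keypair. The provider runs the directory
# that tells senders which public keys belong to a conversation.
alice = PrivateKey.generate()
bob = PrivateKey.generate()
ghost = PrivateKey.generate()  # the agency's silently added "device"

def provider_key_directory(conversation_id: str) -> list:
    """The provider controls this list. Slipping one extra key into it is the
    entire side door; clients can't tell it apart from a user adding a new laptop."""
    return [alice.public_key, bob.public_key, ghost.public_key]

def send(message: bytes, conversation_id: str) -> list:
    # The sender dutifully encrypts a copy to every listed device.
    return [SealedBox(pk).encrypt(message) for pk in provider_key_directory(conversation_id)]

ciphertexts = send(b"still 'end-to-end encrypted'", "alice-and-bob")
# The ghost decrypts its copy exactly like a real participant would.
print(SealedBox(ghost).decrypt(ciphertexts[2]))
```

Nothing about the encryption itself is weakened in that sketch; what gets quietly broken is the trust in who holds the keys.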

The pair are back. In this piece for Lawfare, Levy and Robinson suggest a “solution” that has already been offered (and shelved) by the first company to try it: Apple. The “solution” is apparently trivially easy to exploit and prone to false positives/negatives, but that hasn’t stopped these GCHQ reps from suggesting we give it another spin.

According to the paper [PDF] published by the two GCHQ employees, the key to fighting CSAM (child sexual abuse material) in the age of end-to-end encryption is… more client-side content scanning. And it goes beyond matching local images against known hashes stored by agencies that combat child sexual exploitation.

For example, one of the approaches we propose is to have language models run entirely locally on the client to detect language associated with grooming. If the model suggests a conversation is heading towards a risky outcome, the potential victim is warned and prompted to report the conversation for human moderation. Since the models can be tested and the user is involved in the provider’s access to the content, we don’t believe this type of approach attracts the same vulnerabilities as others.
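Stripped of the machine-learning gloss, the quoted approach boils down to something like the sketch below. The phrase list, weights, and threshold are all invented stand-ins for a trained model, which the paper does not publish; the only part that matters is the last step, where the user is asked to hand the conversation over to the provider.

```python
# Toy stand-in for an on-device "grooming language" model. The phrase weights
# and threshold are assumptions for illustration, not anything from the paper.
RISKY_PHRASES = {"don't tell your parents": 0.6, "our secret": 0.5, "send a picture": 0.7}

def risk_score(messages: list) -> float:
    """Crude proxy for a local classifier: sum of matched phrase weights, capped at 1."""
    text = " ".join(messages).lower()
    return min(1.0, sum(w for phrase, w in RISKY_PHRASES.items() if phrase in text))

def maybe_warn_user(messages: list, threshold: float = 0.8) -> None:
    # The quote's key claim: nothing leaves the device unless the user opts in.
    if risk_score(messages) >= threshold:
        print("Warning: this conversation looks risky.")
        if input("Report it for human moderation? [y/N] ").strip().lower() == "y":
            print("(Here the client would upload the flagged messages to the provider.)")

maybe_warn_user(["hey", "this is our secret", "now send a picture of yourself"])
```

That opt-in report is, of course, exactly the moment plaintext leaves an “end-to-end encrypted” conversation.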

Well, no vulnerabilities except for provider access to what are supposed to be end-to-end encrypted communications. If this is the solution, the provider might as well not offer encryption at all, since the communications apparently won’t actually be encrypted at both ends. The provider will have access to the client side in one form or another, which opens up a security hole that would not otherwise exist. The only mitigating factor is that the provider won’t end up with its own copy of the communications. And if it doesn’t have that, what use is it to law enforcement?

The proposal (which the authors say should not be taken as representing the views of GCHQ or the UK government) runs largely on faith.

[W]e believe that a robust, evidence-based approach to this problem can lead to balanced solutions that ensure privacy and safety for all. We also believe a framework is needed through which the details of any specific solution can be debated and its disadvantages and mitigations brought out.

“Disadvantages” is one cool word. If one were being more intellectually honest, one might use a word like “harm” or “flaw” or “negative side effect.” But that’s the newspeak offered by the two GCHQ employees.

The next sentence makes it clear that the authors don’t know whether any of their proposals will work, or how many [cough] disadvantages they will cause. Just spitballing, I guess, but with the built-in appeal to authority that comes from their positions and a tastefully formatted PDF.

We do not provide one in this paper, but note that the UK’s National Research Centre on Privacy, Harm Reduction and Adverse Influence Online (REPHRAIN) is doing so as part of the UK government’s Safety Tech Challenge Fund, although this will need interpreting in the context of national data protection laws and, in the UK, guidance from the Information Commissioner’s Office.

[crickets.wav]

The authors admit that client-side scanning (whether of communications or content) is far from perfect. False negatives and false positives will be an ongoing problem. The system can easily be tricked into ok’ing CSAM. That’s why they want to add client-side analysis of written communications to the mix, apparently in hopes that a combination of the two will reduce the “disadvantages.”
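For a sense of what the image half of this actually involves, and why it is so easy to trick, here is a minimal sketch of client-side hash matching. Plain SHA-256 and the placeholder digests are assumptions made to keep the example short; deployed systems use perceptual hashes (PhotoDNA, Apple’s NeuralHash) precisely because exact hashing is defeated by changing a single pixel, and those perceptual hashes are where the false positives and negatives come from.

```python
# Sketch of client-side matching against a provider-shipped list of known-bad
# image hashes. SHA-256 and the digests below are placeholders for illustration.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {"9f2b...", "c41a..."}  # hypothetical digests

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_before_send(path: Path) -> bool:
    """Return True if the image may be sent, False if it matched the list."""
    if file_digest(path) in KNOWN_BAD_HASHES:
        # In the proposal, a match triggers the reporting pipeline. Flipping one
        # pixel changes the digest entirely, which is the evasion problem above.
        return False
    return True
```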

Supposedly, this can all be accomplished with technological wizardry devised by smarter people and a system of checks and balances that will likely always remain theoretical, even if hard-coded into moderation guidelines, site policies, and law enforcement practice.

For example, abusers often send existing sexually explicit images of children to potential victims in an attempt to build trust (hoping that victims respond by sending explicit images of themselves). In this case, there is no benefit to an offender creating an image that is classified as child sexual abuse material (but is not), since they are trying to affect the victim, not the system. This weakness could also be exploited by sending false-positive images to a target in the hope that they will somehow be investigated or tracked. This is mitigated by the reality of how the moderation and reporting process works, with multiple independent checks before any referral to law enforcement.

This just assumes that such “multiple independent checks” exist or will exist. They may not. It may become policy for tech companies to simply pass everything questionable on to law enforcement and let the “pros” sort out the problem. That “solution” is the easiest for tech companies, and because they would be acting in good faith, their legal culpability for adverse law enforcement actions would be minimal.

This presumptive shrug, which assumes sound policies exist, will exist, or will be followed thousands of times a day, leads directly to another faulty assumption: that harm to innocent people will be mitigated by largely theoretical checks and balances on both ends of the equation.

The second issue is that there is no way to prove which images a client-side scanning algorithm is looking to detect, leaving the possibility of “mission creep,” where other types of images (those unrelated to child sexual abuse) are also detected. We believe that this problem can be solved relatively easily by slightly modifying the way the world’s non-governmental child protection organizations operate. We would have a consistent list of known bad images, with cryptographic assurances that the databases only contain child sexual abuse images, which can be publicly attested and privately audited. We believe these legitimate privacy issues can be technically mitigated; the legal and policy challenges are likely more difficult, but we believe they are solvable.

The thing is, we already have a “consistent list of known bad images.” If we’re not already doing the other things in that sentence (a verifiable database that can be “publicly attested and privately audited”), then the only thing more client-side content scanning can do is produce more false positives and negatives. Again, the authors assume these things are already in place. And they use those assumptions to support their claim that the “disadvantages” will be limited by things they assume will happen (“multiple independent checks”) or assume have already happened (an independently verifiable database of known CSAM images).
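To be fair, “publicly attested and privately audited” does have a plausible technical shape: something like publishing a Merkle root that commits the database operator to one specific hash list, so that auditors shown the list can recompute the root and clients can check they all received the same one. The sketch below is a guess at that shape, not anything the paper specifies, and note that it only proves which entries are in the list, not that those entries are actually CSAM, which is the harder claim.

```python
# One possible shape for a "publicly attestable" hash database: a Merkle root
# over the list of known-image hashes. Illustration only; not from the paper.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list) -> bytes:
    """Commitment to the database: auditors shown the full list can recompute it."""
    level = [h(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

database = [b"image-hash-1", b"image-hash-2", b"image-hash-3"]  # placeholder entries
print(merkle_root(database).hex())  # adding an unrelated image changes this root
```

But none of that apparatus exists today in the form the authors describe, and the paper leans on it as if it did.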

It’s a big ask. The other big ask is that the paper proposes the private sector do all the work. Companies will need to design and implement the client-side scanning. They will need to hire enough people to provide human backup for AI-guided content reporting. They will need to have specialist staff in place to act as liaisons with law enforcement. And they’ll need to have strong legal teams in place to deal with the backlash (sorry, the “disadvantages”) of false positives and negatives.

If all of this is in place, and law enforcement doesn’t engage in mission creep, it might work the way the authors suggest it will: a solution that doesn’t break encryption and still addresses the distribution of CSAM over end-to-end encrypted communication platforms. That’s not to say the paper doesn’t admit that all the pieces have to come together for it to work. But this proposal raises many more questions than it answers. And yet the authors seem to believe it will work simply because it can.

Through our research, we have found no reason why client-side scanning techniques cannot be implemented safely in many of the situations society will encounter. That’s not to say more work isn’t needed, but there are clear paths to implementation that appear to have the requisite effectiveness, privacy, and security properties.

“Can” is still a long way from “likely,” though. And it’s not even in the same neighborhood as “theoretically possible, provided everything else goes right.”

Filed under: client-side scanning, csam, encryption, gchq, surveillance


