The EU's proposed Chat Control regulation would require messaging providers to scan your messages for illegal content before encryption, on your device. Proponents say it doesn't break end-to-end encryption. Every cryptographer who has studied the proposal disagrees. Here's why, and what it would actually require in practice.
What End-to-End Encryption Actually Guarantees
End-to-end encryption means messages are encrypted on the sender's device and can only be decrypted by the intended recipient(s). No intermediate server can read the plaintext. The encryption and decryption happen only at the endpoints: your device and theirs.
This guarantee depends on exactly one thing: the plaintext is only ever visible on devices that hold the private decryption key. The moment plaintext is made available to any additional process — even one running locally on your device — that guarantee is weakened, because that additional process can send its findings to a third party.
The Core Technical Claim: Chat Control's proponents argue that scanning on the device, before encryption, doesn't compromise E2EE because the encrypted message in transit is still unreadable. Cryptographers respond: if your device is required to run surveillance software on your messages before sending them, it doesn't matter that the message is encrypted afterward. The plaintext was already inspected.
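The order of operations at issue can be made concrete with a short sketch. This is a toy model, not any real protocol: the scanner, the flagged-term list, and the one-time-pad "cipher" are all stand-ins for illustration only.

```python
import secrets

def client_side_scan(plaintext: bytes) -> list:
    """Hypothetical mandated scanner: runs on the sender's device and
    inspects the *plaintext* before any encryption. Whatever it matches
    can be reported to a third party."""
    flagged_terms = [b"example-flagged-term"]  # stand-in for a hash/ML check
    return [t.decode() for t in flagged_terms if t in plaintext]

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR, standing in for a real E2EE protocol.
    A server relaying the ciphertext cannot read the message."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"hello, this contains an example-flagged-term"
reports = client_side_scan(message)   # plaintext is inspected HERE...
key = secrets.token_bytes(len(message))
ciphertext = encrypt(message, key)    # ...and encrypted only AFTERWARD

print(reports)                 # the scanner already saw the plaintext
print(ciphertext != message)   # the ciphertext is still unreadable in transit
```

The message really is encrypted end to end in transit; the objection is that the guarantee was hollowed out one step earlier, on the endpoint itself.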
How Client-Side Scanning Works
There are three main technical approaches to client-side scanning (CSS) for CSAM detection.
Exact-match hash comparison. A database of known CSAM images is hashed with a cryptographic hash function such as SHA-256. When you share an image, the client computes its hash and checks it against the database. This detects only bit-identical copies of known material; a single re-encode, resize, or crop changes the hash, and novel images are never flagged.
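A minimal sketch of exact-match hashing, using placeholder byte strings in place of real image files (the database contents here are purely hypothetical):

```python
import hashlib

# Hypothetical database of hashes of known listed images.
known_hashes = {
    hashlib.sha256(b"known-image-bytes-1").hexdigest(),
    hashlib.sha256(b"known-image-bytes-2").hexdigest(),
}

def is_known_match(image_bytes: bytes) -> bool:
    """Exact match: flags only bit-identical copies of listed material."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(is_known_match(b"known-image-bytes-1"))   # True: identical bytes
print(is_known_match(b"known-image-bytes-1 "))  # False: one byte appended
```

The second call shows the fragility: any change to the file, however trivial, defeats the match.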
Perceptual hashing (PhotoDNA, NeuralHash-style). A perceptual hash is designed so that visually similar images produce similar hashes, so matching survives resizing, re-compression, and minor edits. Apple announced and then withdrew a system called CSAM Detection in 2021 built on its NeuralHash function. Security researchers demonstrated collisions within days: non-CSAM images that produced the same hash as flagged content.
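To see why similar images yield similar hashes, here is a deliberately crude "average hash" on an 8x8 grayscale grid. Real systems like PhotoDNA and NeuralHash are far more sophisticated; this toy only illustrates the design goal.

```python
def average_hash(pixels):
    """Toy perceptual hash (aHash): one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two "images": the second is the first with a uniform brightness shift.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brighter = [[min(255, p + 3) for p in row] for row in img]

# A small Hamming distance means the images are treated as a match.
print(hamming(average_hash(img), average_hash(brighter)))  # 0
```

The flip side of this robustness is exactly what the researchers exploited: because many different images map to nearby hashes, an adversary can craft a benign-looking image that collides with a flagged one.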
Machine learning classifiers. A neural network model classifies images or text as likely to contain illegal content. This can detect novel material but has significant false positive rates that become severe at internet scale.
The False Positive Problem at Scale
Consider a classifier with a false positive rate of just 0.1%, i.e. one incorrect flag per 1,000 innocent messages. Applied to a platform with 500 million daily active users sending an average of 10 messages per day, that produces 5 million false reports per day. The human review pipeline that would be needed to process those reports does not exist and cannot realistically be built.
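The arithmetic above, plus the base-rate effect it implies, in a few lines. The user and message counts come from the text; the prevalence and recall figures are assumptions chosen for illustration.

```python
users = 500_000_000   # daily active users (figure from the text)
msgs_per_user = 10
fpr = 0.001           # one incorrect flag per 1,000 messages

daily_messages = users * msgs_per_user
false_reports_per_day = daily_messages * fpr
print(f"{false_reports_per_day:,.0f} false reports/day")  # 5,000,000

# Base-rate effect: assume 1 message in 1,000,000 is actually illegal
# (an illustrative prevalence) and the classifier catches 99% of those.
prevalence = 1e-6
recall = 0.99
true_positives = daily_messages * prevalence * recall
precision = true_positives / (true_positives + false_reports_per_day)
print(f"precision: {precision:.2%}")  # ~0.10%: almost every flag is wrong
```

Under these assumptions, roughly 999 out of every 1,000 reports sent onward would concern innocent users, which is the tension the next paragraph describes.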
Either false positives are forwarded to law enforcement — catastrophic for the millions of innocents falsely flagged — or they're filtered by automated systems before human review, which means the oversight is automated rather than human. Neither outcome is acceptable, and the tension between sensitivity and specificity cannot be resolved by building better classifiers. It's a consequence of operating at internet scale.
A system that must scan all private communications to find the small fraction that are illegal will inevitably surveil the overwhelming majority that are not. The architecture of mass surveillance and the architecture of targeted CSAM detection are, technically, the same thing.
The Scope Expansion Problem
Once the infrastructure for mandatory client-side scanning exists, its scope is determined by legislative amendment, not technical constraints. A scanning system built to detect CSAM hashes can be retargeted to flag any content whose hash is on an updated list.
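The retargeting point is easy to demonstrate: the matching code is identical whatever the list contains, so only the database defines the scanner's scope. The database entries below are hypothetical placeholders.

```python
import hashlib

def build_scanner(hash_db):
    """The scanner is content-agnostic: it checks membership in
    whatever hash database it is handed."""
    def scan(content: bytes) -> bool:
        return hashlib.sha256(content).hexdigest() in hash_db
    return scan

csam_db = {hashlib.sha256(b"known-abuse-image").hexdigest()}

# A legislative amendment -- or a quiet database update -- retargets the
# same infrastructure at entirely different content. No code changes.
expanded_db = csam_db | {
    hashlib.sha256(b"leaked-government-document").hexdigest(),
    hashlib.sha256(b"banned-protest-flyer").hexdigest(),
}

scan_v1 = build_scanner(csam_db)
scan_v2 = build_scanner(expanded_db)
print(scan_v1(b"banned-protest-flyer"))  # False
print(scan_v2(b"banned-protest-flyer"))  # True: only the list changed
```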
The EU's Chat Control proposal includes CSAM detection and, in its extended provisions, scanning for "grooming" — text-based detection of communication patterns. Text scanning is necessarily more context-dependent and error-prone than image hashing, and the definition of which text patterns constitute grooming is inherently political.
The technical architecture does not distinguish between scanning for child abuse material and scanning for political dissent, journalist sources, or labor organizing. The distinction exists only in the current legal text — and legal text changes.
Apple's Retreat and What It Means
Apple announced its CSAM Detection system in August 2021. Within days, researchers had demonstrated hash collision attacks. The system was never deployed and was formally abandoned in December 2022.
This matters for the EU debate because Apple had excellent cryptographic engineering talent and a genuine incentive to make the technology work, and it still could not build a system that withstood scrutiny. The EU regulation does not specify a technical approach; it mandates the outcome and leaves implementation to service providers. That framing treats an unsolved engineering problem as though it were merely a policy question.
Legislative Status and Industry Response
As of early 2026, Chat Control 2.0 has been stalled in the EU Council. Germany, Austria, and several other member states have indicated they will not support a mandatory scanning provision applying to encrypted communications. The European Parliament's LIBE committee voted against the proposal in 2023.
Signal's president Meredith Whittaker stated in 2024 that Signal would cease operations in any EU jurisdiction where Chat Control became law rather than implement client-side scanning. Threema issued a similar statement.
What This Means for Users Now
Chat Control has not passed. No messaging app is currently required to implement client-side scanning under EU law.
The longer-term implications matter for evaluating messaging platforms for sensitive communications:
- Has the service made a public commitment about how they would respond to mandatory scanning requirements?
- Is the client software open source, allowing independent verification that scanning is not occurring?
- Where is the service incorporated, and what legal jurisdiction governs its obligations?
- Does the service's threat model documentation address government compulsion?
The EU Chat Control debate is not isolated. Similar proposals have been advanced in the UK (the Online Safety Act), the US (the EARN IT Act), and Australia. The argument that "responsible encryption" can accommodate lawful access without compromising security for everyone is the same in each context. The cryptographic response is also the same: a backdoor for law enforcement is a backdoor for anyone who discovers it. The math does not change depending on who is asking for the key.
Originally published at havenmessenger.com