
Chat Control: The Technical and Legal Case Against Mass Scanning

CJEU sign directing visitors to the Ancien Palais, with the base of the Comenius tower and Anneau building visible.
Luxofluxo, CC BY-SA 4.0, via Wikimedia Commons

On the 26th of March 2026, the European Parliament voted down yet another extension to Chat Control 1.0. The regulation that permitted mass scanning of private communications will expire on the 3rd of April 2026. The political debate will frame this as a defeat for the surveillance agenda.

It is not. Chat Control 2.0, broader in scope, with stronger political backing, and structured to avoid the legal vulnerabilities that plagued its predecessor, remains under active consideration. What has changed is the architecture, not the ambition.

This piece examines what the applicable law actually requires, what the relevant fundamental rights protect, and what the proposed detection technologies can and cannot do.

An Introduction to Chat Control

When talking about Chat Control, it is important to distinguish between Chat Control 1.0 and Chat Control 2.0, because their approaches and consequences meaningfully differ.

Let's start with Chat Control 1.0, formally known as Regulation (EU) 2021/1232,[1] which was adopted on the 14th of July 2021 and will expire on the 3rd of April 2026 unless there is a last-minute extension.

This was a derogation from the EU ePrivacy Directive (ePD) under Article 15(1),[2] creating a temporary, voluntary framework allowing online communication providers (such as messaging apps and email services) to scan private messages for child sexual abuse material (CSAM) and grooming behaviour.

More specifically, Chat Control 1.0 permitted providers to automatically and indiscriminately scan the private messages, visuals, and text chats of all users, using algorithms and AI to search for suspicious content and text patterns, including grooming behaviour. Crucially, this scanning took place on a purely voluntary basis, at the discretion of the platforms themselves, without a court order and without any suspicion of a crime against the monitored citizens.

This immediately exposes both fundamental technological problems and the mechanism by which regulators avoided direct confrontation with fundamental rights such as those codified in Article 8 [3] of the European Convention on Human Rights (ECHR), which reads:

1. Everyone has the right to respect for his private and family life, his home and his correspondence.

2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

This right was subsequently guaranteed for citizens of all member states through Article 7 of the EU Charter of Fundamental Rights [4], which corresponds to Article 8 of the ECHR.

Notice the key phrase in Article 8(2): "no interference by a public authority."

By structuring Chat Control 1.0 as a voluntary framework where private companies choose to scan messages rather than the state directly ordering them to do so, the EU Commission effectively routed around Article 8's public authority requirement. The state was not intercepting correspondence; a private company was. That this scanning only became legally permissible because the EU created a specific derogation to enable it was left unaddressed.

Whether state-structured private scanning could engage Article 8 ECHR through the positive obligations doctrine is a question this piece leaves open. The derogation point is sufficient: voluntary or not, the scanning required specific EU legislative authorisation to be lawful, and that authorisation must itself satisfy the proportionality requirements examined below.

However, Article 7 [4-1] is not the only fundamental right in play here; Article 8 [5] of the EU Charter is equally relevant:

1. Everyone has the right to the protection of personal data concerning him or her.

2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

3. Compliance with these rules shall be subject to control by an independent authority.

The mass scanning of private communications is, by definition, the processing of personal data, and that data is processed without the consent of the person concerned and without any individualised legal basis tied to suspicion of wrongdoing. The "legitimate basis laid down by law" that Regulation 2021/1232[1-1] provides is precisely what critics argue fails the proportionality test.

This brings us to one of the core legal concepts that will run throughout this entire discussion: proportionality. Under EU law, any limitation of a fundamental right must be:

1. appropriate: capable of actually achieving its stated objective;

2. necessary: the least intrusive means capable of achieving that objective; and

3. proportionate in the strict sense: the benefit it delivers must not be outweighed by the harm it causes.

This is articulated in Article 52(1) [6] of the Charter, which provides that any limitation on the rights and freedoms recognised by the Charter must be provided for by law, respect the essence of those rights, and, subject to the principle of proportionality, be necessary and genuinely meet objectives of general interest recognised by the Union.

This same issue was raised in Rule 144 question E-003645/2025 to the Commission by the Polish MEP Ewa Zajączkowska-Hernik regarding Chat Control 2.0. [7]

The Commission's response to this question which was published on the 17th of December 2025 [8] will be examined later in the context of Chat Control 2.0.

Chat Control 1.0, even in its voluntary form, struggled significantly on all three of these criteria. The following section establishes why, by examining what the detection technologies actually do, and where they structurally fail, before returning to this framework to assess Chat Control 2.0.

The Technical Challenges of Chat Control 1.0

To evaluate Chat Control 1.0 honestly, you have to understand what the detection systems it enabled actually do and where they structurally fail. This matters not just technically but legally, because the proportionality analysis that determines whether a fundamental rights limitation is permissible depends entirely on whether the measure actually works.

Hash Matching: The Known CSAM Problem

The primary technology deployed for known CSAM detection is perceptual hashing, most commonly Microsoft's PhotoDNA, which is the de facto industry standard. The mechanism is straightforward enough: a detection tool creates a hash, a unique digital fingerprint, for an image, then compares it against a database of hashes of known CSAM to find matches. Unlike cryptographic hashing which only catches exact copies, perceptual hashing is designed to match content even where an image has been resized, cropped, or rotated.
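
To make the mechanism concrete, the sketch below implements a toy average hash in Python. It is a stand-in, not PhotoDNA, whose algorithm is not public: the point is only that a mild edit such as slight brightening leaves the perceptual hash within a small Hamming distance of the original, and therefore inside a matching threshold, while completely changing a cryptographic digest.

```python
# Toy illustration of why perceptual hashing tolerates edits that break
# cryptographic hashing. This is a simple "average hash", NOT PhotoDNA;
# it only demonstrates the general idea.
import hashlib
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downscale to hash_size x hash_size by block averaging, then threshold on the mean."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    # Crop to a multiple of the block size, then average each block.
    small = img[:bh * hash_size, :bw * hash_size].reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256)).astype(np.float64)

# A mild edit: brighten slightly (analogous to re-encoding or resizing).
edited = np.clip(original * 1.05, 0, 255)

# Cryptographic hash: any change at all produces a completely different digest.
print(hashlib.sha256(original.tobytes()).hexdigest()[:16])
print(hashlib.sha256(edited.tobytes()).hexdigest()[:16])

# Perceptual hash: the edited image stays within a small Hamming distance,
# so it would still "match" under a distance threshold.
print(hamming(average_hash(original), average_hash(edited)), "bits differ out of 64")
```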

For its narrow intended use case, detecting previously catalogued images circulating on public platforms, this is a reasonable tool. Deploying it across the private communications of an entire continent is a different proposition entirely.

Hany Farid, the algorithm's inventor, claimed in 2019 congressional testimony that PhotoDNA has a false positive rate of 1 in 50 billion,[9] a figure that gets cited constantly in policy debates but has not been independently replicated under real-world conditions at scale and deserves more scrutiny than it usually receives.
The Commission's own impact assessment for Chat Control 2.0 cites it without qualification,[10] illustrating how a manufacturer's testing claim becomes, through official repetition, an apparently authoritative baseline for legislation affecting hundreds of millions of people.

That claim is highly questionable in light of the vulnerabilities of perceptual hashing to adversarial attacks and other research findings on the limitations of PhotoDNA. Real-world deployment data tells a different story: LinkedIn's use of PhotoDNA showed that in 2021, only 41% of what the system flagged as CSAM actually constituted illegal material in the EU, meaning nearly 60% of flagged items were false positives in production. [11]

This gap between the theoretical claim and the operational reality is not a rounding error. It is the difference between a surgical instrument and a blunt one.

Scale compounds this problem in a way that headline accuracy figures systematically obscure. This is the base rate problem, and it is almost entirely absent from the political debate despite being the most important statistical argument against mass scanning.

Consider a simplified model: a classifier with 99.9% accuracy applied to 100 million users each sending 10 images per day produces 1 billion scan events daily. At a 0.1% false positive rate, that is 1 million innocent people flagged per day. The math here is not speculative. It is the same statistical principle that makes mass population screening for rare medical conditions ethically fraught even with highly accurate tests. When the condition you are screening for is rare relative to the population being scanned, even good classifiers generate more false positives than true positives. Chat Control 1.0 applied this logic to the private communications of an entire continent, with no individualised suspicion and no meaningful remediation mechanism for those incorrectly flagged.
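
The arithmetic behind that model is easy to reproduce. The sketch below works through it; the prevalence and sensitivity figures are illustrative assumptions rather than empirical estimates, and the output is the point made above in numerical form: the overwhelming majority of flags are false.

```python
# Base rate arithmetic from the simplified model above. The prevalence figure
# is an illustrative assumption, not an empirical estimate; the point is that
# flags are dominated by false positives whenever the scanned-for condition is
# rare relative to the scanned population.
daily_scans = 100_000_000 * 10      # 100 million users x 10 images per day
false_positive_rate = 0.001         # the "99.9% accuracy" classifier
true_positive_rate = 0.99           # generously assumed sensitivity
prevalence = 1e-6                   # assumed fraction of scans that are actually illegal

positives = daily_scans * prevalence
negatives = daily_scans - positives

false_alarms = negatives * false_positive_rate   # innocent content flagged per day
true_hits = positives * true_positive_rate       # genuine material flagged per day

precision = true_hits / (true_hits + false_alarms)
print(f"innocent items flagged per day: {false_alarms:,.0f}")
print(f"genuine items flagged per day:  {true_hits:,.0f}")
print(f"share of flags that are genuine: {precision:.2%}")
```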

There is also a structural evasion problem that no accuracy improvement can fix. Independent research has demonstrated that even minimal cropping, removing approximately 2.5% of the image, degrades detection significantly depending on the threshold applied.[12]

This vulnerability is worth dwelling on because it is not incidental to PhotoDNA's design, but follows directly from the property that makes the system useful. For PhotoDNA to serve its intended purpose, the hash function must respond gradually and continuously to minor visual changes rather than shifting abruptly, so that a resized or cropped image of known CSAM still produces a hash value close enough to the original to register as a match. Deryck et al. demonstrate that this gradualism has a precise mathematical consequence: the function is almost entirely differentiable across its computation, meaning the relationship between any given pixel value and the resulting hash output can be calculated rather than merely estimated. [13]

That calculability is exactly what the attacks in that paper exploit, not by searching randomly for modifications that evade detection, but by computing with mathematical precision which pixel-level changes will move an image's hash value away from the known-CSAM database, achieving this in seconds to minutes on an ordinary laptop.[13-1] The same precision runs in the other direction: it is equally straightforward to engineer a false positive by modifying an innocent image until its hash value falls close enough to known illegal material to trigger a match, potentially causing an innocent person to be reported to law enforcement without any basis.[13-2]
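
That calculability can be illustrated without going anywhere near PhotoDNA itself. The sketch below uses a toy differentiable stand-in for a perceptual hash, a tanh of a fixed random projection, to show the property the attacks depend on: because the hash responds smoothly to pixel values, the exact contribution of each pixel can be computed and used to steer the hash away from a target with small, directed changes.

```python
# Toy demonstration of the differentiability property, NOT an attack on PhotoDNA.
# The "hash" here is a stand-in: tanh of a fixed random projection.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bits = 1024, 64
W = rng.normal(size=(n_bits, n_pixels)) / np.sqrt(n_pixels)  # fixed projection standing in for the hash

def soft_hash(x):
    # Smooth by construction: nearby images map to nearby hash values.
    return np.tanh(W @ x)

def distance_and_grad(x, target):
    # Exact (not estimated) gradient of the hash distance with respect to every pixel.
    h = np.tanh(W @ x)
    diff = h - target
    grad = 2.0 * W.T @ (diff * (1.0 - h ** 2))  # chain rule through tanh
    return float(diff @ diff), grad

image = rng.normal(size=n_pixels)
target = soft_hash(image)                      # hash of the "known" image

x = image + 0.01 * rng.normal(size=n_pixels)   # start from a nearly identical copy
for _ in range(50):                            # gradient ascent: push the hash away from the target
    _, g = distance_and_grad(x, target)
    x += 0.05 * g / (np.linalg.norm(g) + 1e-12)

print("relative pixel change:", round(float(np.linalg.norm(x - image) / np.linalg.norm(image)), 3))
print("hash distance after perturbation:", round(distance_and_grad(x, target)[0], 3))
# Running the same loop as a *descent* toward someone else's target hash is the
# false-positive construction described above: an innocent image steered to match.
```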

The system is simultaneously too broad for the innocent and too narrow for the guilty.

And then there is the hard ceiling that no engineering can overcome: perceptual hashing cannot detect new or novel CSAM that has not already been identified, catalogued, and added to the hash database. Hash matching only catches what investigators have already found. The actual production of new abuse material, the harm the legislation frames itself as combating, is structurally invisible to this technology.

AI Classifiers: Unknown CSAM and Grooming

To address the new material gap, Chat Control 1.0 also permitted platforms to deploy AI classifiers to detect both unknown CSAM and grooming behaviour. This is where the technical difficulty escalates substantially.

The detection of new or unknown CSAM cannot be achieved through hashing and can only be attempted through AI-based machine learning classifiers. It is particularly challenging to determine the age of a person shown in content, especially whether they are a teenager or a young adult.

That last point is not a minor caveat. An AI system that cannot reliably distinguish a 17-year-old from a 19-year-old is not making a legal determination. It is producing a probabilistic output that may then trigger law enforcement reporting against a person with no recourse and no notice. From an incident response perspective, the parallel to forensic tooling is instructive: we would never accept an automated forensic tool that misattributed artefacts at meaningful rates as legally sufficient evidence of anything. The output of an AI classifier is a signal, not a finding, and Chat Control treated it as the latter.

Grooming detection is a harder problem still. Unlike image classification, which at least operates on a defined visual domain, grooming detection requires an AI to infer intent and relational dynamics from natural language in real time. This is structurally distinct from pattern matching: it requires a system to determine the state of mind of an actor from the content of their messages alone. Yet as Wang et al. (2025) demonstrate in their systematic review of abuse detection algorithms, state-of-the-art NLP approaches cannot reliably infer user intent from short text, particularly without sufficient context, and this represents a fundamental limitation rather than an engineering gap that additional training data can close.[14]

The grooming detection literature confirms this problem extends specifically to this domain. Machine learning models struggle to generalise across the diverse range of grooming conversations because predators employ varying strategies, and the tone of those strategies matters in ways classifiers handle unevenly: research has found that negatively-toned grooming conversations exhibit nuanced patterns that are significantly harder to distinguish from non-grooming communication than positively-toned ones, meaning classifier performance varies substantially depending on the behavioural profile of the predator rather than remaining stable across the full range of real-world cases.[15] Cross-dataset evaluation compounds this further: models trained on one dataset show meaningful performance degradation when applied to a different one, which is precisely the condition that mass deployment represents.[16]
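
The cross-dataset effect Street et al. measure follows a simple evaluation pattern: train on one corpus, evaluate on another, compare. The sketch below shows that harness using scikit-learn; the two corpora are tiny hypothetical placeholders, so the numbers it prints are meaningless, and only the structure of the comparison matters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder corpora: (message, label) pairs, 1 = flagged class, 0 = benign.
# Corpus A stands in for dated sting-operation training data, corpus B for live traffic.
corpus_a = [("example message in the style of the training corpus", 1),
            ("ordinary chat between friends about homework", 0)] * 20
corpus_b = [("contemporary platform slang the model never saw", 1),
            ("parent checking in about the school trip", 0)] * 20

def split(corpus):
    texts, labels = zip(*corpus)
    return list(texts), list(labels)

xa, ya = split(corpus_a)
xb, yb = split(corpus_b)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(xa, ya)

# With real corpora, the gap between these two numbers is the cross-dataset
# degradation; mass deployment operates permanently on the "corpus B" side.
print("F1 on held-in data: ", f1_score(ya, model.predict(xa), zero_division=0))
print("F1 on unseen corpus:", f1_score(yb, model.predict(xb), zero_division=0))
```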

The training data problem is structural rather than merely historical. The datasets underlying most grooming detection research, primarily the PAN12 corpus and the Perverted Justice transcripts spanning 2006 to 2016, are chatroom-based, linguistically dated, and generated under sting operation conditions using honeypot child actors whose communication patterns may not accurately represent how real children interact online.[16-1] A classifier trained on this data is not detecting grooming as it occurs in 2026. It is detecting statistical patterns that appeared in prosecuted chatroom cases from a different technological era, evaluated against a population whose communication has fundamentally changed.

The consequence of deploying systems with these documented failure modes at continental scale is not difficult to reason through. A classifier that cannot reliably distinguish a parent discussing online safety with their child, a teacher communicating with students, or teenagers exchanging messages from the patterns its training data associates with grooming will generate false positives across precisely the categories of innocuous adult-child communication that are most common. This is not speculation about edge cases. It is the logical consequence of the intent inference failure Wang et al. document,[14-1] the generalisation failures Hamm and McKeever identify,[15-1] and the cross-dataset degradation Street et al. measure.[16-2] The output of such a classifier is a probabilistic signal about linguistic patterns, not a determination about intent. Chat Control's framework treats it as the latter.

This asymmetry has a further consequence that the policy debate consistently ignores: the failure modes run in opposite directions for innocent users and actual predators. Those with knowledge of how these classifiers work, the linguistic patterns they target, the training data they rely on, can adapt their communication to avoid detection with relative ease. Those without any such knowledge, the parents, teachers, and teenagers whose innocuous conversations superficially resemble the training distribution, cannot. The practical result is a system whose burden falls most heavily on those it was not designed to catch, while remaining navigable for those it was.

What the Technical Reality Means Legally

A system that produces false positives at the rates documented above is not appropriate: it does not reliably achieve its stated objective.

A system that is structurally evadable by the bad actors it targets while surveilling everyone else is not necessary: the burden falls on precisely the population it was not designed to catch.

Whether the framework nonetheless satisfies the third criterion, that the benefit not be outweighed by the harm, requires engaging with what that benefit actually is, and whether the technology is capable of delivering it.

The Child Protection Case and Why It Does Not Resolve the Proportionality Question

Proportionality analysis under Article 52(1) of the Charter requires that the benefit of a measure not be outweighed by the harm it causes. That analysis cannot be conducted honestly without engaging with the benefit side of the ledger, and the benefit at stake here is not abstract.

Child sexual abuse material records the sexual exploitation of real children.
Its continued circulation online causes documented ongoing harm to survivors, independent of the original abuse: research consistently finds that victims experience serious long-term psychological consequences, and that the knowledge images of their abuse continue to circulate compounds that harm over time.[17] The Commission's own impact assessment frames this directly, noting that the distribution of CSAM constitutes a form of re-victimisation every time the images and videos are seen.[10-1]
The scale of that ongoing harm is not marginal: EU-related reports from service providers rose from 17,500 in 2010 to over one million in 2020, with those reports encompassing 3.7 million images and videos of known CSAM and 528,000 of new material, and in some member states up to 80% of investigations are launched on the basis of provider reports alone.[10-2]

Grooming causes serious and lasting psychological harm to its victims regardless of whether contact abuse follows.[17-1][18] The legislative objective that Chat Control pursues is not manufactured. It corresponds to a documented and serious harm that EU law has a legitimate interest in addressing.

The strongest version of the case for mass scanning is not that the technology is accurate enough to justify the intrusion. It is that even an imperfect system saves some children who would otherwise not be saved. If hash matching generates reports that lead to the removal of circulating material and the identification of abusers, that is a real benefit even if the false positive rate is high. If grooming classifiers identify some conversations that result in intervention before contact abuse occurs, that intervention has genuine value even if the classifier misses most cases. On this argument, the proportionality question is not whether the technology meets some accuracy threshold but whether any level of child protection benefit, however uncertain, outweighs the privacy cost imposed on the wider population.

A second version of the argument goes further: even an imperfect system creates deterrence effects. Abusers who cannot be certain what the classifier can and cannot detect may be deterred regardless of actual detection rates. On this view the proportionality question is not about accuracy at all but about whether the behavioural effect on potential offenders justifies the intrusion on the wider population.

This argument has genuine force but does not survive the asymmetry identified above. Deterrence depends on uncertainty about detection. The technical analysis in the preceding section establishes that the failure modes of both hash matching and grooming classifiers are not random; they are structured and knowable. Those with knowledge of how classifiers work, the linguistic patterns they target, the training data they rely on, can and do adapt. Deterrence effects therefore fall most heavily on unsophisticated actors while leaving the most dangerous offenders, those most likely to seek out and share novel material, structurally unaffected. A deterrence benefit that is inverse to the severity of the harm being deterred does not resolve the proportionality question. It compounds it.

Hash matching cannot detect new material. The abusers producing and distributing novel CSAM, the harm the framework most urgently needs to address, are structurally invisible to the primary detection technology. AI classifiers cannot reliably determine whether subjects in images are minors, cannot consistently infer intent from text, and perform meaningfully worse outside their training distribution, which is precisely the condition mass deployment represents. Grooming detection is evadable by those with knowledge of how classifiers work and the linguistic patterns they target, which is exactly the population of predators most likely to have that knowledge.

The framework's structural failure is not symmetric: it is least effective precisely where the harm is greatest, and most burdensome precisely where innocence is most certain.
Against that uncertain benefit, the harm side is documented. Mass scanning of private communications, at the false positive rates observed in production environments, affects hundreds of millions of European citizens whose communications are processed without their knowledge, without individualised suspicion, and without meaningful recourse. The proportionality calculus cannot be resolved in the framework's favour when the benefit it is intended to deliver is structurally undermined by the technology it relies on.

One further argument deserves acknowledgment: that the technical failures documented above are contingent rather than permanent, and that sufficiently improved detection technology might eventually satisfy the appropriateness criterion. This is a reasonable position and should not be dismissed. Detection accuracy will improve. False positive rates will fall. But technical improvement does not resolve the proportionality problem, because mass scanning of private communications without individualised suspicion is a structural feature of the framework, not a consequence of its current technical limitations. The interference occurs at the point of scanning, not at the point of flagging. Better technology would reduce the number of innocent people incorrectly reported to law enforcement. It would not change the nature of what is being done to the communications of hundreds of millions of people who have given no indication of wrongdoing.

Chat Control 2.0

Chat Control 2.0 refers to the proposed Regulation laying down rules to prevent and combat child sexual abuse, submitted by the Commission on 11 May 2022 under reference COM(2022)0209.[19] The original proposal would have required providers to scan all communications for CSAM and grooming. The proposal has undergone significant modification through the legislative process: mandatory detection obligations have been dropped, but voluntary scanning remains permissible, and large providers face risk assessment and mitigation obligations under Article 3 of the draft text, with mitigation strategies including client-side scanning and the breaking of end-to-end encrypted communications.[20] The most recent publicly available text at the time of writing is the Council's partial general approach of 13 November 2025, which forms the basis of the analysis that follows.

The November 2025 general approach represents a significant retreat from the original Commission proposal. The mandatory detection obligations that defined the original text have been deleted entirely, and Recital 17a explicitly states that nothing in the regulation should be understood as imposing detection obligations on providers.[20-1] For those following the debate, this might appear to resolve the most serious objections.

It does not.

Three features of the general approach preserve the substantive architecture of mass surveillance while removing its most legally exposed elements. The Chat Control 1.0 derogation from the ePrivacy Directive, framed throughout its existence as a temporary emergency measure, is made permanent through direct amendment to Regulation 2021/1232.[1-2] [19-1] Voluntary detection is explicitly listed as a possible risk mitigation measure that providers may indicate willingness to carry out as part of their risk reporting.[20-2] And a review clause requires the Commission to assess within three years the necessity and feasibility of reintroducing detection obligations, including an evidence-based assessment of the reliability and accuracy of the relevant available technologies.[20-3]

Read together, these three elements describe a legislative strategy rather than a retreat. The temporary becomes permanent, the voluntary becomes structurally incentivised, and the detection obligations that were removed are explicitly anticipated for reintroduction once the technology can be made to appear reliable enough. The question this piece has been examining, whether the detection technologies are capable of satisfying the proportionality requirements of Article 52(1) of the Charter, is the same question the review clause defers rather than answers.

From Temporary Derogation to Permanent Infrastructure

Chat Control 1.0 was framed from the outset as a temporary measure, a time-limited derogation from the ePrivacy Directive pending the development of a permanent framework. The general approach removes that temporal limitation by amending Regulation 2021/1232 directly, converting what was presented as a provisional arrangement into a permanent feature of EU communications law.[1-3][19-2][20-4]

The temporary character of Chat Control 1.0 was never a proportionality justification in any meaningful sense. Whether scanning the private communications of an entire continent without individual suspicion satisfies the requirements of Article 52(1) of the Charter does not turn on whether the permission has an expiry date. What the temporal limit provided was political cover, allowing legislators to defer the harder proportionality questions to a future framework that could be developed with more care and more evidence. Making the derogation permanent removes even that deferral. The scanning that was presented as a provisional measure pending proper legislative consideration is now simply the law, with no sunset and no requirement that the underlying detection technology demonstrate the reliability that a proportionality analysis demands.

The general approach does not engage with this shift directly. The recitals frame the permanence as a necessary consequence of the wider regulatory framework rather than as a substantive policy choice requiring independent justification. But the legal effect is clear: the derogation has become the norm, and the proportionality questions that temporariness allowed legislators to defer must now be confronted directly. The technical analysis in the preceding section indicates those questions remain unresolved.

The Risk Categorisation Pipeline

The removal of mandatory detection obligations from the general approach might appear to resolve the most serious privacy objections. However, the risk categorisation framework that replaces them deserves closer examination.

Under Articles 3 to 5 of the general approach, providers of hosting services and publicly available interpersonal communications services are required to conduct risk assessments of their platforms and submit those assessments to national coordinating authorities.[20-5] The coordinating authority then categorises each service or component as high risk, medium risk, or low risk based on objective criteria including the type and architecture of the service, the provider's safety policies, and patterns of user behaviour. Providers classified as high risk face obligations to develop and implement technologies to mitigate the identified risk.[20-6]

This is where the structure becomes significant. Recital 17 of the general approach explicitly lists voluntary activities under the Chat Control 1.0 derogation as a possible mitigation measure, and states that providers may indicate their willingness to carry out such activities as part of their risk reporting, with the possibility of eventually being issued a detection order if deemed necessary by the competent national authority.[20-7] Read carefully, this creates a regulatory pathway from risk classification through mitigation obligation through voluntary scanning toward potential detection order. At no point is a provider formally compelled to scan. The incentive structure, however, systematically rewards those who do.

This is the same public/private routing logic identified in the analysis of Chat Control 1.0, now institutionalised into a governance framework. A provider that does not voluntarily scan faces a higher residual risk classification, which triggers stronger mitigation obligations, which increases the likelihood of a detection order. A provider that voluntarily scans demonstrates mitigation effort, which reduces its regulatory exposure. The choice is formally free. The consequences of each option are not symmetric.

The proportionality implications are direct. A framework that creates structured incentives toward mass scanning of private communications without formally mandating it does not escape the Article 52(1) requirements simply because the formal mandate is absent. The question of whether the measure is appropriate, necessary and proportionate applies to the actual effect of the framework on the communications of European citizens, not to the formal characterisation of each provider's participation as voluntary.

Detection Orders and the Limits of Judicial Oversight

The general approach introduces a judicial authorisation requirement for detection orders that was entirely absent from Chat Control 1.0. Before a detection order can be issued, a competent judicial authority or independent administrative authority must conduct a case-by-case assessment of necessity and proportionality.[20-8] This is presented in the legislative framing and in the Commission's response to the Rule 144 question as the framework's primary safeguard for fundamental rights.[8-1]

The mechanism is directional in a way that this framing obscures. The detection order procedure exists to compel providers who have not adequately mitigated identified risk. It does not constrain providers who are scanning too broadly, too inaccurately, or in ways that generate disproportionate interference with the private communications of innocent users. A provider that voluntarily scans and generates false positives at the rates documented in the preceding section faces no judicial scrutiny of that activity under this framework. A provider that declines to scan faces the prospect of a detection order. The judicial mechanism therefore functions as a compliance floor, ensuring that providers do enough to satisfy coordinating authorities. It provides no ceiling on the scope, accuracy, or proportionality of what providers actually do when they scan voluntarily. The fundamental rights of citizens whose communications are processed are not protected by a mechanism designed to ensure that processing occurs.

This directionality also means the detection order pathway may rarely be reached in practice. A provider that voluntarily scans as a risk mitigation measure never reaches the stage where a coordinating authority requests a detection order. The incentive architecture described in the preceding section is designed precisely to ensure that most providers do not resist. The judicial oversight that the general approach presents as its central safeguard therefore applies primarily to the edge case where a provider declines to cooperate, not to the general case where voluntary scanning operates at scale across the platforms used by hundreds of millions of European citizens. The safeguard governs the exception. The rule operates without it.

There is a further problem that the directional framing of the mechanism cannot resolve. Even in cases where a detection order is issued and judicial authorisation obtained, the authorisation addresses the procedural legitimacy of issuing the order, not the technical reliability of executing it. A judge assessing necessity and proportionality under the general approach is evaluating whether issuing a detection order is procedurally warranted given the provider's assessed risk profile, not whether the technology that will execute it can reliably achieve its stated purpose at the accuracy levels the appropriateness criterion requires. The authorisation procedure contains no mechanism for independent technical verification of the classifier or hash function to be deployed, and the coordinating authority's assessment of technical feasibility relies on information supplied by the provider itself. The Commission's response to the Rule 144 question,[7-1] which asked precisely how the proposal satisfies Article 52(1) given the risk of unintended consequences, pointed to judicial oversight and the impact assessment without engaging with the technical reliability question that appropriateness demands.[8-2] The impact assessment it points to is itself grounded largely in manufacturer-supplied figures and stakeholder consultation rather than independent peer-reviewed verification,[10-3] which means the chain of deference terminates not in evidence but in the same unverified claims this piece has already examined.

A measure whose execution depends on detection technology with documented real-world failure rates raises serious questions about whether it is capable of achieving its objective regardless of the procedural safeguards surrounding its authorisation.

Judicial oversight is necessary. It is not sufficient, and in the structure the general approach creates, it is largely beside the point.

Client-Side Scanning and the Encryption Question

The general approach states explicitly that cybersecurity and encryption are comprehensively protected under the framework.[20-9] This claim deserves examination alongside the list of mitigation strategies providers may adopt, which includes client-side scanning.

Client-side scanning scans the content of a message on the user's device before it is encrypted and transmitted. From a narrow technical perspective, the encryption itself remains intact: the message that leaves the device is encrypted, and no third party can read it in transit or at rest. The general approach can therefore assert truthfully that encryption is not broken. From the perspective of Articles 7 and 8 of the Charter, however, the relevant question is not whether the encryption algorithm is intact but whether the content of private communications can be accessed by third parties before the user sends them. Client-side scanning answers that question affirmatively. The surveillance occurs before encryption, not despite it. The technical distinction the general approach relies upon does not correspond to a meaningful distinction in the rights interference that results.
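
The ordering is easier to see in code than in prose. The sketch below is purely conceptual, with hypothetical names and no relation to any real messenger's implementation: the encryption layer is left untouched, and the plaintext has nonetheless been inspected, and could be reported, before it is ever encrypted.

```python
# Conceptual sketch of a client-side scanning pipeline. All names are
# hypothetical; the only point is the ordering of the steps on the device.
from dataclasses import dataclass

@dataclass
class ScanResult:
    matched: bool
    reason: str

def perceptual_match(plaintext: bytes) -> ScanResult:
    # Placeholder for an on-device hash lookup against a database of known material.
    return ScanResult(matched=False, reason="no match")

def report_to_provider(plaintext: bytes, result: ScanResult) -> None:
    # Placeholder for forwarding matched content off the device for review and reporting.
    ...

def encrypt_e2e(plaintext: bytes, session_key: bytes) -> bytes:
    # Toy stand-in for the messenger's end-to-end encryption layer (not real cryptography).
    keystream = (session_key * (len(plaintext) // len(session_key) + 1))[: len(plaintext)]
    return bytes(a ^ b for a, b in zip(plaintext, keystream))

def send_message(plaintext: bytes, session_key: bytes) -> bytes:
    # 1. The scanner sees the full plaintext on the device...
    result = perceptual_match(plaintext)
    if result.matched:
        # ...and can forward it before any encryption has taken place.
        report_to_provider(plaintext, result)
    # 2. Only after scanning is the message encrypted; the ciphertext leaving the
    #    device is untouched, which is why "encryption is not broken" is true in a
    #    narrow sense and beside the point in the relevant one.
    return encrypt_e2e(plaintext, session_key)

ciphertext = send_message(b"hello", b"k3y!")
```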

The European Court of Human Rights addressed the relationship between encryption and the right to private life directly in Podchasov v. Russia, decided on 13 February 2024.[21] The case concerned Russian legislation requiring Telegram to store communications data and to provide law enforcement with decryption keys on demand. The Court held unanimously that weakening end-to-end encryption or creating backdoors constitutes a violation of Article 8 of the ECHR, finding that such measures enable routine, widespread and indiscriminate surveillance and are disproportionate regardless of what procedural safeguards accompany them. Although Russia is no longer a party to the ECHR, the judgment is an authoritative interpretation of Article 8 binding on EU member states.

The Court's reasoning rests on a specific finding: that a decryption order compromises the security of all users per se, because the mere existence of a technical architecture enabling access to any user's private data introduces an indiscriminate vulnerability, irrespective of how often or selectively that capability is actually used. It is this shift from actual access to technical capability that defines the Court's substantive proportionality analysis and distinguishes it from its earlier proceduralist approach in cases like Big Brother Watch v UK.

Academic commentary has identified client-side scanning as among the surveillance technologies whose compatibility with this reasoning remains to be resolved before the Court.[22] The strongest argument for bringing CSS within Podchasov's scope is that embedding scanning software into all users' devices creates a systematic technical capability to access private data across an entire platform, with the target list expandable through minimal reconfiguration, producing the same indiscriminate vulnerability to all users that the Court found determinative in Podchasov.[22-1] Whether the Court would accept this argument, or would instead distinguish CSS on the basis that actual access remains limited to matched content, is an open question. What is not open is the underlying structural problem: a framework that prohibits breaking encryption while permitting scanning that achieves access to content before encryption has not resolved the tension between mass content monitoring and the right to private correspondence. It has relocated it.

Proportionality Applied to Chat Control 2.0

The proportionality framework established at the outset of this piece applies to Chat Control 2.0 with the same force it applies to 1.0, and with additional weight given the permanent nature of the derogation and the expanded scope of the obligations involved.

The appropriateness criterion requires that the measure be capable of actually achieving its stated objective. Chat Control 2.0's stated objective is to prevent and combat child sexual abuse online. The technical analysis in the preceding sections establishes that the detection technologies on which the framework depends, perceptual hashing, AI classifiers for unknown CSAM, and grooming detection systems, each fail this criterion on their own terms. Hash matching cannot detect new material. AI classifiers cannot reliably determine the age of subjects in images, cannot consistently distinguish prohibited content from legal material in contextually ambiguous cases, and produce probabilistic outputs that the framework then treats as the basis for law enforcement reporting against real people without recourse. Grooming detection cannot reliably infer intent from text. Client-side scanning achieves access to content before encryption in a way that may be structurally disproportionate regardless of how selectively that access is exercised. A framework whose detection layer is not capable of reliably identifying the harms it targets does not satisfy the appropriateness requirement, and the review clause's deferred promise of future evidence-based assessment does not cure that deficiency in the interim. The permanent derogation operates now, on the basis of technology whose reliability the review clause implicitly acknowledges remains undemonstrated.

The necessity criterion requires that the measure be the least intrusive option capable of achieving the objective. The general approach does not demonstrate that less intrusive alternatives were considered and found inadequate. Targeted investigation of individuals on the basis of specific suspicion, enhanced cooperation between law enforcement agencies, removal and blocking orders for identified material, and investment in human review capacity are each less intrusive than the mass scanning of private communications at continental scale. The framework's structure, in which risk categorisation creates compliance pressure toward voluntary scanning without formal mandate, does not constitute evidence that these alternatives were assessed and rejected. It constitutes evidence that the legislative architecture was designed to produce scanning outcomes while avoiding the formal necessity analysis that a mandatory scanning obligation would have required.

The CJEU has already addressed the compatibility of general and indiscriminate data processing obligations on communications providers with the Charter in directly relevant contexts. In La Quadrature du Net and Others and Privacy International, both Grand Chamber judgments handed down on 6 October 2020, the Court held that national legislation requiring providers of electronic communications services to carry out general and indiscriminate retention or transmission of traffic and location data to security agencies is incompatible with Article 15(1) of the ePrivacy Directive [2-1] read in light of Articles 7, 8 and 52(1) of the Charter, even where national security objectives are invoked.[23][24] The Court established that any such measure must be strictly proportionate to its stated purpose and that derogations from the protection of personal data must apply only insofar as strictly necessary. Chat Control operates as a derogation from that same Article 15(1) framework. [2-2] The judgments do not directly address content scanning, as they concern metadata retention, but they establish the proportionality baseline against which any derogation from the ePrivacy Directive must be assessed, and that baseline has not been satisfied by the technical evidence examined in this piece.

The proportionality criterion requires that the benefit not be outweighed by the harm caused. The harm caused by the framework as structured is substantial and documented. Mass scanning of private communications without individual suspicion generates false positives at rates that, in production environments, approach 60% for image hashing. It subjects the private communications of hundreds of millions of innocent European citizens to automated analysis by systems whose accuracy and reliability have not been independently verified at scale. It creates compliance pressure toward client-side scanning, which introduces systematic technical vulnerabilities into the devices of all users of affected platforms. And it does this permanently, under a framework that provides judicial oversight only for the edge case of provider non-compliance, with no equivalent mechanism protecting citizens from overbroad or inaccurate scanning by providers who comply voluntarily.

Against these documented harms, the benefit is uncertain. The review clause's requirement for an evidence-based assessment of technology reliability within three years is an implicit acknowledgment that the evidence base does not currently exist to demonstrate the framework achieves its objectives. A permanent legal infrastructure whose proportionality depends on future evidence that has not yet been produced does not satisfy the requirements of Article 52(1) of the Charter. It defers them.

Conclusion

Chat Control 1.0 expires on the 3rd of April 2026. What replaces it is not a clean break but a continuation under different legal architecture. The temporary derogation is made permanent. The voluntary framework is embedded in a risk categorisation system that creates structured incentives toward scanning while maintaining formal deniability about its mandatory nature. The detection obligations that attracted the most criticism have been removed, with a review clause that anticipates their reintroduction once the technology can be made to appear reliable enough.

The technical picture has not improved in the interim, and the proportionality questions the temporary character of Chat Control 1.0 allowed legislators to defer must now be confronted directly. The framework does not fare well against them. The detection technologies it relies on are not capable of reliably achieving its stated objectives. Less intrusive alternatives exist and have not been demonstrated to be inadequate. The documented harms to the private communications of hundreds of millions of innocent European citizens are not outweighed by benefits whose existence the framework's own review clause implicitly treats as unproven.

The framework's structural failure is not symmetric: it is least effective precisely where the harm is greatest, and most burdensome precisely where innocence is most certain.

The most likely forum for those questions is the Court of Justice. A challenge grounded in Articles 7, 8 and 52(1) of the Charter, whether brought by a member state, a civil society organisation, or through a reference from a national court, would require the CJEU to address directly what the legislative process has consistently deferred: whether the detection technologies the framework relies on are capable of satisfying the appropriateness criterion, and whether the incentive architecture of the risk categorisation system constitutes a de facto mandatory scanning obligation that the removal of formal detection orders cannot cure. The ECtHR remains a parallel avenue, particularly if CSS becomes operational and a challenge can be grounded in Podchasov's reasoning about systematic technical capability.

The Commission's response to the Rule 144 question pointed to judicial oversight and an impact assessment. Neither engages with the technical reliability question on which the appropriateness criterion depends. That question has not been answered. Under Chat Control 2.0, it has simply been deferred, while the infrastructure that the answer would need to justify is made permanent.



  1. Regulation (EU) 2021/1232 of the European Parliament and of the Council of 14 July 2021 on a temporary derogation from certain provisions of Directive 2002/58/EC as regards the use of technologies by providers of number-independent interpersonal communications services for the processing of personal and other data for the purpose of combating online child sexual abuse. https://eur-lex.europa.eu/eli/reg/2021/1232
  2. Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications). http://data.europa.eu/eli/dir/2002/58/oj
  3. Article 8 of the European Convention on Human Rights (ECHR). https://fra.europa.eu/en/law-reference/european-convention-human-rights-article-8-0
  4. Article 7 of the EU Charter of Fundamental Rights - Respect for private and family life. https://fra.europa.eu/en/eu-charter/article/7-respect-private-and-family-life
  5. Article 8 of the EU Charter of Fundamental Rights - Protection of personal data. https://fra.europa.eu/en/eu-charter/article/8-protection-personal-data
  6. Article 52 of the EU Charter of Fundamental Rights - Scope and interpretation of rights and principles. http://data.europa.eu/eli/treaty/char_2016/art_52/oj
  7. Rule 144 Question for written answer E-003645/2025 by Ewa Zajączkowska-Hernik (ESN). https://www.europarl.europa.eu/doceo/document/E-10-2025-003645_EN.html
  8. Answer given by Mr Brunner on behalf of the European Commission - 17.12.2025. https://www.europarl.europa.eu/doceo/document/E-10-2025-003645-ASW_EN.html
  9. Hany Farid Ph.D. Testimony before the House Committee on Energy and Commerce, October 16 2019. https://www.congress.gov/116/meeting/house/110075/witnesses/HHRG-116-IF16-Wstate-FaridH-20191016.pdf
  10. Commission Staff Working Document, Impact Assessment accompanying the Proposal for a Regulation laying down rules to prevent and combat child sexual abuse, SWD(2022) 209 final, 11 May 2022. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022SC0209
  11. LinkedIn Report pursuant to Article 3, Subsection (g)(vii) of Regulation (EU) 2021/1232. https://www.linkedin.com/help/linkedin/answer/a1347128
  12. Steinebach, M. (2024). Robustness and collision-resistance of PhotoDNA. Journal of Cyber Security and Mobility, 13(3), 541–564. https://doi.org/10.13052/jcsm2245-1439.1339
  13. Deryck, M., Leblanc-Albarel, D., & Preneel, B. (2026). White-Box Attacks on PhotoDNA Perceptual Hash Function. Cryptology ePrint Archive, 2026/486. https://eprint.iacr.org/2026/486
    (Currently available as a preprint; not yet peer-reviewed. The mathematical properties the attacks exploit are grounded in established properties of perceptual hash functions documented in the peer-reviewed literature.)
  14. Wang, X., Koneru, S., Venkit, P. N., Frischmann, B., & Rajtmajer, S. (2025). The unappreciated role of intent in algorithmic moderation of abusive content on social media. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-180
  15. Hamm, L., & McKeever, S. (2025). Comparing machine learning models with a focus on tone in grooming chat logs. Frontiers in Pediatrics, 13, 1591828. https://doi.org/10.3389/fped.2025.1591828
  16. Street, J., Ihianle, I. K., Olajide, F., & Lotfi, A. (2025). Enhanced online grooming detection employing context determination and message-level analysis. Intelligent Systems with Applications, 28, 200607. https://doi.org/10.1016/j.iswa.2025.200607
  17. Collin-Vézina, D., Daigneault, I., & Hébert, M. (2013). Lessons learned from child sexual abuse research: prevalence, outcomes, and preventive strategies. Child and Adolescent Psychiatry and Mental Health, 7(1), 22. https://doi.org/10.1186/1753-2000-7-22
  18. McElvaney, R. (2015). Disclosure of child sexual abuse: delays, non-disclosure and partial disclosure. Child Abuse Review, 24(3), 159–169. https://doi.org/10.1002/car.2280
  19. Proposal for a Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse - COM/2022/209 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0209
  20. Proposal for a Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse (General approach) - Partial mandate for negotiations with the European Parliament - ST 15318 2025 INIT. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CONSIL:ST_15318_2025_INIT
  21. Podchasov v. Russia, Application No. 33696/19, European Court of Human Rights, Third Section, 13 February 2024. https://hudoc.echr.coe.int/eng?i=001-230854
  22. Basri, N. (2025). Podchasov v Russia: a new frontier in the crypto-wars before the Strasbourg Court. International Data Privacy Law, ipaf031. https://doi.org/10.1093/idpl/ipaf031
  23. Joined Cases C-511/18, C-512/18 and C-520/18, La Quadrature du Net and Others v Premier ministre and Others, Court of Justice of the European Union (Grand Chamber), 6 October 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62018CJ0511
  24. Case C-623/17, Privacy International v Secretary of State for Foreign and Commonwealth Affairs and Others, Court of Justice of the European Union (Grand Chamber), 6 October 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:62017CJ0623