
The EU’s Updated Child Safety Rules: Why Protecting Kids Must Not Come at the Cost of Mass Surveillance
Categories: Data Privacy, Encryption, Government, Policy
The European Union is once again reshaping its legal approach to fighting online child sexual abuse material (CSAM). On the surface, every member state agrees on the mission: protect children from exploitation, abuse, and, increasingly, AI-generated CSAM. But the real debate is over how to protect children – without undermining the privacy and security of every EU resident.
Recent updates from the European Parliament and Council show a clear trend: child safety and privacy must not be treated as opposing goals. Instead, legislation must recognize both as fundamental rights.
This article breaks down the EU’s updated position, the growing concerns around “chat control,” and what these developments mean for ordinary users, platforms, and privacy-focused organizations.
A Surge in Online Child Abuse Content
The European Parliament notes that over 36.2 million reports of suspected online child sexual abuse were recorded in 2023 – the highest annual figure ever documented.
Much of this growth involves increasingly younger victims and new vectors such as:
- AI-generated child abuse content
- Live-streamed abuse
- Sophisticated cross-platform grooming
- Encrypted channels used by organized networks
This is a real, urgent, and undisputed crisis, and it must be addressed effectively.
However, the way this crisis is managed matters. Imagine being banned from having a front door that closes because you live in a high-crime area – that way, authorities can see into your home at any time, just in case. In the case of online child safety, proposed solutions often hinge on compromising everyone's privacy in just this way.
The key question here is: how do we protect children, at scale, without breaking encryption, scanning everyone’s private messages, or enabling mass surveillance?
🇪🇺 The EU’s Proposed Approach: Risk-Based, Not Surveillance-Based
According to the Parliament’s updated position, the EU aims to prevent and combat child sexual abuse while upholding fundamental rights, including privacy.
Critically, the Parliament explicitly rejects:
- blanket scanning of private messages
- mass “chat control”
- any requirement for providers to build encryption backdoors
- continuous monitoring of all digital communication
Instead, the proposed regulation focuses on a “graduated, risk-based model” – a system that imposes requirements based on the risk level of the platform or online environment in question. The approach can be roughly broken into three components:
1. Mandatory risk assessments
Platforms must analyze the likelihood of CSAM appearing on their service. High-risk services must take proportionate steps.
2. Safety-by-design requirements
Platforms must integrate features such as:
- parental controls
- abuse reporting tools
- certain age-verification tools (with safeguards)
3. Detection orders only as a last resort
A judge may issue a targeted, time-limited detection order if:
- there is reasonable suspicion against a specific individual or group
- other mitigation measures failed
- the communications in question are not end-to-end encrypted (E2EE messages are excluded from detection orders)
This is a far cry from the originally feared “mass chat scanning” model.
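To see why that model was so feared, it helps to look at the mechanics. In end-to-end encrypted messaging, only the two endpoints hold the key, so a "scan everything" mandate can only be satisfied by scanning on the user's device before encryption, or by giving the provider a way to read ciphertext – a backdoor. The toy model below is a minimal sketch of this, using the Python cryptography package's Fernet cipher as a stand-in for a real E2EE session key; the scanner placement described in the comments is hypothetical.

```python
# Toy model of E2EE messaging. Fernet here stands in for a real
# end-to-end session key negotiated between two devices.
from cryptography.fernet import Fernet

# The key exists only on the two endpoints; the relay server never sees it.
endpoint_key = Fernet(Fernet.generate_key())

def send(plaintext: str) -> bytes:
    # A "chat control" scanner would have to run HERE, on the plaintext,
    # before encryption - i.e. on every user's device, for every message.
    return endpoint_key.encrypt(plaintext.encode())

def relay(ciphertext: bytes) -> bytes:
    # With E2EE intact, the server only ever handles ciphertext:
    # there is nothing meaningful for it to scan.
    return ciphertext

def receive(ciphertext: bytes) -> str:
    return endpoint_key.decrypt(ciphertext).decode()

token = relay(send("hello"))
assert receive(token) == "hello"
print("server sees only ciphertext:", token[:16], "...")
```

The only other way to satisfy a blanket-scanning mandate is to give the server a copy of the key, which is exactly the "backdoor" that Parliament's position rejects.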
The Split Among Member States: “Chat Control” Concerns
Despite privacy protections introduced by Parliament, several EU member states remain divided.
A recent analysis (European Newsroom, Oct 2025) showed:
- Some member states still prefer aggressive scanning, believing it’s the only way to protect children.
- Others warn that any mass scanning approach risks undermining encryption, citizen privacy, and even journalist-source protections.
- Several messaging platforms have publicly opposed the Commission’s earlier “chat control” language, arguing it would force them to weaken encryption globally.
This division is documented in detail in the European Newsroom report, which describes the CSAR proposal as “splitting the bloc’s 27 countries” because of deep privacy concerns.
Why Privacy Advocates Are Still Concerned
Even if Parliament’s version avoids mass scanning, two significant risks remain:
1. Detection orders could expand over time
Even targeted tools set dangerous precedents if they are not tightly controlled. Take automated license plate readers, a tool meant to help law enforcement do its job: in reality they have a long history of misuse, including one case in which a police lieutenant used Flock cameras to stalk his estranged wife.
2. Age verification systems can become de facto ID systems
If poorly implemented, they may erode anonymity across the EU web – something privacy regulators have warned about in other contexts.
Requiring ID to access the web is a huge point of contention right now; multiple major platforms, including YouTube and Discord, have already started rolling out verification, largely in response to legislation from the UK, EU, and Australia. Around 70,000 IDs submitted for Discord verification were leaked recently, along with roughly 13,000 verification selfies and ID photos from the Tea app.
Both leaks put tens of thousands of people at risk of targeted crime. This is likely a driving force behind EU and Australian regulators now pushing back against explicit ID requirements, with Australia's Department of Infrastructure stating:
"No Australian will be compelled to use government identification (including Digital ID) to prove their age online, and platforms must offer reasonable alternatives."
Privacy & Child Safety Must Co-Exist (Not Compete)
EU and Australian parliamentary bodies seem to agree on this point, with the EU's updated position recognizing that:
Protecting children does not require putting every EU citizen under surveillance.
Multiple pages of the Parliament’s document reaffirm fundamental rights such as:
- the right to privacy
- the right to secure encryption
- the right to private communication
The Council’s position also includes safeguards, but privacy groups still say it leaves major risks.
Keep Your Data Private While Staying Safe
On most mainstream platforms, a huge amount of what you do is logged, profiled, and monetized (or sold).
Regardless of legislative changes, companies will always be motivated to track you however they can (see: Meta & Yandex Localhost Tracking). History shows they’re unlikely to prioritize your informed consent (see: Meta begins AI training on EU user data). The solution? Minimize the amount of data they have access to. For social media platforms, your first step should be deleting as much content as possible, especially if it contains sensitive data.
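To get a sense of what that first step looks like done by hand, here is a minimal sketch for a single platform: scrubbing your Reddit comment history via the third-party PRAW library. All credentials below are placeholders, and the approach has real limits, noted in the comments.

```python
import praw

reddit = praw.Reddit(
    client_id="YOUR_APP_ID",          # placeholder
    client_secret="YOUR_APP_SECRET",  # placeholder
    username="YOUR_USERNAME",         # placeholder
    password="YOUR_PASSWORD",         # placeholder
    user_agent="manual-cleanup-sketch",
)

# Overwrite, then delete, each comment the listing API will return.
# Note: Reddit listings cap out around 1,000 items, so your oldest
# content may be unreachable this way - one reason purpose-built
# tools exist.
for comment in reddit.user.me().comments.new(limit=None):
    comment.edit(".")   # scrub the text first, in case deletion leaves cached copies
    comment.delete()
```

And that only covers comments on one platform; posts, images, DMs, and every other account each need their own pass.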
That process can take days or longer depending on your digital footprint, which is why we built Redact.dev: the world's most comprehensive digital footprint management tool, able to wipe old content across all major platforms in a few clicks. You can try it for free on Facebook, Reddit, Discord, and Twitter/X content.
The EU’s New Tech Reality: AI, Deepfakes & Live Abuse
The EU’s 2024–2025 updates explicitly acknowledge new technological threats:
- AI-generated CSAM
- Live-streamed abuse
- Deepfakes depicting minors
- AI-enabled grooming patterns
The European Parliament’s June 2025 position would criminalize the use of AI systems adapted for child sexual abuse, as well as expand law-enforcement capabilities to include:
- undercover digital operations
- honeypot accounts
- covert surveillance in highly targeted scenarios
This modernization effort is referenced in the Parliament document on page 5.
Why Privacy-Preserving Safety Measures Work Better Long-Term
Mass scanning – whether by AI or by hashing algorithms – creates unacceptable risks:
- Backdoors weaken encryption globally
- “Temporary” surveillance powers often become permanent
- False positives generate harm (see the sketch after this list)
- Criminals bypass scanning, while everyday people lose privacy
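To make the false-positive point concrete, here is a toy version of hash-based matching. Real matchers (PhotoDNA and similar perceptual hashes) are far more sophisticated and largely proprietary; the 16-pixel "images" and the threshold below are purely illustrative.

```python
# Toy perceptual-hash matcher: an "average hash" over tiny grayscale images.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

MATCH_THRESHOLD = 3  # illustrative: "close enough" counts as a hit

flagged = [10, 20, 200, 210, 15, 25, 205, 215,
           12, 22, 202, 212, 11, 21, 201, 211]   # image on a blocklist
recompressed = [p + 4 for p in flagged]           # same image, re-encoded
unrelated = [30, 40, 180, 190, 35, 45, 185, 60,
             32, 42, 182, 192, 31, 41, 181, 191]  # a different picture

h = average_hash(flagged)
print(hamming(h, average_hash(recompressed)))  # 0 -> correctly matched
print(hamming(h, average_hash(unrelated)))     # 1 -> false positive: an
# innocent image whose light/dark layout happens to resemble the target
```

At the scale of billions of private messages per day, even a tiny false-positive rate means enormous numbers of innocent photos and conversations flagged for human review.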
Targeted, court-authorized, proportionate measures work better. They avoid infringing on the rights of millions while enabling law enforcement to act where it matters.
This is why both privacy groups and many EU lawmakers now lean toward:
- platform-level risk mitigation
- targeted detection orders
- better victim support
- safer platform design
- stronger cross-border cooperation
…instead of indiscriminate monitoring.
For readers interested in broader consumer-rights trends, see our article on global shifts in privacy legislation.
Victims’ Rights Are Expanding – A Critical Step Forward
A less-discussed but essential element of the proposal is the creation of an EU Centre for Child Protection, which would:
- receive reports
- analyze CSAM
- forward cases to national authorities
- support victims trying to remove harmful content
- coordinate investigations across borders
This is a significant victory for survivors – and one that does not require mass surveillance to be effective.
Final Analysis: The EU Is Moving Toward a More Balanced Framework
Based on the most recent Parliament and Council positions, the EU appears to be shifting away from the earlier, feared "chat control" approach and toward a privacy-preserving child protection model. However, advocacy groups still warn that even the revised position could enable de facto scanning creep if its limits are not tightly drawn.
The updated approach values:
✔️ Child safety as a fundamental, non-negotiable priority
✔️ Privacy and encryption as essential digital rights
✔️ Targeted enforcement – not blanket scanning
✔️ Transparency, oversight, and proportionality
✔️ Regulation focused on risk, not mass surveillance
But as negotiations continue into 2026, pressure from civil society, privacy experts, and human-rights organizations will be essential to ensure the final law maintains this balance.

