
Child Safety vs Privacy? The choice is clearer than we think  

In May 2022, the European Commission published a ground-breaking proposal to tackle child sexual abuse online. It would require online platforms to scan for Child Sexual Abuse Material (CSAM) and grooming on their platforms, or risk facing fines and penalties. The proposal has caused shockwaves across the continent and even the globe, as policymakers are tasked with finally facing the reality of dangerous predators in the 21st century and how they target children.

For online platforms, the EU’s new legislation to tackle child sexual abuse online could result in a series of new scanning, transparency, and reporting obligations.  The controversial file has already evoked strong reactions from industry players who are beginning the process of understanding, and preparing for the implementation of, this complex law.  

Emma O’Mahony from Grayling Brussels’ New Technologies team will help us understand the main controversies around the new proposed rules.

***

“We need to crack down on [child sexual abuse] …but we must not interfere with end-to-end encryption” – Nancy Faeser, German Federal Minister of the Interior

Did we ever think that we, as European citizens, would be faced with such a choice?

Yet, it is the reality in 2022. I’m talking about the Commission’s proposal to tackle child sexual abuse online, a bold new piece of legislation under which mandatory detection orders could be issued to online platforms that pose a significant risk of hosting Child Sexual Abuse Material (CSAM) or grooming content. Platforms subject to such an order would be obliged to scan all content hosted on their service – including private messaging – for the heinous material.

The debate in Brussels is still in its earliest stages but has already caused deep political divisions, sparking passionate outcries for child protection at any cost on the one hand, and for privacy at any cost on the other. These divisions mean that progress on the file is sure to move at a snail’s pace, if at all.

Predators hide behind end-to-end encryption

In the 21st century, child predators hide behind end-to-end encrypted messaging and are sheltered by walls of privacy regulations, allowing them to target children virtually in plain sight and without any real-life repercussions. Europe has become a global hub for online child sexual abuse, with child safety NGO the Internet Watch Foundation highlighting that its world-class digital infrastructure and strong privacy laws have created the perfect cesspit for CSAM to proliferate.

Home Affairs Commissioner Ylva Johansson, the EU’s principal proponent of legislation tackling CSAM, has referred to the internet as a “hunting ground for predators”, and has furthermore highlighted that these predators often directly target children through private chats and videogames. This reality has been the driving factor behind the Commission’s efforts to update child safety laws in line with the new methods used by predators to either share their illegal content or directly target children.

The potential of online scanning to save a child

Those in favour of the proposal rest on a clear and undeniable argument – that the creation and dissemination of CSAM, as well as the grooming of minors, are abhorrent crimes with lifelong consequences for the victims, and serious measures must be taken to catch those guilty of them.

Some tech companies are already voluntarily scanning for CSAM on their sites using AI tools that compare images uploaded to their platforms against known CSAM. This process involves a pixel-by-pixel comparison, meaning that, unless an image is flagged by the AI tool as CSAM, it will never be seen by another person’s eyes. However, it must be noted that these technologies are not foolproof and are still largely in their development phases.
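To make the mechanism more concrete, below is a minimal sketch of matching uploads against a list of fingerprints of known material. Everything here is an assumption for illustration – the placeholder fingerprint, the function names, and the use of a plain cryptographic hash – since production tools typically rely on perceptual hashes that tolerate resizing and re-encoding, and the proposal itself does not prescribe a specific technique.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal material, as would be
# supplied by an authority or hotline. The value below is a placeholder only.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c4",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded image.

    Real systems use *perceptual* hashes that survive re-encoding; a plain
    SHA-1 digest is used here only to keep the sketch self-contained.
    """
    return hashlib.sha1(image_bytes).hexdigest()

def is_known_material(image_bytes: bytes) -> bool:
    """Flag an upload only if its fingerprint matches the known-material list.

    Uploads that do not match are never surfaced to a human reviewer, which is
    the privacy property described above.
    """
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The property the sketch illustrates is that content which does not match the list is never shown to a human; only flagged matches are escalated for review.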

The Internet Watch Foundation has highlighted the enormous potential of online scanning: giving authorities access to the vast amount of CSAM that currently goes undetected, in the absence of mandatory scanning, could allow them to prosecute an unknown number of predators who would otherwise continue to victimise children. In the same vein, authorities could find children who have been victimised and remove them from the unimaginable situations in which they are subjected to the worst forms of abuse.

A dangerous precedent for data protection

Those falling on the pro-privacy end of the debate do not deny the severity of the crime at hand or the need to take far-reaching measures to tackle it. They simply believe that requiring certain platforms to scan for CSAM and grooming would represent a massive overreach of both governmental and platform power. If authorities can peer into people’s private messaging to look for CSAM, does this mean that any future rogue government could peer into our messages to look for political dissent, conversations about abortion where it is illegal, or attempts by workers to unionise? In the face of these concerns, Commissioner Johansson has assured sceptics that the technology developed and used for the detection of CSAM will be developed to “detect child sexual abuse and child sexual abuse only”.

At the core of the pro-privacy side of the debate is respect for the fundamental right to privacy, which many consider essential to freedom of expression and personal development. Additionally, in the wake of data-collection scandals such as the Facebook-Cambridge Analytica debacle, privacy advocates insist that individuals should have complete control over who can access their data.

Furthermore, from a purely technical standpoint, opponents of the legislation worry that creating a “back door” through which authorities can access end-to-end encrypted messages could create security vulnerabilities that allow hackers to access individuals’ private data.
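To illustrate the structural nature of this worry, here is a minimal sketch, assuming a public-key scheme via the PyNaCl library, of why adding an extra decryption capability to an end-to-end encrypted channel widens the attack surface. The “escrow” key and all names are hypothetical and chosen only for illustration; the proposal does not specify any particular technical design.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()
escrow_key = PrivateKey.generate()  # hypothetical key held for lawful access

message = b"private message"

# End-to-end: only the recipient's private key can open this ciphertext.
to_recipient = SealedBox(recipient_key.public_key).encrypt(message)

# "Back door": the same plaintext is also sealed to the escrow key, so anyone
# who obtains that single key - including a hacker - can read all traffic.
to_escrow = SealedBox(escrow_key.public_key).encrypt(message)

assert SealedBox(recipient_key).decrypt(to_recipient) == message
assert SealedBox(escrow_key).decrypt(to_escrow) == message
```

In this sketch the escrow key becomes a single point of failure: compromising it exposes every message, which is the security vulnerability opponents point to.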

Industry players, privacy groups, and even some EU Member State governments have already signalled their intention to mount fierce opposition to the legislation once the EU institutions begin work on it – something which can be expected in the autumn or winter of 2022.

Much ado about nothing?

This debate could be rendered virtually meaningless if feasibility concerns aren’t properly addressed. Sceptics have questioned whether the technology needed to scan private messaging for CSAM and grooming even exists. If it does, they have further asked proponents how it could be ensured that the technology would not falter and produce a massive number of false accusations against innocent parties. Such false positives could not only cause irreparable damage to the accused, but also overwhelm the very authorities that seek to protect children with a litany of bogus flagged content.
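The scale of this concern can be illustrated with a back-of-the-envelope calculation. Every figure below is an assumption chosen purely for illustration – neither the proposal nor any vendor has published such numbers.

```python
# Illustrative base-rate calculation for the false-positive concern above.
# All figures are assumptions picked for illustration, not published estimates.

messages_scanned_per_day = 10_000_000_000  # assumed daily volume of scanned messages
false_positive_rate = 0.001                # assumed 0.1% of benign content wrongly flagged
prevalence = 0.000001                      # assumed 1 in a million messages is actually illegal

false_alarms = messages_scanned_per_day * (1 - prevalence) * false_positive_rate
true_hits = messages_scanned_per_day * prevalence  # assumes every real item is caught

print(f"False alarms per day:    {false_alarms:,.0f}")  # roughly 10,000,000
print(f"True detections per day: {true_hits:,.0f}")     # roughly 10,000
```

Under these assumed figures, false alarms would outnumber genuine detections by roughly a thousand to one – precisely the flood of bogus flags that sceptics fear would swamp law enforcement.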

A price worth paying

The discussion of the file in Brussels is just unfolding, and privacy-conscious governments, such as those of Germany and the Netherlands, are expected to mount strong opposition to the legislation in the Council, while similarly privacy-conscious MEPs are set to water down the mandatory detection requirements. The highly emotional nature of the topic means that work on the file is sure to entail difficult discussions. These factors create a high level of uncertainty for platforms – the file could move in a number of different directions, meaning that predicting which obligations a platform could be subject to, let alone figuring out how to comply with them, is proving to be an extremely difficult task.

Privacy campaigners have genuine concerns over the feasibility of the legislation, and over whether it constitutes an infringement of their rights to privacy and freedom of expression. However, what seems to be getting lost in the debate is the fact that the Commission will only oblige platforms that present a substantial risk of hosting CSAM or grooming to perform mandatory detection of their content. So, many, if not most, platforms will be exempt from mandatory detection orders – particularly those that already take steps to moderate the illegal content of their own volition. Privacy advocates, if they so choose, can opt to use only the platforms not subject to detection orders, meaning that their private messages will not be subject to scanning. Thus, the privacy concerns are more limited than they initially seem.

Our approach to all things in life – including privacy – should exist in relation to the greater good. If granting an AI tool insight into our private messaging could save a child from being re-victimised, isn’t that a price worth paying?

Interested in further updates on the EU’s digital policies? Get in touch with our tech experts in Brussels at NewTechnologies@grayling.com.