Public spaces online can be both a forum for the public debate that underpins our liberal democracy and a recruitment and propaganda paradise for terrorist organisations, where they can prey on and radicalise vulnerable members of our society. However, no regulation to delete the illegal content they publish will eradicate terrorism. It can only be one step of a comprehensive approach. We must keep this in mind before passing a regulation that would surrender several of our fundamental freedoms, undermining our liberal democracy, in order to eliminate only one of several risk factors of radicalisation.

I have written a draft opinion for the Committee on Internal Market and Consumer Protection on the regulation to prevent the dissemination of terrorist content online, which I hope the European Parliament’s lead negotiator on the proposal, Dan Dalton, will take into account. My explicit goal is to safeguard our freedoms while narrowing the focus of the regulation. This is all the more important because earlier this mandate we passed a terrorism directive that has only just taken effect. Before passing new sweeping legislation, let us evaluate and learn from the recent past.

The Commission’s proposal builds on the copyright reform’s unfortunate tradition of addressing every platform that is able to host content, from private blogs with a comment section and company websites to social media giants, no matter whether communication is public or private. At the same time, the Commission’s own impact assessment shows that less than 1% of the hosting service providers meeting its definition have ever had terrorist content uploaded to them. Without question, the blanket measures the Commission intends to impose on the more than 99% of unaffected platforms are not only disproportionate, but also a threat to the fundamental freedoms of information and expression.

The proposal foresees several new obligations on platforms, the three most controversial of which I would like to address specifically:

1. Removal orders: The Speedy Gonzales approach to takedown notices


The first obligation is in principle reasonable: platform operators must be reachable by national authorities in order to respond to takedown notices of illegal content. However, the Commission would like such content to be taken down within one hour, no matter what. This is an excessively short period of time. A longer period allows platform operators to run basic plausibility checks: Is the flagged upload actually illegal terrorist content, or simply a news item? Is the sender of the removal order actually a court or administrative authority of an EU member state that is qualified to issue such notices?

Individuals or small organisations cannot be expected to stand ready 24/7 to respond within one hour to a removal order that will never come, because more than 99% of platforms are never targeted with terrorist propaganda. Websites run by private individuals, small and medium-sized businesses and platforms that do not make uploads public, such as cloud services, should be excluded from the proposal altogether. Asking cloud services to take down user uploads that contain terrorist content would set entirely the wrong incentives: generally, we would like to see them use more encryption in order to increase the security of cloud computing. But a cloud service that offers end-to-end encryption has no insight whatsoever into the private material its users upload. Cloud operators cannot be obliged to police what they have no access to. Instead, only platforms that make user uploads available to the general public should be required to respond to removal orders, as quickly as can reasonably be expected considering their size and resources. And even if they never receive a removal order, it is reasonable to require that judicial authorities be able to contact them easily.

What is not reasonable is that the definition of terrorist content in the proposal is so vague that it does not even specify that it only covers content that is actually illegal. It appears to be aimed at the typical propaganda material of terrorist organisations such as Daesh, including manuals for building weapons, but it is so vaguely phrased that it could also apply to perfectly legal content such as academic or journalistic articles. Only two years ago, the EU adopted a comprehensive, uniform definition of which terrorist offences should be illegal throughout the entire EU, and it is completely unclear why the Commission does not stick to this established definition of illegal terrorist acts. Countless civil rights groups have rightly criticised this fact, highlighting the loss to freedom of information and expression should legal depictions of illegal terrorist activities, for instance for journalistic purposes, be censored. I cannot accept this, and will reintroduce the definition that was agreed only two years ago, including the important qualification that only somebody who acts intentionally can be considered to commit a terrorist offence.

2. Referrals: Reversing the rule of law


The Commission would like Europol, in addition to the national authorities, to point out terrorist content to platforms in so-called referrals. So far, so good. However, Europol is not actually expected to determine the illegality of the content it finds suspicious. Rather, Europol is meant to notify platforms of potential terrorist content on their sites that may or may not be illegal. On top of the Commission’s lackluster definition of terrorist content, platforms receiving such notifications are expected to identify terrorist content not in accordance with the law, but in accordance with their own terms of service. These terms rarely match the thresholds of illegality set by law; they are usually much broader. Just think of Facebook’s “community rules”, which regularly lead to controversial deletions of content that Facebook deems inappropriate, such as depictions of female nudity. Requiring public authorities to help platforms enforce these private rules is a bizarre reversal of the rule of law, in which platforms become judge, jury and executioner. In the end, whether they apply the standards of the law or their self-set terms of service will hardly make a difference, because any freedom in deciding whether to delete is an illusion: a referral from Europol or a Member State authority will de facto always be read as an order. As with copyright strikes, removing too much content is fine, but removing too little exposes platforms to liability, since they can only escape liability if they are legitimately unaware of illegal content. Europol already has a similar possibility to help platforms enforce their private rules under the Europol regulation. But extending this possibility to administrative or law-enforcement authorities in the Member States means that in practice, they will be able to order the removal of content that is perfectly legal and merely violates the arbitrary rules set by a private company.

My position is clear: if a public authority finds suspicious terrorist content online, it must be up to that authority, not private platforms, to determine its illegality according to the law. Anything else is arbitrary and undermines legal certainty and the rule of law. Instead of passing on allegations, the authorities must pass on assessments. It may be reasonable to expect the 1% of companies actually affected by uploads of illegal terrorist content to apply duties of care, but requiring all platforms to create their own private rules on unwanted terrorist content and to enforce those rules at the request of public bodies is a dangerous obfuscation of responsibility for law enforcement, one that even the United Nations and the Council of Europe have warned against.

3. “Proactive Measures”: formerly known as upload filters


It did not take long for the European Commission to slide down the slippery slope opened by the introduction of upload filters in the copyright debate. Many critics of the infamous Article 13 of the copyright directive pointed out that once such filters are considered acceptable in one area, there would be immediate demands to extend their use to policing the entire Internet. It turns out they were right: the Commission proposal says that platforms should take “proactive measures” to prevent terrorist content from being uploaded. The use of upload filters is presented as voluntary, except of course it isn’t, as the national authorities may require platforms to install filters where the voluntary measures they have taken are deemed insufficient. To make matters worse, the Commission is not just talking about filters that compare uploads against a database of already known terrorist content (a practice that is problematic enough on its own, because it ignores the context in which material is uploaded and can lead to the deletion of perfectly legal journalistic content); it wants to be able to force platforms to use machine learning to detect even new, previously unknown terrorist material. That is a recipe for disaster: even an AI-based filter with an extremely high accuracy rate close to 99% will delete more legal content than illegal content, because terrorist material is extremely rare compared to the overall number of uploads.
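To illustrate this base-rate problem with concrete numbers, here is a minimal back-of-the-envelope sketch in Python. All figures in it are hypothetical assumptions chosen for illustration (ten million uploads, one terrorist upload per 100,000, a filter that is 99% accurate on both legal and illegal content); they are not taken from the proposal or the impact assessment.

```python
# Hypothetical base-rate calculation: even a 99%-accurate filter removes
# far more legal content than terrorist content when terrorist material
# is rare. All numbers below are illustrative assumptions.

total_uploads = 10_000_000      # assumed uploads on a large platform
terrorist_share = 1 / 100_000   # assumed share of uploads that are terrorist content
accuracy = 0.99                 # assumed accuracy on both legal and illegal uploads

terrorist_uploads = total_uploads * terrorist_share   # 100 uploads
legal_uploads = total_uploads - terrorist_uploads     # 9,999,900 uploads

correctly_removed = terrorist_uploads * accuracy      # ~99 terrorist uploads caught
wrongly_removed = legal_uploads * (1 - accuracy)      # ~99,999 legal uploads deleted

print(f"Terrorist uploads removed: {correctly_removed:,.0f}")
print(f"Legal uploads wrongly removed: {wrongly_removed:,.0f}")
print(f"Legal uploads deleted per terrorist upload caught: "
      f"{wrongly_removed / correctly_removed:,.0f}")
```

Under these assumptions, roughly a thousand perfectly legal uploads would be deleted for every piece of terrorist content the filter actually catches, which is exactly the over-blocking effect described above.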

Previous experience with the voluntary use of such filters by big platforms has shown time and time again that they are extremely error-prone, discriminatory and tend to over-block. Much like the Commission, they will not be able to differentiate between legal and illegal content. Not only do I oppose making upload filters mandatory, doing so is actually illegal under EU law, which clearly states that Member States cannot impose any general monitoring obligation on hosting providers. I will fight to have “proactive measures” deleted from the regulation and replaced with strong safeguards providing better transparency and oversight of the cases where platforms already use such error-prone filters, in order to prevent the wrongful removal of perfectly legal content. First and foremost, those safeguards must include human oversight, access to redress mechanisms for users and platform transparency about the voluntary actions they take. At the same time, I will fight against ineffective and excessive penalties for platforms that fail to remove individual pieces of illegal terrorist content. Such low thresholds for punishment incentivise over-blocking. Instead, penalties should apply where systematic and ongoing violations of a platform’s obligations can be determined, such as when a platform consistently fails to react to removal orders from a court within a reasonable time.

Addressing Legitimate Problems with Transparency and Law Enforcement

Beyond the problem areas I have described, there are several articles that could, if properly amended, bring meaningful progress to the policing of terrorist content online, in particular by improving transparency and the experience of platform users.

Today, when platforms identify (potentially) illegal content and delete it on the basis of their terms of service rather than the law, the upload is usually gone for good, and with it valuable information that law enforcement needs to pursue and try the uploader. I agree with the proposal that platforms should save the available data connected to removed illegal content for a reasonable period of time, both to allow law enforcement follow-up and to allow users to complain about wrongful removals of their content and have it reinstated. In that sense, removing terrorist content on the basis of the law instead of arbitrarily on the basis of a company’s terms of service can actually increase our security as well as protect our fundamental rights. After all, making sure that a suspect who has given signs online that they may be planning an attack is found and stopped in time is far more important than making sure that nobody can read their message online. The goal is to prevent the perpetrators from evading justice. I also agree with the idea of requiring platforms to be transparent about their measures against terrorist content, including informing users whose uploads have been removed or blocked of the reasons for the removal and of how they can complain and have their content reinstated.

Thanks to these positive elements, I have decided not to dismiss the proposed regulation outright, as there are certainly parts that would solve legitimate existing issues. However, the remaining mandate is short, and we should under no circumstances accept the Commission proposal without the substantial changes needed to avoid blatant fundamental rights abuses in the name of fighting terrorism. If no broad consensus on this proposal can be reached, it would be better to leave it to the next European Parliament to deal with it thoroughly.

Timeline & Shadow Rapporteurs

The other political groups have now tabled their amendments to my draft opinion, and most of them agree that the Commission proposal goes too far. I will now have to negotiate with them to find a compromise position, but I have made it clear that mandatory upload filters will never be acceptable to me. The next public discussion of the amendments will take place in the committee meeting on 20 February.

My negotiating partners in the other political groups are Eva Maydell (EPP), Lucy Anderson (S&D), Jasenko Selimović (ALDE), Jiří Maštálka (GUE) and Daniel Dalton (ECR), who is also the file’s rapporteur in the lead LIBE committee. You can find his draft report here and an overview of the file here.
