Anyone wishing to gauge the extent of the European Union's regulatory drift should read Articles 34 and 35 of the Digital Services Act (DSA). Given their length, it is impossible to quote them in full here, so here is an extract:
DSA Article 34, "Risk assessment":
"1. Providers of very large online platforms and of very large online search engines shall diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems (...) and shall include the following systemic risks (...) (a) the dissemination of illegal content through their services (which includes 'hate speech'); (b) any actual or foreseeable negative effects for the exercise of fundamental rights, in particular the fundamental rights (...) to non-discrimination; (c) any actual or foreseeable negative effects on civic discourse and electoral processes, and public security; (d) any actual or foreseeable negative effects in relation to (...) public health (...) and serious negative consequences to the person's physical and mental well-being (...)."
Article 35, "Mitigation of risks," obliges these platforms to deploy a whole arsenal of preventive and repressive measures, essentially to prevent the sharing of information that displeases the European Commission. In short, the idea is to force these platforms to pay hordes of patrol officers to relentlessly hunt down opinions that do not please the European Lord. The preventive nature of these measures means they can be described as censorship in the strict sense. What's more, it is general censorship, because the terms used by the European legislator - hate, non-discrimination, civic discourse, electoral process, public security, public health, well-being - are so vague that censors with (digital) scissors can cut wherever they please, at the whim of the European Prince.