Project awarded NWO-Veni grant
Due to the unprecedented spread of illegal and harmful content online, EU law is changing. New rules enhance hosting platforms’ obligations to police content and censor speech, for which they increasingly rely on algorithms. João’s project examines the responsibility of platforms in this context from a fundamental rights perspective.
Hosting platforms—like Facebook, Twitter or YouTube—are the gateways to information in the digital age. They regulate access to content through a range of ‘moderation’ activities, including recommendation, removal, and filtering. These activities are governed by a mix of public and private rules stemming from the law and platforms’ internal norms, such as Terms of Use (TOS) and Community Guidelines.
In light of the unprecedented spread of illegal and harmful content online, the EU and Member States have in recent years enacted legislation enhancing the responsibility of platforms and pushing them towards content moderation. These rules are problematic because they enlist private platforms to police content and censor speech without providing adequate fundamental rights safeguards. The problem is amplified because, to cope with the massive amounts of content hosted, moderation increasingly relies on Artificial Intelligence (AI) systems.
In parallel, the EU is ramping up efforts to regulate the development and use of AI systems. However, at EU level, there is little policy or academic discussion on how the regulation of AI affects content moderation and vice versa. This project focuses on this underexplored intersection, asking the question: how should we understand, theorize, and evaluate the responsibility of hosting platforms in EU law for algorithmic content moderation, while safeguarding freedom of expression and due process?
João’s project answers this question by combining doctrinal legal research, empirical methods, and normative evaluation. First, the research maps and assesses EU law and policies on platforms’ responsibility for algorithmic moderation of illegal content, including three sectoral case studies: terrorist content, hate speech, and copyright infringement. Second, the empirical research consists of qualitative content analysis of platforms’ TOS and Community Guidelines. Finally, the project evaluates the responsibility implications of algorithmic moderation from a fundamental rights perspective and offers recommendations for adequate safeguards.