Algorithmic propagation: How the data-platform regulatory framework may increase bias in content moderation

Abstract

This chapter offers a reflection on the topic of content moderation and bias mitigation measures in copyright law. It explores the possible links between conditional data access regimes and content moderation performed through data-intensive technologies such as fingerprinting and machine learning algorithms. In recent years, various pressing questions surrounding automated decision-making and its legal implications have materialised. In European Union (EU) law, answers were provided through different regulatory interventions, often based on specific legal categories, rights, and foundations, contributing to the increasing complexity of interacting frameworks. Within this broader background, the chapter discusses whether current EU copyright rules may have the effect of favouring what we call the propagation of bias from input data to the output of algorithmic tools employed for content moderation. The chapter shows that reduced availability and transparency of training data often lead to negative effects on access, verification and replication of results. These are ideal conditions for the development of bias and other types of systematic errors to the detriment of users' rights. The chapter discusses a number of options that could be employed to mitigate this undesirable effect and contextually preserve the many fundamental rights at stake.

Bibtex

@incollection{nokey,
  title = {Algorithmic propagation: How the data-platform regulatory framework may increase bias in content moderation},
  author = {Margoni, T. and Quintais, J. and Schwemer, S.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913758},
  year = {},
  abstract = {This chapter offers a reflection on the topic of content moderation and bias mitigation measures in copyright law. It explores the possible links between conditional data access regimes and content moderation performed through data-intensive technologies such as fingerprinting and machine learning algorithms. In recent years, various pressing questions surrounding automated decision-making and its legal implications have materialised. In European Union (EU) law, answers were provided through different regulatory interventions, often based on specific legal categories, rights, and foundations, contributing to the increasing complexity of interacting frameworks. Within this broader background, the chapter discusses whether current EU copyright rules may have the effect of favouring what we call the propagation of bias from input data to the output of algorithmic tools employed for content moderation. The chapter shows that reduced availability and transparency of training data often lead to negative effects on access, verification and replication of results. These are ideal conditions for the development of bias and other types of systematic errors to the detriment of users' rights. The chapter discusses a number of options that could be employed to mitigate this undesirable effect and contextually preserve the many fundamental rights at stake.},
}