Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus

Abstract

In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know and believe, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of tools such as AI has brought further complexity to how these societal conversations are conducted online. On the one hand, platforms have come to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some flagged for further human review, others filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems produces a flood of new speech on social media platforms. Content moderation and filtering, one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and understandable activity protecting users from the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges old institutions, rules, procedures, discourses, and the knowledge and power structures built on them. In this paper, we first explore the practical realities of content moderation through an expert interview study with Trust and Safety professionals and a supporting document analysis of data published through the DSA Transparency Database. We then reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps to identify the real systemic challenges that content moderation faces but fails to address.

Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust

Bibtex

@unpublished{nokey,
  title = {Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus},
  author = {Weigl, L. and Bodó, B.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5116478},
  year = {2025},
  date = {2025-01-30},
  note = {Working paper},
  abstract = {In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know and believe, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of tools such as AI has brought further complexity to how these societal conversations are conducted online. On the one hand, platforms have come to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some flagged for further human review, others filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems produces a flood of new speech on social media platforms. Content moderation and filtering, one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and understandable activity protecting users from the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges old institutions, rules, procedures, discourses, and the knowledge and power structures built on them. In this paper, we first explore the practical realities of content moderation through an expert interview study with Trust and Safety professionals and a supporting document analysis of data published through the DSA Transparency Database. We then reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps to identify the real systemic challenges that content moderation faces but fails to address.},
  keywords = {Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust},
}

Copyright Liability and Generative AI: What’s the Way Forward?

Abstract

This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require consideration of whether AI providers could indeed also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public and the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. Litigation will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.

Artificial intelligence, Copyright, liability

Bibtex

@article{nokey,
  title = {Copyright Liability and Generative AI: What’s the Way Forward?},
  author = {Szkalej, K.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5117603},
  year = {2025},
  date = {2025-01-10},
  abstract = {This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require consideration of whether AI providers could indeed also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public and the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. Litigation will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.},
  keywords = {Artificial intelligence, Copyright, liability},
}

Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations

Chicago-Kent Law Review (forthcoming), 2024

Abstract

This essay explores AI-assisted content creation in light of EU and U.S. copyright law. The essay revisits a 2020 study commissioned by the European Commission, which was written before the surge of generative AI. Drawing from traditional legal doctrines, such as the idea/expression dichotomy and its equivalents in Europe, the author argues that iterative prompting may lead to copyright protection of GenAI-assisted output. The paper critiques recent U.S. Copyright Office guidelines that severely restrict registration of works created with the aid of GenAI. Human input, particularly in the conceptual and redaction phases, provides sufficient creative control to justify copyright protection of many AI-assisted works. With many of the expressive features being machine-generated, the scope of copyright protection of such works should, however, remain fairly narrow.

Artificial intelligence, artistic expression, Copyright

Bibtex

@article{nokey,
  title = {Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations},
  author = {Hugenholtz, P.},
  url = {https://www.ivir.nl/nl/publications/copyright-and-the-expression-engine-idea-and-expression-in-ai-assisted-creations/chicagokentlawreview2024/},
  year = {2024},
  date = {2024-11-05},
  journal = {Chicago-Kent Law Review (forthcoming)},
  abstract = {This essay explores AI-assisted content creation in light of EU and U.S. copyright law. The essay revisits a 2020 study commissioned by the European Commission, which was written before the surge of generative AI. Drawing from traditional legal doctrines, such as the idea/expression dichotomy and its equivalents in Europe, the author argues that iterative prompting may lead to copyright protection of GenAI-assisted output. The paper critiques recent U.S. Copyright Office guidelines that severely restrict registration of works created with the aid of GenAI. Human input, particularly in the conceptual and redaction phases, provides sufficient creative control to justify copyright protection of many AI-assisted works. With many of the expressive features being machine-generated, the scope of copyright protection of such works should, however, remain fairly narrow.},
  keywords = {Artificial intelligence, artistic expression, Copyright},
}

De Grondwet en Artificiële Intelligentie

De Grondwet en nieuwe technologie: klaar voor de toekomst? Twaalf pleidooien voor modernisering van de Grondwet, Ministerie van Binnenlandse Zaken en Koninkrijksrelaties, 2024, pp. 69-83

Artificial intelligence, Fundamental rights

Bibtex

@incollection{nokey,
  title = {De Grondwet en Artificiële Intelligentie},
  author = {Dommering, E.},
  booktitle = {De Grondwet en nieuwe technologie: klaar voor de toekomst? Twaalf pleidooien voor modernisering van de Grondwet},
  publisher = {Ministerie van Binnenlandse Zaken en Koninkrijksrelaties},
  pages = {69-83},
  url = {https://open.overheid.nl/documenten/9172451e-8e06-43a7-aed5-12f9a5233c3f/file},
  year = {2024},
  date = {2024-08-01},
  keywords = {Artificial intelligence, Fundamental rights},
}

Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke

Kluwer Copyright Blog, 2024

Artificial intelligence, Germany, text and data mining

Bibtex

@online{nokey,
  title = {Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke},
  author = {Keller, P.},
  url = {https://copyrightblog.kluweriplaw.com/2024/07/22/machine-readable-or-not-notes-on-the-hearing-in-laion-e-v-vs-kneschke/},
  year = {2024},
  date = {2024-07-22},
  journal = {Kluwer Copyright Blog},
  keywords = {Artificial intelligence, Germany, text and data mining},
}

How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp. 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Bibtex

@article{nokey,
  title = {How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  year = {2024},
  date = {2024-01-23},
  journal = {Berkeley Technology Law Journal},
  volume = {38},
  issue = {3},
  pages = {933-1010},
  abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content},
}

EU copyright law round up – fourth trimester of 2023

Trapova, A. & Quintais, J.
Kluwer Copyright Blog, 2024

Artificial intelligence, Copyright, EU

Bibtex

@online{nokey,
  title = {EU copyright law round up – fourth trimester of 2023},
  author = {Trapova, A. and Quintais, J.},
  url = {https://copyrightblog.kluweriplaw.com/2024/01/04/eu-copyright-law-round-up-fourth-trimester-of-2023/},
  year = {2024},
  date = {2024-01-04},
  journal = {Kluwer Copyright Blog},
  keywords = {Artificial intelligence, Copyright, EU},
}

Artificiële Intelligentie: waar is de werkelijkheid gebleven?

Computerrecht, iss. 6, no. 258, pp. 476-483, 2023

Abstract

Much commotion has arisen over the (overly) rapid deployment of AI in society. This article examines what AI (in particular ChatGPT) is. It then shows where the introduction of AI already creates immediate friction in the areas of copyright, privacy, freedom of expression, public decision-making, and competition law. The article subsequently considers whether the EU’s AI Act will be the answer, and concludes that this is only very partially the case. Protection will therefore have to come from norms within the individual subfields. Finally, the article formulates four principles that can form an AI ‘meta-framework’ in each subfield against which an AI product should be assessed.

Artificial intelligence

Bibtex

@article{nokey,
  title = {Artificiële Intelligentie: waar is de werkelijkheid gebleven?},
  author = {Dommering, E.},
  url = {https://www.ivir.nl/nl/publications/artificiele-intelligentie-waar-is-de-werkelijkheid-gebleven/ai-computerrecht-2023/},
  year = {2023},
  date = {2023-12-05},
  journal = {Computerrecht},
  issue = {6},
  number = {258},
  pages = {476-483},
  abstract = {Er is veel ophef ontstaan over de (te) snelle toepassing van AI in de samenleving. Dit artikel onderzoekt wat AI (in het bijzonder ChatGPT) is. Vervolgens laat het zien waar de invoering van AI al direct wringt in de gebieden van het auteursrecht, de privacy, vrijheid van meningsuiting, openbare besluitvorming en mededingingsrecht. Daarna wordt stilgestaan bij de vraag of de AI-verordening van de EU daar het antwoord op zal zijn. De conclusie is dat dat maar zeer ten dele zo is. Bescherming zal dus moeten komen van normen uit de deelgebieden. Het artikel formuleert tot slot vier beginselen die in ieder deelgebied een AI ‘metakader’ kunnen vormen waarmee een AI-product moet worden beoordeeld.},
  keywords = {Artificial intelligence},
}

An Interdisciplinary Toolbox for Researching the AI-Act

Verfassungsblog, 2023

Artificial intelligence

Bibtex

@online{nokey,
  title = {An Interdisciplinary Toolbox for Researching the AI-Act},
  author = {Metikoš, L.},
  url = {https://verfassungsblog.de/an-interdisciplinary-toolbox-for-researching-the-ai-act/},
  doi = {10.17176/20230908-062850-0},
  year = {2023},
  date = {2023-09-08},
  journal = {Verfassungsblog},
  keywords = {Artificial intelligence},
}

Generative AI, Copyright and the AI Act

Kluwer Copyright Blog, 2023

Abstract

Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in the Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised, for instance, via technological tools, but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications for providers of generative AI models: (1) on transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.

Artificial intelligence, Copyright

Bibtex

@online{nokey,
  title = {Generative AI, Copyright and the AI Act},
  author = {Quintais, J.},
  url = {https://copyrightblog.kluweriplaw.com/2023/05/09/generative-ai-copyright-and-the-ai-act/},
  year = {2023},
  date = {2023-05-09},
  journal = {Kluwer Copyright Blog},
  abstract = {Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in the Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised, for instance, via technological tools, but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications for providers of generative AI models: (1) on transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.},
  keywords = {Artificial intelligence, Copyright},
}