Co-creating research at The AI, media, and democracy lab: Reflections on the role of academia in collaborations with media partners

Cools, H., Helberger, N. & Vreese, C.H. de
Journalism, 2025

Abstract

This commentary explores academia’s role in co-creating research with media partners, focusing on the distinct roles and challenges that each stakeholder brings to such partnerships. Starting from the perspective of the AI, Media, and Democracy Lab, and building on the Ethical, Legal, and Societal Aspects (ELSA) approach, we share key learnings from 3 years of collaborations with (media) partners. We conclude that navigating dual roles, expectations, output alignment, and a process of knowledge sharing are important requirements for academics and (media) partners to adequately co-create research and insights. We also argue that these key lessons do not always square with how academic research is organized and funded. We underscore that changes in funding structures and the way academic research is assessed can further facilitate the co-creation of research between academic research and projects in the media sector.

Bibtex

@article{nokey, title = {Co-creating research at The AI, media, and democracy lab: Reflections on the role of academia in collaborations with media partners}, author = {Cools, H. and Helberger, N. and Vreese, C.H. de}, url = {https://journals.sagepub.com/doi/10.1177/14648849251318622}, doi = {https://doi.org/10.1177/14648849251318622}, year = {2025}, date = {2025-02-04}, journal = {Journalism}, abstract = {This commentary explores academia’s role in co-creating research with media partners, focusing on the distinct roles and challenges that each stakeholder brings to such partnerships. Starting from the perspective of the AI, Media, and Democracy Lab, and building on the Ethical, Legal, and Societal Aspects (ELSA) approach, we share key learnings from 3 years of collaborations with (media) partners. We conclude that navigating dual roles, expectations, output alignment, and a process of knowledge sharing are important requirements for academics and (media) partners to adequately co-create research and insights. We also argue that these key lessons do not always square with how academic research is organized and funded. We underscore that changes in funding structures and the way academic research is assessed can further facilitate the co-creation of research between academic research and projects in the media sector.}, }

Towards a European Research Freedom Act: A Reform Agenda for Research Exceptions in the EU Copyright Acquis

Senftleben, M., Szkalej, K., Sganga, C. & Margoni, T.
2025

Abstract

This article explores the impact of EU copyright law on the use of protected knowledge resources in scientific research contexts. Surveying the current copyright/research interface, it becomes apparent that the existing legal framework fails to offer adequate balancing tools for the reconciliation of divergent interests of copyright holders and researchers. The analysis identifies structural deficiencies, such as fragmented and overly restrictive research exceptions, opaque lawful access provisions, outdated non-commercial use requirements, legal uncertainty arising from the three-step test in the EU copyright acquis, obstacles posed by the protection of paywalls and other technological measures, and exposure to contracts that override statutory research freedoms. Empirical data confirm that access barriers, use restrictions and the absence of harmonised rules for transnational research collaborations impede the work of researchers. Against this background, we advance proposals for legislative reform, in particular the introduction of a mandatory, open-ended research exemption that offers reliable breathing space for scientific research across EU Member States, the clarification of lawful access criteria, a more flexible approach to public-private partnerships, and additional rules that support modern research methods, such as text and data mining.

Copyright, open science, research exceptions, right to research, technological protection measures, text and data mining, three-step test

Bibtex

@online{nokey, title = {Towards a European Research Freedom Act: A Reform Agenda for Research Exceptions in the EU Copyright Acquis}, author = {Senftleben, M. and Szkalej, K. and Sganga, C. and Margoni, T.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5130069}, year = {2025}, date = {2025-02-11}, abstract = {This article explores the impact of EU copyright law on the use of protected knowledge resources in scientific research contexts. Surveying the current copyright/research interface, it becomes apparent that the existing legal framework fails to offer adequate balancing tools for the reconciliation of divergent interests of copyright holders and researchers. The analysis identifies structural deficiencies, such as fragmented and overly restrictive research exceptions, opaque lawful access provisions, outdated non-commercial use requirements, legal uncertainty arising from the three-step test in the EU copyright acquis, obstacles posed by the protection of paywalls and other technological measures, and exposure to contracts that override statutory research freedoms. Empirical data confirm that access barriers, use restrictions and the absence of harmonised rules for transnational research collaborations impede the work of researchers. Against this background, we advance proposals for legislative reform, in particular the introduction of a mandatory, open-ended research exemption that offers reliable breathing space for scientific research across EU Member States, the clarification of lawful access criteria, a more flexible approach to public-private partnerships, and additional rules that support modern research methods, such as text and data mining.}, keywords = {Copyright, open science, research exceptions, right to research, technological protection measures, text and data mining, three-step test}, }

European Copyright Society Opinion on Copyright and Generative AI

Dusollier, S., Kretschmer, M., Margoni, T., Mezei, P., Quintais, J. & Rognstad, O.A.
Kluwer Copyright Blog, 2025

Copyright, Generative AI

Bibtex

@online{nokey, title = {European Copyright Society Opinion on Copyright and Generative AI}, author = {Dusollier, S. and Kretschmer, M. and Margoni, T. and Mezei, P. and Quintais, J. and Rognstad, O.A.}, url = {https://copyrightblog.kluweriplaw.com/2025/02/07/european-copyright-society-opinion-on-copyright-and-generative-ai/}, year = {2025}, date = {2025-02-07}, journal = {Kluwer Copyright Blog}, keywords = {Copyright, Generative AI}, }

Judicial Automation: Balancing Rights Protection and Capacity-Building

Qiao, C. & Metikoš, L.
2025

Abstract

This entry explores the global rise of judicial automation and its implications through two dominant frameworks: rights protection and capacity-building. The rights protection framework aims to safeguard individual rights against opaque judicial automation by advocating for the use of explainable and contestable AI tools in courts. In contrast, the capacity-building framework prioritises judicial efficiency and consistency by automating court proceedings. Although these frameworks offer contrasting approaches, they are not mutually exclusive. A balance needs to be struck, where judicial automation enhances judicial capacities while maintaining transparency and accountability.

individual rights, judicial automation, judicial capacity, right to explanation

Bibtex

@online{nokey, title = {Judicial Automation: Balancing Rights Protection and Capacity-Building}, author = {Qiao, C. and Metikoš, L.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5125645}, year = {2025}, date = {2025-02-11}, abstract = {This entry explores the global rise of judicial automation and its implications through two dominant frameworks: rights protection and capacity-building. The rights protection framework aims to safeguard individual rights against opaque judicial automation by advocating for the use of explainable and contestable AI tools in courts. In contrast, the capacity-building framework prioritises judicial efficiency and consistency by automating court proceedings. Although these frameworks offer contrasting approaches, they are not mutually exclusive. A balance needs to be struck, where judicial automation enhances judicial capacities while maintaining transparency and accountability.}, keywords = {individual rights, judicial automation, judicial capacity, right to explanation}, }

Copyright and Generative AI: Opinion of the European Copyright Society

Dusollier, S., Kretschmer, M., Margoni, T., Mezei, P., Quintais, J. & Rognstad, O.A.
2025

Copyright

Bibtex

@report{nokey, title = {Copyright and Generative AI: Opinion of the European Copyright Society}, author = {Dusollier, S. and Kretschmer, M. and Margoni, T. and Mezei, P. and Quintais, J. and Rognstad, O.A.}, url = {https://europeancopyrightsociety.org/wp-content/uploads/2025/02/ecs_opinion_genai_january2025.pdf}, year = {2025}, date = {2025-02-07}, keywords = {Copyright}, }

Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus

Weigl, L. & Bodó, B.
2025

Abstract

In recent years social media services emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they are playing a central role in shaping what their users may know, may believe in, what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of various tools, such as AI and the likes brought further complexities to how these societal conversations are conducted online. On the one hand, platforms started to heavily rely on automated tools and algorithmic agents to identify various forms of speech, some of them flagged for further human review, others being filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems also produce a flood of new speech on social media platforms. Content moderation and filtering, as one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible, and understandable activity which could protect users from all the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us, render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures. In this paper, we first explore the practical realities of content moderation based on an expert interview study with Trust and Safety professionals, and a supporting document analysis, based on data published through the DSA Transparency Database. We reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps to identify the real systemic challenges content moderation faces, but fails to address.
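
As a rough, non-authoritative illustration of the document analysis mentioned in the abstract (which draws on data published through the DSA Transparency Database), the Python sketch below tallies moderation decisions by platform and by whether a decision was reported as automated. The file name and column names (platform_name, automated_decision) are assumptions made for the sake of the example, not the database's official schema or the authors' actual pipeline.

import pandas as pd

# Hypothetical CSV export of statements of reasons from the DSA Transparency
# Database; file name and column names are assumed for illustration only.
df = pd.read_csv("statements_of_reasons_sample.csv")

# Count decisions per platform, split by whether the platform reported the
# decision as fully automated.
summary = (
    df.groupby(["platform_name", "automated_decision"])
      .size()
      .unstack(fill_value=0)
)
print(summary)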

Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust

Bibtex

@unpublished{nokey, title = {Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus}, author = {Weigl, L. and Bodó, B.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5116478}, year = {2025}, date = {2025-01-30}, abstract = {In recent years social media services emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they are playing a central role in shaping what their users may know, may believe in, what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of various tools, such as AI and the likes brought further complexities to how these societal conversations are conducted online. On the one hand, platforms started to heavily rely on automated tools and algorithmic agents to identify various forms of speech, some of them flagged for further human review, others being filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems also produce a flood of new speech on social media platforms. Content moderation and filtering, as one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible, and understandable activity which could protect users from all the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us, render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures. In this paper, we first explore the practical realities of content moderation based on an expert interview study with Trust and Safety professionals, and a supporting document analysis, based on data published through the DSA Transparency Database. We reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps to identify the real systemic challenges content moderation faces, but fails to address.}, keywords = {Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust}, }

Copyright Liability and Generative AI: What’s the Way Forward?

Szkalej, K.
2025

Abstract

This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require a consideration of the question whether AI providers indeed could also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public as well as the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. It will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.

Artificial intelligence, Copyright, liability

Bibtex

@article{nokey, title = {Copyright Liability and Generative AI: What’s the Way Forward?}, author = {Szkalej, K.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5117603}, year = {2025}, date = {2025-01-10}, abstract = {This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require a consideration of the question whether AI providers indeed could also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public as well as the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. It will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.}, keywords = {Artificial intelligence, Copyright, liability}, }

Generative AI, Copyright and the AI Act

Quintais, J.
Computer Law & Security Review, vol. 56, num: 106107, 2025

Abstract

This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD). The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law—particularly TDM opt-outs—and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.

AI Act, Content moderation, Copyright, DSA, Generative AI, text and data mining, Transparency

Bibtex

@article{nokey, title = {Generative AI, Copyright and the AI Act}, author = {Quintais, J.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4912701}, doi = {https://doi.org/10.1016/j.clsr.2025.106107}, year = {2025}, date = {2025-01-30}, journal = {Computer Law \& Security Review}, volume = {56}, number = {106107}, abstract = {This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD). The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law—particularly TDM opt-outs—and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.}, keywords = {AI Act, Content moderation, Copyright, DSA, Generative AI, text and data mining, Transparency}, }

Shifting Battlegrounds: Corporate Political Activity in the EU General Data Protection Regulation

Ocelík, V., Kolk, A. & Irion, K.
Business & Society, 2025

Abstract

Scholarship on corporate political activity (CPA) has remained largely silent on the substance of information strategies that firms utilize to influence policymakers. To address this deficiency, our study is situated in the European Union (EU), where political scientists have noted information strategies to be central to achieving lobbying success; the EU also provides a context of global norm-setting activities, especially with its General Data Protection Regulation (GDPR). Aided by recent advances in the field of unsupervised machine learning, we performed a structural topic model analysis of the entire set of lobby documents submitted during two GDPR consultations, which were obtained via a so-called Freedom of Information request. Our analysis of the substance of information strategies reveals that the two policy phases constitute “shifting battlegrounds,” where firms first seek to influence what is included and excluded in the legislation, after which they engage the more specific interests of other stakeholders. Our main theoretical contribution concerns the identification of two distinct information strategies. Furthermore, we point at the need for more attention for institutional procedures and for the role of other stakeholders’ lobbying activities in CPA research.
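
The core method here is a structural topic model fitted to the full set of lobby submissions from the two GDPR consultations. As a loose, hedged sketch of this kind of topic analysis (not the authors' actual pipeline, which relies on structural topic models, typically estimated with the R package stm), the Python snippet below fits a plain LDA model with gensim to a hypothetical folder of plain-text submissions; the folder path, topic count, and preprocessing thresholds are illustrative assumptions.

import glob
from gensim import corpora, models
from gensim.utils import simple_preprocess

# Hypothetical folder with one plain-text file per consultation submission.
docs = [open(path, encoding="utf-8").read() for path in glob.glob("gdpr_submissions/*.txt")]

# Tokenize, build a vocabulary, and drop very rare and very common terms.
tokens = [simple_preprocess(doc, deacc=True) for doc in docs]
dictionary = corpora.Dictionary(tokens)
dictionary.filter_extremes(no_below=5, no_above=0.5)
corpus = [dictionary.doc2bow(t) for t in tokens]

# Fit an LDA topic model; 20 topics is an arbitrary illustrative choice.
lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20,
                      passes=10, random_state=42)

# Inspect the top words per topic as a first look at recurring lobbying themes.
for topic_id, top_words in lda.print_topics(num_topics=20, num_words=8):
    print(topic_id, top_words)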

Bibtex

@article{nokey, title = {Shifting Battlegrounds: Corporate Political Activity in the EU General Data Protection Regulation}, author = {Ocelík, V. and Kolk, A. and Irion, K.}, doi = {https://doi.org/10.1177/00076503241306958}, year = {2025}, date = {2025-01-20}, journal = {Business \& Society}, abstract = {Scholarship on corporate political activity (CPA) has remained largely silent on the substance of information strategies that firms utilize to influence policymakers. To address this deficiency, our study is situated in the European Union (EU), where political scientists have noted information strategies to be central to achieving lobbying success; the EU also provides a context of global norm-setting activities, especially with its General Data Protection Regulation (GDPR). Aided by recent advances in the field of unsupervised machine learning, we performed a structural topic model analysis of the entire set of lobby documents submitted during two GDPR consultations, which were obtained via a so-called Freedom of Information request. Our analysis of the substance of information strategies reveals that the two policy phases constitute “shifting battlegrounds,” where firms first seek to influence what is included and excluded in the legislation, after which they engage the more specific interests of other stakeholders. Our main theoretical contribution concerns the identification of two distinct information strategies. Furthermore, we point at the need for more attention for institutional procedures and for the role of other stakeholders’ lobbying activities in CPA research.}, }

Cultural Heritage Branding – Societal Costs and Benefits

Senftleben, M.
Research Handbook on the Law and Economics of Trademark Law, Edward Elgar Publishing, 2023, pp: 178-193, ISBN: 9781786430465

Abstract

The adoption of cultural heritage signs as trademarks entails several risks that must not be underestimated. Instead of enriching language and rhetoric devices, trademark protection restricts the freedom of future generations of authors to use affected cultural signs for new literary and artistic productions. Trademark protection means that one player in the communication process has strong incentives to invest in the development of her own messages and the suppression of the messages of others. Hence, the discourse surrounding affected cultural signs is no longer as open and free as it was before. Invoking broad protection against confusion and dilution, the trademark owner can take steps to censor artistic expressions that interfere with her branding strategy. The grant of trademark rights will also lead to a commercial redefinition and devaluation of affected cultural heritage material. Once a public domain sign is no longer exclusively linked with its cultural background in the mind of the audience, an artist cannot avoid the evocation of both cultural and commercial connotations. The addition of undesirable marketing messages tarnishes the cultural dimension of the affected sign. It will erode the sign’s artistic meaning and discourse potential over time and minimize the benefits – in the sense of impulses for societal renewal – which society could have derived from critical reflections on the cultural symbol and related societal conditions.

Bibtex

@incollection{nokey, title = {Cultural Heritage Branding – Societal Costs and Benefits}, author = {Senftleben, M.}, booktitle = {Research Handbook on the Law and Economics of Trademark Law}, publisher = {Edward Elgar Publishing}, pages = {178--193}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4399942}, doi = {https://doi.org/10.4337/9781786430472}, year = {2023}, date = {2023-02-05}, abstract = {The adoption of cultural heritage signs as trademarks entails several risks that must not be underestimated. Instead of enriching language and rhetoric devices, trademark protection restricts the freedom of future generations of authors to use affected cultural signs for new literary and artistic productions. Trademark protection means that one player in the communication process has strong incentives to invest in the development of her own messages and the suppression of the messages of others. Hence, the discourse surrounding affected cultural signs is no longer as open and free as it was before. Invoking broad protection against confusion and dilution, the trademark owner can take steps to censor artistic expressions that interfere with her branding strategy. The grant of trademark rights will also lead to a commercial redefinition and devaluation of affected cultural heritage material. Once a public domain sign is no longer exclusively linked with its cultural background in the mind of the audience, an artist cannot avoid the evocation of both cultural and commercial connotations. The addition of undesirable marketing messages tarnishes the cultural dimension of the affected sign. It will erode the sign’s artistic meaning and discourse potential over time and minimize the benefits – in the sense of impulses for societal renewal – which society could have derived from critical reflections on the cultural symbol and related societal conditions.}, }