Fundamental rights assessment of the framework for detection orders under the CSAM proposal

van Daalen, O.
2023

CSAM, Data protection, Freedom of expression, Privacy

Bibtex

@report{nokey,
  title = {Fundamental rights assessment of the framework for detection orders under the CSAM proposal},
  author = {van Daalen, O.},
  url = {https://www.ivir.nl/nl/publications/fundamental-rights-assessment-of-the-framework-for-detection-orders-under-the-csam-proposal/csamreport/},
  year = {2023},
  date = {2023-04-22},
  keywords = {CSAM, Data protection, Freedom of expression, Privacy},
}

Generative AI, Copyright and the AI Act

Quintais, J.
Kluwer Copyright Blog, 2023

Abstract

Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised for instance via technological tools but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications on providers of generative AI models: on (1) transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.

Artificial intelligence, Copyright

Bibtex

@online{nokey,
  title = {Generative AI, Copyright and the AI Act},
  author = {Quintais, J.},
  url = {https://copyrightblog.kluweriplaw.com/2023/05/09/generative-ai-copyright-and-the-ai-act/},
  year = {2023},
  date = {2023-05-09},
  journal = {Kluwer Copyright Blog},
  abstract = {Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised for instance via technological tools but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications on providers of generative AI models: on (1) transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.},
  keywords = {Artificial intelligence, Copyright},
}

Mediating the Tension between Data Sharing and Privacy: The Case of DMA and GDPR

Weigl, L., Barbereau, T., Sedlmeir, J. & Zavolokina, L.
ECIS 2023 Research-in-Progress Papers, 2023

Abstract

The Digital Markets Act (DMA) constitutes a crucial part of the European legislative framework addressing the dominance of ‘Big Tech’. It intends to foster fairness and competition in Europe’s digital platform economy by imposing obligations on ‘gatekeepers’ to share end-user-related information with business users. Yet, this may involve the processing of personal data subject to the General Data Protection Regulation (GDPR). The obligation to provide access to personal data in a GDPR-compliant manner poses a regulatory and technical challenge and can serve as a justification for gatekeepers to refrain from data sharing. In this research-in-progress paper, we analyze key tensions between the DMA and the GDPR through the paradox perspective. We argue through a task-technology fit approach how privacy-enhancing technologies – particularly anonymization techniques – and portability could help mediate tensions between data sharing and privacy. Our contribution provides theoretical and practical insights to facilitate legal compliance.

Bibtex

@inproceedings{nokey,
  title = {Mediating the Tension between Data Sharing and Privacy: The Case of DMA and GDPR},
  author = {Weigl, L. and Barbereau, T. and Sedlmeir, J. and Zavolokina, L.},
  url = {https://aisel.aisnet.org/ecis2023_rip/49/},
  year = {2023},
  date = {2023-05-02},
  journal = {ECIS 2023 Research-in-Progress Papers},
  abstract = {The Digital Markets Act (DMA) constitutes a crucial part of the European legislative framework addressing the dominance of ‘Big Tech’. It intends to foster fairness and competition in Europe’s digital platform economy by imposing obligations on ‘gatekeepers’ to share end-user-related information with business users. Yet, this may involve the processing of personal data subject to the General Data Protection Regulation (GDPR). The obligation to provide access to personal data in a GDPR-compliant manner poses a regulatory and technical challenge and can serve as a justification for gatekeepers to refrain from data sharing. In this research-in-progress paper, we analyze key tensions between the DMA and the GDPR through the paradox perspective. We argue through a task-technology fit approach how privacy-enhancing technologies – particularly anonymization techniques – and portability could help mediate tensions between data sharing and privacy. Our contribution provides theoretical and practical insights to facilitate legal compliance.},
}

A Primer and FAQ on Copyright Law and Generative AI for News Media

Quintais, J. & Diakopoulos, N.
2023

Artificial intelligence, Copyright, Media law, news

Bibtex

@online{nokey,
  title = {A Primer and FAQ on Copyright Law and Generative AI for News Media},
  author = {Quintais, J. and Diakopoulos, N.},
  url = {https://generative-ai-newsroom.com/a-primer-and-faq-on-copyright-law-and-generative-ai-for-news-media-f1349f514883},
  year = {2023},
  date = {2023-04-26},
  keywords = {Artificial intelligence, Copyright, Media law, news},
}

Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act

Senftleben, M., Quintais, J. & Meiring, A.

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content

Bibtex

@online{nokey,
  title = {Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content},
}

Territoriality Roundtables (combined report)

van Eechoud, M.
2022

Abstract

This report summarizes the outcome of two roundtables held with expert legal scholars on the need for a unified European copyright. Issues discussed include various models for a unitary copyright title and fundamental rights aspects. The Roundtables are part of a strand of the Recreating Europe project that queries how the territorial nature of copyright and related rights can hinder the realisation of the digital single market. While for trademarks and designs, for example, the EU has legislated community-wide rights that extend across the borders of individual Member States, copyright and related rights remain national at heart. Authors, performers, phonogram producers, database producers and other related rights owners all acquire bundles of national rights in their respective (intellectual) productions. Despite far-reaching harmonization of the subject-matter, scope and duration of national rights, these rights remain restricted in their existence and exploitation to the geographic boundaries of the individual Member States under whose laws they arise, i.e., they are territorial.

Copyright, Digital Single Market, EU law, Intellectual property, unitary title

Bibtex

@report{nokey,
  title = {Territoriality Roundtables (combined report)},
  author = {van Eechoud, M.},
  url = {https://www.ivir.nl/nl/publications/territoriality-roundtables-combined-report/territoriality-roundtables-reportfinal870626_d4_4/},
  doi = {https://doi.org/10.5281/zenodo.7564660},
  year = {2022},
  date = {2022-12-14},
  abstract = {This report summarizes the outcome of two roundtables held with expert legal scholars on the need for a unified European copyright. Issues discussed include various models for a unitary copyright title and fundamental rights aspects. The Roundtables are part of a strand of the Recreating Europe project that queries how the territorial nature of copyright and related rights can hinder the realisation of the digital single market. While for trademarks and designs, for example, the EU has legislated community-wide rights that extend across the borders of individual Member States, copyright and related rights remain national at heart. Authors, performers, phonogram producers, database producers and other related rights owners all acquire bundles of national rights in their respective (intellectual) productions. Despite far-reaching harmonization of the subject-matter, scope and duration of national rights, these rights remain restricted in their existence and exploitation to the geographic boundaries of the individual Member States under whose laws they arise, i.e., they are territorial.},
  keywords = {Copyright, Digital Single Market, EU law, Intellectual property, unitary title},
}

Shielding citizens? Understanding the impact of political advertisement transparency information

Dobber, T., Kruikemeier, S., Helberger, N. & Goodman, E.
New Media & Society, 2023

Abstract

Online targeted advertising leverages an information asymmetry between the advertiser and the recipient. Policymakers in the European Union and the United States aim to decrease this asymmetry by requiring transparency information alongside political advertisements, in the hope of activating citizens’ persuasion knowledge. However, the proposed regulations all present different directions with regard to the required content of transparency information. Consequently, not all proposed interventions will be (equally) effective. Moreover, there is a chance that transparency information has additional consequences, such as increasing privacy concerns or decreasing advertising effectiveness. Using an online experiment (N = 1331), this study addresses these challenges and finds that two regulatory interventions (DSA and HAA) increase persuasion knowledge, while the chance of raising privacy concerns or lowering advertisement effectiveness is present but slim. Results suggest transparency information interventions have some promise, but at the same time underline the limitations of user-facing transparency interventions.

information disclosures, online advertising, persuasion knowledge, political attitudes, Privacy, Transparency

Bibtex

@article{nokey,
  title = {Shielding citizens? Understanding the impact of political advertisement transparency information},
  author = {Dobber, T. and Kruikemeier, S. and Helberger, N. and Goodman, E.},
  doi = {https://doi.org/10.1177/14614448231157640},
  year = {2023},
  date = {2023-04-21},
  journal = {New Media & Society},
  abstract = {Online targeted advertising leverages an information asymmetry between the advertiser and the recipient. Policymakers in the European Union and the United States aim to decrease this asymmetry by requiring transparency information alongside political advertisements, in the hope of activating citizens’ persuasion knowledge. However, the proposed regulations all present different directions with regard to the required content of transparency information. Consequently, not all proposed interventions will be (equally) effective. Moreover, there is a chance that transparency information has additional consequences, such as increasing privacy concerns or decreasing advertising effectiveness. Using an online experiment (N = 1331), this study addresses these challenges and finds that two regulatory interventions (DSA and HAA) increase persuasion knowledge, while the chance of raising privacy concerns or lowering advertisement effectiveness is present but slim. Results suggest transparency information interventions have some promise, but at the same time underline the limitations of user-facing transparency interventions.},
  keywords = {information disclosures, online advertising, persuasion knowledge, political attitudes, Privacy, Transparency},
}

Export control of cybersurveillance items in the new dual-use regulation: The challenges of applying human rights logic to export control

van Daalen, O., van Hoboken, J. & Rucz, M.
Computer Law & Security Review, vol. 48, 2023

Abstract

In 2021, the Recast Dual-Use Regulation entered into force. The regulation includes a heavily debated new provision on the export control of so-called cybersurveillance items. This provision departs from the traditional logic of export control rules in multiple ways. Most importantly, it positions human rights considerations as an important factor in the export control of a flexible range of technologies. This article explores the operation, implications and challenges of this new human rights-orientated approach to export control of digital surveillance technologies. Taking the definition of cybersurveillance items as a starting point of the analysis, the article draws on surveillance-related case law of the European Court of Human Rights and the Court of Justice of the European Union, to define the potential scope of application of the open-ended cybersurveillance concept of the Regulation. By exploring how this concept maps to technologies often connected with human rights infringements, such as facial recognition, location tracking and open-source intelligence, the article highlights the challenges of applying this new approach and underscores the need for its further development in practice.

cybersurveillance, Human rights, Regulation

Bibtex

@article{nokey,
  title = {Export control of cybersurveillance items in the new dual-use regulation: The challenges of applying human rights logic to export control},
  author = {van Daalen, O. and van Hoboken, J. and Rucz, M.},
  doi = {https://doi.org/10.1016/j.clsr.2022.105789},
  year = {2023},
  date = {2023-04-21},
  journal = {Computer Law & Security Review},
  volume = {48},
  abstract = {In 2021, the Recast Dual-Use Regulation entered into force. The regulation includes a heavily debated new provision on the export control of so-called cybersurveillance items. This provision departs from the traditional logic of export control rules in multiple ways. Most importantly, it positions human rights considerations as an important factor in the export control of a flexible range of technologies. This article explores the operation, implications and challenges of this new human rights-orientated approach to export control of digital surveillance technologies. Taking the definition of cybersurveillance items as a starting point of the analysis, the article draws on surveillance-related case law of the European Court of Human Rights and the Court of Justice of the European Union, to define the potential scope of application of the open-ended cybersurveillance concept of the Regulation. By exploring how this concept maps to technologies often connected with human rights infringements, such as facial recognition, location tracking and open-source intelligence, the article highlights the challenges of applying this new approach and underscores the need for its further development in practice.},
  keywords = {cybersurveillance, Human rights, Regulation},
}

Seeing what others are seeing: Studies in the regulation of transparency for social media recommender systems

Leerssen, P.
2023

Abstract

This dissertation asks how the law can shed light on social media recommender systems: the algorithmic tools which platforms use to rank and curate online content. Recommender systems fulfil an important gatekeeping function in social media governance, but their actions are poorly understood. Legal reforms are now underway in EU law to impose transparency rules on social media recommenders, and the goal of this dissertation is to interrogate the accountability relations implied by this regulatory project. What kinds of information is the law demanding about social media recommender systems? Who is included in these new models of accountability, and who is excluded? This dissertation critiques a dominant paradigm in recent law and policy focused on algorithmic explanations. Building on insights from critical transparency studies and algorithm studies, it argues that disclosure regulation should move from algorithmic transparency toward platform observability: approaching recommenders not as discrete algorithmic artifacts but as complex sociotechnical systems shaped in important ways by their users and operators. Before any attempt to ‘open the black box’ of algorithmic machine learning, therefore, regulating for observability invites us to ask how recommenders find uptake in practice; to demand basic data on recommender inputs, outputs and interventions; to ask what is being recommended, sooner than why. Several avenues for observability regulation are explored, including platform ad archives; notices for visibility restrictions (or ‘shadow bans’); and researcher APIs. Through solutions such as these, which render visible recommender outcomes, this dissertation outlines a vision for a more democratic media governance—one which supports informed and inclusive deliberation about, across and within social media’s personalised publics.

Bibtex

@phdthesis{nokey,
  title = {Seeing what others are seeing: Studies in the regulation of transparency for social media recommender systems},
  author = {Leerssen, P.},
  url = {https://dare.uva.nl/search?identifier=18c6e9a0-1530-4e70-b9a6-35fb37873d13},
  year = {2023},
  date = {2023-04-21},
  abstract = {This dissertation asks how the law can shed light on social media recommender systems: the algorithmic tools which platforms use to rank and curate online content. Recommender systems fulfil an important gatekeeping function in social media governance, but their actions are poorly understood. Legal reforms are now underway in EU law to impose transparency rules on social media recommenders, and the goal of this dissertation is to interrogate the accountability relations implied by this regulatory project. What kinds of information is the law demanding about social media recommender systems? Who is included in these new models of accountability, and who is excluded? This dissertation critiques a dominant paradigm in recent law and policy focused on algorithmic explanations. Building on insights from critical transparency studies and algorithm studies, it argues that disclosure regulation should move from algorithmic transparency toward platform observability: approaching recommenders not as discrete algorithmic artifacts but as complex sociotechnical systems shaped in important ways by their users and operators. Before any attempt to ‘open the black box’ of algorithmic machine learning, therefore, regulating for observability invites us to ask how recommenders find uptake in practice; to demand basic data on recommender inputs, outputs and interventions; to ask what is being recommended, sooner than why. Several avenues for observability regulation are explored, including platform ad archives; notices for visibility restrictions (or ‘shadow bans’); and researcher APIs. Through solutions such as these, which render visible recommender outcomes, this dissertation outlines a vision for a more democratic media governance—one which supports informed and inclusive deliberation about, across and within social media’s personalised publics.},
}

Money after money: Disassembling value/information infrastructures

Ferrari, V.
2023

Abstract

This manuscript is a journey through coexisting, emerging, or speculated-about types of digital value transfer infrastructures. Using digital value transfer infrastructures as a central case study, this thesis is concerned with unpacking the negotiation processes that shape the governance, design and political purposes of digital infrastructures that are closely linked to the public interest and state sovereignty. In particular, the papers that are assembled in this manuscript identify and inspect three main socio-technical developments occurring in the domain of value transfer technologies: a) the privatization and platformization of digital payment infrastructures; b) the spread of blockchain-based digital value transfer infrastructures; c) the construction of digital value transfer infrastructures as public utilities, on the part of public institutions or organizations. Concerned with the relationship between law, discourse and technological development, the thesis explores four transversal issues that highlight differences and peculiarities of these three scenarios: i) privacy; ii) the synergy and mutual influence of legal change and technological development in the construction of digital infrastructures; iii) the role of socio-technical imaginaries in policy-making concerned with digital infrastructures; iv) the geography and scale of digital infrastructures. The analyses lead to the argument that, in the co-development of legal systems and digital infrastructures that are core to public life, conflicts are productive. Negotiations, ruptures and exceptions are constitutive of the unending process of mutual reinforcement, and mutual containment, in which a plurality of agencies – expressed through legal institutions, symbolic systems, as well as information and media structures – are entangled.

Bibtex

@phdthesis{nokey,
  title = {Money after money: Disassembling value/information infrastructures},
  author = {Ferrari, V.},
  url = {https://dare.uva.nl/search?identifier=30904422-2233-4400-bc5f-e7971b33f758},
  year = {2023},
  date = {2023-04-21},
  abstract = {This manuscript is a journey through coexisting, emerging, or speculated-about types of digital value transfer infrastructures. Using digital value transfer infrastructures as a central case study, this thesis is concerned with unpacking the negotiation processes that shape the governance, design and political purposes of digital infrastructures that are closely linked to the public interest and state sovereignty. In particular, the papers that are assembled in this manuscript identify and inspect three main socio-technical developments occurring in the domain of value transfer technologies: a) the privatization and platformization of digital payment infrastructures; b) the spread of blockchain-based digital value transfer infrastructures; c) the construction of digital value transfer infrastructures as public utilities, on the part of public institutions or organizations. Concerned with the relationship between law, discourse and technological development, the thesis explores four transversal issues that highlight differences and peculiarities of these three scenarios: i) privacy; ii) the synergy and mutual influence of legal change and technological development in the construction of digital infrastructures; iii) the role of socio-technical imaginaries in policy-making concerned with digital infrastructures; iv) the geography and scale of digital infrastructures. The analyses lead to the argument that, in the co-development of legal systems and digital infrastructures that are core to public life, conflicts are productive. Negotiations, ruptures and exceptions are constitutive of the unending process of mutual reinforcement, and mutual containment, in which a plurality of agencies – expressed through legal institutions, symbolic systems, as well as information and media structures – are entangled.},
}