Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations

Hugenholtz, P.
Chicago-Kent Law Review (forthcoming), 2024

Abstract

This essay explores AI-assisted content creation in light of EU and U.S. copyright law. The essay revisits a 2020 study commissioned by the European Commission, which was written before the surge of generative AI. Drawing from traditional legal doctrines, such as the idea/expression dichotomy and its equivalents in Europe, the author argues that iterative prompting may lead to copyright protection of GenAI-assisted output. The paper critiques recent U.S. Copyright Office guidelines that severely restrict registration of works created with the aid of GenAI. Human input, particularly in the conceptual and redaction phases, provides sufficient creative control to justify copyright protection of many AI-assisted works. With many of the expressive features being machine-generated, the scope of copyright protection of such works should, however, remain fairly narrow.

Artificial intelligence, artistic expression, Copyright

Bibtex

@article{nokey, title = {Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations}, author = {Hugenholtz, P.}, url = {https://www.ivir.nl/nl/publications/copyright-and-the-expression-engine-idea-and-expression-in-ai-assisted-creations/chicagokentlawreview2024/}, year = {2024}, date = {2024-11-05}, journal = {Chicago-Kent Law Review (forthcoming)}, abstract = {This essay explores AI-assisted content creation in light of EU and U.S. copyright law. The essay revisits a 2020 study commissioned by the European Commission, which was written before the surge of generative AI. Drawing from traditional legal doctrines, such as the idea/expression dichotomy and its equivalents in Europe, the author argues that iterative prompting may lead to copyright protection of GenAI-assisted output. The paper critiques recent U.S. Copyright Office guidelines that severely restrict registration of works created with the aid of GenAI. Human input, particularly in the conceptual and redaction phases, provides sufficient creative control to justify copyright protection of many AI-assisted works. With many of the expressive features being machine-generated, the scope of copyright protection of such works should, however, remain fairly narrow.}, keywords = {Artificial intelligence, artistic expression, Copyright}, }

De Grondwet en Artificiële Intelligentie

Dommering, E.
De Grondwet en nieuwe technologie: klaar voor de toekomst?: Twaalf pleidooien voor modernisering van de Grondwet, Ministerie van Binnenlandse Zaken en Koninkrijksrelaties, 2024, pp. 69-83

Artificial intelligence, Fundamental rights

Bibtex

@incollection{nokey, title = {De Grondwet en Artificiële Intelligentie}, author = {Dommering, E.}, booktitle = {De Grondwet en nieuwe technologie: klaar voor de toekomst?: Twaalf pleidooien voor modernisering van de Grondwet}, publisher = {Ministerie van Binnenlandse Zaken en Koninkrijksrelaties}, pages = {69-83}, url = {https://open.overheid.nl/documenten/9172451e-8e06-43a7-aed5-12f9a5233c3f/file}, year = {2024}, date = {2024-08-01}, keywords = {Artificial intelligence, Fundamental rights}, }

Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke

Keller, P.
Kluwer Copyright Blog, 2024

Artificial intelligence, Germany, text and data mining

Bibtex

@online{nokey, title = {Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke}, author = {Keller, P.}, url = {https://copyrightblog.kluweriplaw.com/2024/07/22/machine-readable-or-not-notes-on-the-hearing-in-laion-e-v-vs-kneschke/}, year = {2024}, date = {2024-07-22}, journal = {Kluwer Copyright Blog}, keywords = {Artificial intelligence, Germany, text and data mining}, }

How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp. 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Bibtex

@article{nokey, title = {How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization}, author = {Senftleben, M. and Quintais, J. and Meiring, A.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150}, year = {2024}, date = {2024-01-23}, journal = {Berkeley Technology Law Journal}, volume = {38}, issue = {3}, pages = {933-1010}, abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.}, keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content}, }

EU copyright law round up – fourth trimester of 2023

Trapova, A. & Quintais, J.
Kluwer Copyright Blog, 2024

Artificial intelligence, Copyright, EU

Bibtex

@online{nokey, title = {EU copyright law round up – fourth trimester of 2023}, author = {Trapova, A. and Quintais, J.}, url = {https://copyrightblog.kluweriplaw.com/2024/01/04/eu-copyright-law-round-up-fourth-trimester-of-2023/}, year = {2024}, date = {2024-01-04}, journal = {Kluwer Copyright Blog}, keywords = {Artificial intelligence, Copyright, EU}, }

Artificiële Intelligentie: waar is de werkelijkheid gebleven?

Dommering, E.
Computerrecht, iss. 6, num. 258, pp. 476-483, 2023

Abstract

Much commotion has arisen over the (overly) rapid deployment of AI in society. This article examines what AI (in particular ChatGPT) is. It then shows where the introduction of AI already causes immediate friction in the fields of copyright, privacy, freedom of expression, public decision-making and competition law. The article then considers whether the EU's AI Regulation will provide the answer. The conclusion is that it does so only to a very limited extent. Protection will therefore have to come from the norms of the individual fields. Finally, the article formulates four principles that can form an AI ‘meta-framework’ in each of these fields against which an AI product should be assessed.

Artificial intelligence

Bibtex

@article{nokey, title = {Artificiële Intelligentie: waar is de werkelijkheid gebleven?}, author = {Dommering, E.}, url = {https://www.ivir.nl/nl/publications/artificiele-intelligentie-waar-is-de-werkelijkheid-gebleven/ai-computerrecht-2023/}, year = {2023}, date = {2023-12-05}, journal = {Computerrecht}, issue = {6}, number = {258}, pages = {476-483}, abstract = {Er is veel ophef ontstaan over de (te) snelle toepassing van AI in de samenleving. Dit artikel onderzoekt wat AI (in het bijzonder ChatGPT) is. Vervolgens laat het zien waar de invoering van AI al direct wringt in de gebieden van het auteursrecht, de privacy, vrijheid van meningsuiting, openbare besluitvorming en mededingingsrecht. Daarna wordt stilgestaan bij de vraag of de AI-verordening van de EU daar het antwoord op zal zijn. De conclusie is dat dat maar zeer ten dele zo is. Bescherming zal dus moeten komen van normen uit de deelgebieden. Het artikel formuleert tot slot vier beginselen die in ieder deelgebied een AI ‘metakader’ kunnen vormen waarmee een AI-product moet worden beoordeeld.}, keywords = {Artificial intelligence}, }

An Interdisciplinary Toolbox for Researching the AI-Act

Metikoš, L.
Verfassungsblog, 2023

Artificial intelligence

Bibtex

@online{nokey, title = {An Interdisciplinary Toolbox for Researching the AI-Act}, author = {Metikoš, L.}, url = {https://verfassungsblog.de/an-interdisciplinary-toolbox-for-researching-the-ai-act/}, doi = {10.17176/20230908-062850-0}, year = {2023}, date = {2023-09-08}, journal = {Verfassungsblog}, keywords = {Artificial intelligence}, }

Generative AI, Copyright and the AI Act

Quintais, J.
Kluwer Copyright Blog, 2023

Abstract

Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised for instance via technological tools but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications on providers of generative AI models: on (1) transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.

Artificial intelligence, Copyright

Bibtex

@online{nokey, title = {Generative AI, Copyright and the AI Act}, author = {Quintais, J.}, url = {https://copyrightblog.kluweriplaw.com/2023/05/09/generative-ai-copyright-and-the-ai-act/}, year = {2023}, date = {2023-05-09}, journal = {Kluwer Copyright Blog}, abstract = {Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised for instance via technological tools but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications on providers of generative AI models: on (1) transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.}, keywords = {Artificial intelligence, Copyright}, }

A Primer and FAQ on Copyright Law and Generative AI for News Media

Quintais, J. & Diakopoulos, N.
2023

Artificial intelligence, Copyright, Media law, news

Bibtex

@online{nokey, title = {A Primer and FAQ on Copyright Law and Generative AI for News Media}, author = {Quintais, J. and Diakopoulos, N.}, url = {https://generative-ai-newsroom.com/a-primer-and-faq-on-copyright-law-and-generative-ai-for-news-media-f1349f514883}, year = {2023}, date = {2023-04-26}, keywords = {Artificial intelligence, Copyright, Media law, news}, }

Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act

Senftleben, M., Quintais, J. & Meiring, A.

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content

Bibtex

@online{nokey, title = {Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act}, author = {Senftleben, M. and Quintais, J. and Meiring, A.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150}, year = {}, abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.}, keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content}, }