Keyword: ai
The rise of technology courts, or: How technology companies re-invent adjudication for a digital world
Computer Law & Security Review, vol. 56, num: 106118, 2025
Abstract
The article “The Rise of Technology Courts” explores the evolving role of courts in the digital world, where technological advancements and artificial intelligence (AI) are transforming traditional adjudication processes. It argues that traditional courts are undergoing a significant transition due to digitization and the increasing influence of technology companies. The paper frames this transformation through the concept of the “sphere of the digital,” which explains how digital technology and AI redefine societal expectations of what courts should be and how they function.
The article highlights that technology is not only changing the materiality of courts—moving from physical buildings to digital portals—but also affecting their symbolic function as public institutions. It discusses the emergence of AI-powered judicial services, online dispute resolution (ODR), and technology-driven alternative adjudication bodies like the Meta Oversight Board. These developments challenge the traditional notions of judicial authority, jurisdiction, and legal expertise.
The paper concludes that while these technology-driven solutions offer increased efficiency and accessibility, they also raise fundamental questions about the legitimacy, transparency, and independence of adjudicatory bodies. As technology companies continue to shape digital justice, the article argues, there are lessons to be learned for the role and structure of traditional courts to ensure that human rights and public values are upheld.
ai, big tech, digital transformation, digitisation, justice, values
Generative AI and Creative Commons Licences – The Application of Share Alike Obligations to Trained Models, Curated Datasets and AI Output
JIPITEC, vol. 15, iss. 3, 2024
Abstract
This article maps the impact of Share Alike (SA) obligations and copyleft licensing on machine learning, AI training, and AI-generated content. It focuses on the SA component found in some of the Creative Commons (CC) licences, distilling its essential features and layering them onto machine learning and content generation workflows. Based on our analysis, there are three fundamental challenges related to the life cycle of these licences: tracing and establishing copyright-relevant uses during the development phase (training), the interplay of licensing conditions with copyright exceptions and the identification of copyright-protected traces in AI output. Significant problems can arise from several concepts in CC licensing agreements (‘adapted material’ and ‘technical modification’) that could serve as a basis for applying SA conditions to trained models, curated datasets and AI output that can be traced back to CC material used for training purposes. Seeking to transpose Share Alike and copyleft approaches to the world of generative AI, the CC community can only choose between two policy approaches. On the one hand, it can uphold the supremacy of copyright exceptions. In countries and regions that exempt machine-learning processes from the control of copyright holders, this approach leads to far-reaching freedom to use CC resources for AI training purposes. At the same time, it marginalises SA obligations. On the other hand, the CC community can use copyright strategically to extend SA obligations to AI training results and AI output. To achieve this goal, it is necessary to use rights reservation mechanisms, such as the opt-out system available in EU copyright law, and subject the use of CC material in AI training to SA conditions. Following this approach, a tailor-made licence solution can grant AI developers broad freedom to use CC works for training purposes. 
In exchange for the training permission, however, AI developers would have to accept the obligation to pass on – via a whole chain of contractual obligations – SA conditions to recipients of trained models and end users generating AI output.
ai, Copyright, creative commons, Licensing, machine learning
Prompts tussen vorm en inhoud: de eerste rechtspraak over generatieve AI en het werk
Auteursrecht, iss. 3, pp: 129-134, 2024
Abstract
Can the use of generative AI systems produce a copyright-protected work? Two years after the introduction of Dall-E and ChatGPT, a body of case law is beginning to form. The core question is whether steering such systems by means of prompts (instructions) is sufficient to qualify the output as a 'work'. Drawing on the earliest case law from the United States, China and Europe, this article examines this difficult question in more depth.
ai, Copyright
Trademark Law, AI-driven Behavioral Advertising, and the Digital Services Act: Toward Source and Parameter Transparency for Consumers, Brand Owners and Competitors
Research Handbook on Intellectual Property and Artificial Intelligence, Edward Elgar Publishing, 2022, pp: 309-324, ISBN: 9781800881891
Abstract
In its Proposal for a Digital Services Act (“DSA”), the European Commission highlighted the need for new transparency obligations to arrive at accountable digital services, ensure a fair environment for economic operators and empower consumers. However, the proposed new rules seem to focus on transparency measures for consumers. According to the DSA Proposal, platforms, such as online marketplaces, must ensure that platform users receive information enabling them to understand when and on whose behalf an advertisement is displayed, and which parameters are used to direct advertising to them, including explanations of the logic underlying systems for targeted advertising. Statements addressing the interests of trademark owners and trademark policy are sought in vain. Against this background, the analysis sheds light on AI-driven behavioural advertising practices and the policy considerations underlying the proposed new transparency obligations. In the light of the debate on trademark protection in keyword advertising cases, it will show that not only consumers but also trademark owners have a legitimate interest in receiving information on the parameters that are used to target consumers. The discussion will lead to the insight that lessons from the keyword advertising debate can play an important role in the transparency discourse because they broaden the spectrum of policy rationales and guidelines for new transparency rules. In addition to the current focus on consumer empowerment, the enhancement of information on alternative offers in the marketplace and the strengthening of trust in AI-driven, personalized advertising enter the picture. On balance, there are good reasons to broaden the scope of the DSA initiative and ensure access to transparency information for consumers and trademark owners alike.
ai, Trademark law
The commodification of trust
Blockchain & Society Policy Research Lab Research Nodes, num: 1, 2021
Abstract
Fundamental, wide-ranging, and highly consequential transformations are taking place in interpersonal and systemic trust relations due to the rapid adoption of complex, planetary-scale digital technological innovations. Trust is remediated by planetary-scale techno-social systems, which leads to the privatization of trust production in society and the ultimate commodification of trust itself.
Modern societies rely on communal, public and private logics of trust production. Communal logics produce trust by the group for the group, and are based on familial, ethnic, religious or tribal relations, professional associations, epistemic or value communities, and groups with a shared location or a shared past. Public trust logics developed in the context of the modern state and produce trust as a free public service: abstract, institutionalized frameworks and institutions, such as the press, public education, science, and various arms of the bureaucratic state, create familiarity, control, and insurance in social, political, and economic relations. Finally, private trust producers sell confidence as a product: lawyers, accountants, credit rating agencies, insurers, but also commercial brands offer trust for a fee.
With the emergence of the internet and digitization, a new class of private trust producers emerged. Online reputation management services, distributed ledgers, and AI-based predictive systems are widely adopted technological infrastructures designed to facilitate trust-necessitating social and economic interactions by controlling the past, the present and the future, respectively. These systems enjoy immense economic success and are adopted en masse by individuals and institutional actors alike.
The emergence of private, technical means of trust production paves the way towards the wide-scale commodification of trust, where trust is produced as a commercial activity, conducted by private parties for economic gain, often far removed from the loci where trust-necessitating social interactions take place. The remediation and consequent privatization and commodification of trust production have a number of potentially adverse social effects: they may decontextualize trust relationships, remove trust from local social, cultural and relational contexts, and change the calculus of interpersonal trust relations. Maybe more importantly, as more and more social and economic relations are conditional upon having access to, and good standing in, private trust infrastructures, commodification turns trust into a matter of continuous labor or devastating exclusion. Invoking Karl Polanyi's work on fictitious commodities, I argue that the privatization and commodification of trust may have a catastrophic impact on the most fundamental layers of the social fabric.
ai, blockchains, commodification, frontpage, Informatierecht, Karl Polanyi, reputation, trust, trust production
News Recommenders and Cooperative Explainability: Confronting the contextual complexity in AI explanations
ai, frontpage, news recommenders, Technologie en recht
Netherlands/Research
1029, pp: 164-175
Abstract
How are AI-based systems being used by private companies and public authorities in Europe? This new report by AlgorithmWatch and Bertelsmann Stiftung sheds light on the role automated decision-making (ADM) systems play in our lives. Based on the most comprehensive research on the issue conducted in Europe so far, the report covers the current use of and policy debates around ADM systems in 16 European countries and at EU level.
ai, automated decision making, frontpage, Technologie en recht
Discrimination, artificial intelligence, and algorithmic decision-making
vol. 2019, 2019
Abstract
This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more.
The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.
ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten
Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns
Araujo, T., Vreese, C.H. de, Helberger, N., Kruikemeier, S., Weert, J. van, Bol, N., Oberski, D., Pechenizkiy, M., Schaap, G. & Taylor, L.
2018
Abstract
Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels.
The present report contributes to informing this public debate by providing the results of a survey with 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands.
This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.
ai, algoritmes, Artificial intelligence, automated decision making, frontpage