Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands

JIPITEC, vol. 12, num: 4, pp: 257-271, 2021

Abstract

This article discusses the use of automated decision-making (ADM) systems by public administrative bodies, particularly systems designed to combat social-welfare fraud, from a European fundamental rights law perspective. The article begins by outlining the emerging fundamental rights issues in relation to ADM systems used by public administrative bodies. Building upon this, the article critically analyses a recent landmark judgment from the Netherlands and uses this as a case study for discussion of the application of fundamental rights law to ADM systems used by public authorities more generally. In the so-called SyRI judgment, the District Court of The Hague held that a controversial automated welfare-fraud detection system (SyRI), which allows the linking and analysing of data from an array of government agencies to generate fraud-risk reports on people, violated the right to private life guaranteed under Article 8 of the European Convention on Human Rights (ECHR). The Court held that SyRI was insufficiently transparent, and contained insufficient safeguards, to protect the right to privacy, in violation of Article 8 ECHR. This was one of the first times that an ADM system used by welfare authorities has been halted on the basis of Article 8 ECHR. The article critically analyses the SyRI judgment from a fundamental rights perspective, examining how the Court brought principles contained in the General Data Protection Regulation within the rubric of Article 8 ECHR, as well as the importance the Court attaches to the principle of transparency under Article 8 ECHR. Finally, the article discusses how the Dutch government responded to the judgment and examines the proposed new legislation, which is arguably more invasive, concluding with some lessons that can be drawn for the broader policy and legal debate on ADM systems used by public authorities.

automated decision making, frontpage, fundamentele rechten, Grondrechten, Mensenrechten, nederland, SyRI-wetgeving

Bibtex

Article{nokey, title = {Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands}, author = {Appelman, N. and Fahy, R. and van Hoboken, J.}, url = {https://www.ivir.nl/publicaties/download/jipitec_2021_4.pdf https://www.jipitec.eu/issues/jipitec-12-4-2021/5407}, year = {2021}, date = {2021-12-16}, journal = {JIPITEC}, volume = {12}, number = {4}, pages = {257-271}, abstract = {This article discusses the use of automated decision-making (ADM) systems by public administrative bodies, particularly systems designed to combat social-welfare fraud, from a European fundamental rights law perspective. The article begins by outlining the emerging fundamental rights issues in relation to ADM systems used by public administrative bodies. Building upon this, the article critically analyses a recent landmark judgment from the Netherlands and uses this as a case study for discussion of the application of fundamental rights law to ADM systems used by public authorities more generally. In the so-called SyRI judgment, the District Court of The Hague held that a controversial automated welfare-fraud detection system (SyRI), which allows the linking and analysing of data from an array of government agencies to generate fraud-risk reports on people, violated the right to private life guaranteed under Article 8 of the European Convention on Human Rights (ECHR). The Court held that SyRI was insufficiently transparent, and contained insufficient safeguards, to protect the right to privacy, in violation of Article 8 ECHR. This was one of the first times that an ADM system used by welfare authorities has been halted on the basis of Article 8 ECHR. The article critically analyses the SyRI judgment from a fundamental rights perspective, examining how the Court brought principles contained in the General Data Protection Regulation within the rubric of Article 8 ECHR, as well as the importance the Court attaches to the principle of transparency under Article 8 ECHR. Finally, the article discusses how the Dutch government responded to the judgment and examines the proposed new legislation, which is arguably more invasive, concluding with some lessons that can be drawn for the broader policy and legal debate on ADM systems used by public authorities.}, keywords = {automated decision making, frontpage, fundamentele rechten, Grondrechten, Mensenrechten, nederland, SyRI-wetgeving}, }

Governing “European values” inside data flows: interdisciplinary perspectives

Irion, K., Kolk, A., Buri, M. & Milan, S.
Internet Policy Review, vol. 10, num: 3, 2021

Abstract

This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows. Protecting fundamental human rights and critical public interests that undergird European societies in a global digital ecosystem poses complex challenges, especially because the United States and China are leading in novel technologies. We envision a research agenda calling upon different disciplines to further identify and understand European values that can adequately perform under conditions of transnational data flows.

Artificial intelligence, Data flows, Data governance, Digital connectivity, European Union, European values, Human rights, Internet governance, Personal data protection, Public policy, Societal values

Bibtex

Article{Irion2021e, title = {Governing “European values” inside data flows: interdisciplinary perspectives}, author = {Irion, K. and Kolk, A. and Buri, M. and Milan, S.}, url = {https://policyreview.info/european-values}, doi = {https://doi.org/10.14763/2021.3.1582}, year = {2021}, date = {2021-10-11}, journal = {Internet Policy Review}, volume = {10}, number = {3}, pages = {}, abstract = {This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows. Protecting fundamental human rights and critical public interests that undergird European societies in a global digital ecosystem poses complex challenges, especially because the United States and China are leading in novel technologies. We envision a research agenda calling upon different disciplines to further identify and understand European values that can adequately perform under conditions of transnational data flows.}, keywords = {Artificial intelligence, Data flows, Data governance, Digital connectivity, European Union, European values, Human rights, Internet governance, Personal data protection, Public policy, Societal values}, }

Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

Big Data and Global Trade Law, Cambridge University Press, 2021

Abstract

Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.

Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law

Bibtex

Chapter{Irion2021bb, title = {Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows}, author = {Irion, K.}, url = {https://www.cambridge.org/core/books/big-data-and-global-trade-law/panta-rhei/B0E5D7851240E0D2F4562B3C6DFF3011}, doi = {https://doi.org/10.1017/9781108919234.015}, year = {2021}, date = {2021-07-05}, abstract = {Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.}, keywords = {Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law}, }

Panta rhei: A European Perspective on Ensuring a High-Level of Protection of Digital Human Rights in a World in Which Everything Flows

Amsterdam Law School Research Paper No. 2020, num: 38, 2020

Artificial intelligence, data flow, EU law, Human rights, WTO law

Bibtex

Article{Irion2020d, title = {Panta rhei: A European Perspective on Ensuring a High-Level of Protection of Digital Human Rights in a World in Which Everything Flows}, author = {Irion, K.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3638864}, year = {2020}, date = {2020-11-30}, journal = {Amsterdam Law School Research Paper No. 2020}, number = {38}, keywords = {Artificial intelligence, data flow, EU law, Human rights, WTO law}, }

European Court of Human Rights rules that collateral website blocking violates freedom of expression

Journal of Intellectual Property Law & Practice, vol. 15, iss.: 10, pp: 774–775, 2020

Freedom of expression, Human rights

Bibtex

Article{nokey, title = {European Court of Human Rights rules that collateral website blocking violates freedom of expression}, author = {Izyumenko, E.}, doi = {https://doi.org/10.1093/jiplp/jpaa135}, year = {2020}, date = {2020-10-23}, journal = {Journal of Intellectual Property Law & Practice}, volume = {15}, issue = {10}, pages = {774–775}, keywords = {Freedom of expression, Human rights}, }

From Flexible Balancing Tool to Quasi-Constitutional Straitjacket – How the EU Cultivates the Constraining Function of the Three-Step Test

Abstract

In the international intellectual property (IP) arena, the so-called “three-step test” regulates the room for the adoption of limitations and exceptions (L&Es) to exclusive rights across different fields of IP. Given the openness of the individual test criteria, it is tempting for proponents of strong IP protection to strive for the fixation of the meaning of the three-step test at the constraining end of the spectrum of possible interpretations. As the three-step test lies at the core of legislative initiatives to balance exclusive rights and user freedoms, the cultivation of the test’s constraining function and the suppression of the test’s enabling function has the potential to transform the three-step test into a bulwark against limitations of IP protection. The EU is at the forefront of a constraining use and interpretation of the three-step test in the field of copyright law. The configuration of the legal framework in the EU is worrisome because it obliges judges to apply the three-step test as an additional control instrument. It is not sufficient that an individual use falls within the scope of a statutory copyright limitation that explicitly permits this type of use without prior authorization. In addition, judges applying the three-step test also examine whether the specific form of use at issue complies with each individual criterion of the three-step test. Hence, the test serves as an instrument to further restrict L&Es that have already been defined precisely in statutory law. Not surprisingly, decisions from courts in the EU have a tendency of shedding light on the constraining aspect of the three-step test and, therefore, reinforcing the hegemony of copyright holders in the IP arena. The hypothesis underlying the following examination, therefore, is that the EU approach to the three-step test is one-sided in the sense that it only demonstrates the potential of the test to set additional limits to L&Es. The analysis focuses on this transformation of a flexible international balancing tool into a powerful confirmation and fortification of IP protection. For this purpose, the two facets of the international three-step test – its enabling and constraining function – are explored before embarking on a discussion of case law that evolved under the one-sided EU approach. Analyzing repercussions on international lawmaking, it will become apparent that the EU approach already impacted the further development of international L&Es. Certain features of the Marrakesh Treaty clearly reflect the restrictive EU approach.

access to knowledge, Berne Convention, Copyright, EU law, frontpage, Human rights, limitations and exceptions, Marrakesh Treaty, rights of disabled persons, transformative use, TRIPS Agreement

Bibtex

Chapter{Senftleben2020b, title = {From Flexible Balancing Tool to Quasi-Constitutional Straitjacket – How the EU Cultivates the Constraining Function of the Three-Step Test}, author = {Senftleben, M.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3576019}, year = {2020}, date = {2020-04-16}, abstract = {In the international intellectual property (IP) arena, the so-called “three-step test” regulates the room for the adoption of limitations and exceptions (L&Es) to exclusive rights across different fields of IP. Given the openness of the individual test criteria, it is tempting for proponents of strong IP protection to strive for the fixation of the meaning of the three-step test at the constraining end of the spectrum of possible interpretations. As the three-step test lies at the core of legislative initiatives to balance exclusive rights and user freedoms, the cultivation of the test’s constraining function and the suppression of the test’s enabling function has the potential to transform the three-step test into a bulwark against limitations of IP protection. The EU is at the forefront of a constraining use and interpretation of the three-step test in the field of copyright law. The configuration of the legal framework in the EU is worrisome because it obliges judges to apply the three-step test as an additional control instrument. It is not sufficient that an individual use falls within the scope of a statutory copyright limitation that explicitly permits this type of use without prior authorization. In addition, judges applying the three-step test also examine whether the specific form of use at issue complies with each individual criterion of the three-step test. Hence, the test serves as an instrument to further restrict L&Es that have already been defined precisely in statutory law. Not surprisingly, decisions from courts in the EU have a tendency of shedding light on the constraining aspect of the three-step test and, therefore, reinforcing the hegemony of copyright holders in the IP arena. The hypothesis underlying the following examination, therefore, is that the EU approach to the three-step test is one-sided in the sense that it only demonstrates the potential of the test to set additional limits to L&Es. The analysis focuses on this transformation of a flexible international balancing tool into a powerful confirmation and fortification of IP protection. For this purpose, the two facets of the international three-step test – its enabling and constraining function – are explored before embarking on a discussion of case law that evolved under the one-sided EU approach. Analyzing repercussions on international lawmaking, it will become apparent that the EU approach already impacted the further development of international L&Es. Certain features of the Marrakesh Treaty clearly reflect the restrictive EU approach.}, keywords = {access to knowledge, Berne Convention, Copyright, EU law, frontpage, Human rights, limitations and exceptions, Marrakesh Treaty, rights of disabled persons, transformative use, TRIPS Agreement}, }

Prospective Policy Study on Artificial Intelligence and EU Trade Policy

Irion, K. & Williams, J.
2020

Abstract

Artificial intelligence is poised to be the 21st century’s most transformative general purpose technology that mankind has ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed fairly location-independently. Cross-border trade in digital services which incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.

Artificial intelligence, EU law, Human rights, Transparency, WTO law

Bibtex

Report{Irion2020b, title = {Prospective Policy Study on Artificial Intelligence and EU Trade Policy}, author = {Irion, K. and Williams, J.}, url = {https://www.ivir.nl/ivir_policy-paper_ai-study_online/ https://www.ivir.nl/ivir_artificial-intelligence-and-eu-trade-policy-2/}, year = {2020}, date = {2020-01-21}, abstract = {Artificial intelligence is poised to be the 21st century’s most transformative general purpose technology that mankind has ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed fairly location-independently. Cross-border trade in digital services which incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.}, keywords = {Artificial intelligence, EU law, Human rights, Transparency, WTO law}, }

Discrimination, artificial intelligence, and algorithmic decision-making

vol. 2019, 2019

Abstract

This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are nondiscrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current nondiscrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.

ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten

Bibtex

Report{Borgesius2019, title = {Discrimination, artificial intelligence, and algorithmic decision-making}, author = {Zuiderveen Borgesius, F.}, url = {https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73}, year = {2019}, date = {2019-02-08}, volume = {2019}, pages = {}, abstract = {This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are nondiscrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current nondiscrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.}, keywords = {ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten}, }

Expert Opinion: Legal basis for multilateral exchange of information

van Eijk, N. & Ryngaert, C.M.J.
2018

Abstract

Appendix IV to CTIVD report no. 56, the review report on the multilateral exchange of data on (alleged) jihadists by the AIVD

gegevensbescherming, Mensenrechten, Privacy, veiligheidsdiensten

Bibtex

Report{vanEijk2018f, title = {Expert Opinion: Legal basis for multilateral exchange of information}, author = {van Eijk, N. and Ryngaert, C.M.J.}, url = {https://www.ivir.nl/publicaties/download/Expert_opinion_CTIVD.pdf}, year = {2018}, date = {2018-05-03}, abstract = {Appendix IV to CTIVD report no. 56, the review report on the multilateral exchange of data on (alleged) jihadists by the AIVD}, keywords = {gegevensbescherming, Mensenrechten, Privacy, veiligheidsdiensten}, }

Deskundigenbericht: Juridische grondslag multilaterale informatie-uitwisseling

van Eijk, N. & Ryngaert, C.M.J.
vol. 2018, 2018

frontpage, gegevensuitwisseling, Mensenrechten, Privacy, toezicht, veiligheidsdiensten

Bibtex

Article{vanEijk2018e, title = {Deskundigenbericht: Juridische grondslag multilaterale informatie-uitwisseling}, author = {van Eijk, N. and Ryngaert, C.M.J.}, url = {https://www.ivir.nl/publicaties/download/Deskundigenbericht.pdf}, year = {2018}, date = {2018-04-03}, volume = {2018}, pages = {}, keywords = {frontpage, gegevensuitwisseling, Mensenrechten, Privacy, toezicht, veiligheidsdiensten}, }