Practical fundamental rights impact assessments
Abstract
The European Union’s General Data Protection Regulation requires organizations to perform a Data Protection Impact Assessment (DPIA) to consider the fundamental rights risks of their artificial intelligence (AI) systems. However, assessing these risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance regarding DPIAs has largely focused on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic, because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in the organizations using AI. To address this, we introduce a practical, four-phased framework that assists organizations in performing fundamental rights impact assessments. This involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of the parties involved in the AI system; (ii) assessing the risks arising from the system’s development; (iii) justifying why the risks of potential infringements of rights are proportionate; and (iv) adopting organizational and/or technical measures to mitigate the risks identified. We further indicate how regulators might support these processes with practical guidance.
BibTeX
@article{nokey,
title = {Practical fundamental rights impact assessments},
author = {Janssen, H. and Seng Ah Lee, M. and Singh, J.},
doi = {10.1093/ijlit/eaac018},
year = {2022},
date = {2022-11-21},
journal = {International Journal of Law and Information Technology},
volume = {30},
number = {2},
pages = {200--232},
}