From 22 to 24 May 2024, the 17th international CPDP conference was held in Brussels.
Several of our researchers attended the conference, and IViR researchers also moderated a number of the panels:
Right to Research: Responsible Access to Data
Moderator: Kristina Irion.
Speakers: Lori Roussey, Arman Noroozian, Claudine Tinsman & Jef Ausloos.
Scientific research hinges on the ability to observe the world around us. The digital transformation of life, work and society means that, in order to observe, researchers increasingly need access to data in and about digital infrastructures. Researcher access to data is not just necessary to carry out research about digital infrastructures and their impact on humans, our society and the environment. It is increasingly vital to study virtually any other phenomenon that is digitally intermediated, whether in engineering, medical, psychological, or sociological research. Yet digital infrastructures, whether operated by public or private sector actors, can be impenetrable fortresses, challenging academics’ and universities’ core mission as public interest-driven knowledge producers. Although EU digital and data legislation contains numerous data access and transparency provisions, they are rarely formulated with scientific research in mind. This panel presents key findings of a recent IViR study that maps ‘access to data for research’. Together with the panelists we will explore a number of issues that remain unresolved:
• What role is there for (EU) legislation in ensuring data access for scientific research?
• How should scientific research approach ethical and normative constraints to data access?
• What is the impact of data access and transparency rules for research on AI development?
Privacy and Surveillance in the Quantum Age: Developments in Quantum Sensing Technologies and their Implications
Moderator: Bengi Zeybek.
Speakers: Philippe Bouyer, Maša Galic, Chris Hoofnagle & Henning Soller.
In the quantum technology innovation landscape, quantum sensing technology (QST) receives relatively little attention. Compared to other quantum technologies, though, QSTs have a higher technology readiness level and wider possible application areas, spanning defence, intelligence, space, biomedicine, mining, and environmental monitoring. QSTs may improve the performance of current advanced sensing systems and enable new applications, which could have significant societal implications. QSTs could play a crucial role in transforming the surveillance capabilities of state and non-state actors, such as militaries, intelligence services, law enforcement and commercial entities. And with advanced computational capacity, these actors could wield enormous power in sensor data analysis. Most QSTs are currently being tested in lab environments, and it is hard to predict exactly how and where they will be adopted. Still, developments in QSTs warrant an early exploration of their policy implications and possible effects on fundamental rights, in particular privacy and data protection.
• What are the key properties of quantum sensing technologies and what are their potential applications? What opportunities do they bring and what are the challenges for deploying them outside of the lab?
• What are some of the notable investment developments and who are the key actors in the quantum sensing space?
• What are the main privacy law and policy implications of quantum sensing? How can law respond to the dual-use nature of certain quantum sensing applications and help strike a proper balance between the different societal interests implicated by quantum sensing technologies?
• What are some of the current issues, from a privacy and surveillance studies perspective, of advanced sensing applications?
The Governance of Quantum Computing
Moderator: Joris van Hoboken.
Speakers: Matthias Troyer, Christian Schaffner, Aparna Surendra & Marieke Hood.
Quantum computing has attracted increasing attention in recent years, with significant private and public investment going into the development of fault-tolerant quantum computers. While it is undisputed that achieving this goal would be a major scientific breakthrough of the 21st century, questions remain about its practical applications and benefits relative to classical (super)computing. What applications of quantum computing should be anticipated in the current development phase of this new technology, and what governance questions and approaches would be suitable to steer its use towards the common good? In this panel, we will take stock of the state of play in quantum computing research and development and discuss some of the main governance challenges this new technology raises.
• What are realistic expectations with respect to the development of fault-tolerant quantum computers?
• What are the most promising use cases for quantum computing, besides Shor’s algorithm?
• What are possible and appropriate governance responses to address the risks and benefits of this new technology?
• How can we ensure equitable access to this technology globally?
The Role of Research and Researchers in AI Governance
Moderator: Natali Helberger.
Speakers: Oana Goga, Matthias Spielkamp & Sven Schade.
Because of the complexity of AI governance and the need for expertise to understand the technical, economic, societal and ethical implications of AI, researchers have a prominent role in the AI Act, as critical observers, independent advisors, alternative innovators, explainers, fact checkers and red flaggers. But writing the role of researchers into laws like the DSA or the AI Act is only the first step towards an informed and evidence-based governance framework. The next step still needs to be taken: to define exactly what the position of researchers is, what affordances, rights and support they need from regulators and society to play that role, how it aligns with the way academia works, and how to make sure that insights from research and academia reach policy makers and regulators.
• What are the expectations of laws and policy makers for the role of researchers under the AI Act and the DSA?
• How should researchers be positioned vis-à-vis policy makers and regulators, and when does the role of academics as extended arms of regulators conflict with academic independence?
• What rights or affordances do academics need to be able to fulfil the role of ‘critical friend’ that these laws assign to them?
• More and more research into AI safety and responsible use takes place inside AI companies. How can we create a healthy research ecosystem?
Approaches to DSA Data Access
Moderator: Emilia Gómez
Speakers: Kathy Messmer, Paddy Leerssen, Kirsty Park, Claudia Canelles Quaroni & Veronique Cimina.
For a long time, the opacity of algorithmic systems was a barrier for those who sought to scrutinize them. Studies of online platforms often depended on the voluntary cooperation of the providers of those platforms. The Digital Services Act (DSA) changes this situation. Its Article 40 sets out how certain researchers can access certain data to study systemic risks and the effectiveness of mitigation measures. To obtain access, researchers must demonstrate that they can fulfil the data security and confidentiality requirements corresponding to each request and protect personal data, and their request must describe the appropriate technical and organizational measures they have put in place. In this session, we invite a panel of experts and the audience to discuss possible approaches researchers can take to meet those conditions, as set out in Article 40(8) DSA.
• How can researchers meet the relevant conditions, in particular concerning the protection of personal data?
• Which existing procedures, tools and infrastructures can be useful in this regard?
• Which kinds of expertise will be needed, and how can they be brought in?
• How can legal and technical experts work together effectively to prepare a successful application?
Decentralizing AI Fairness Decisions
Moderator: Jurriaan Parie.
Speakers: Laurens Naudts, David Nolan, Karolina Iwańska & Sofia Ranchordas.
Widespread AI systems, such as machine learning-based profiling and computer vision algorithms, lack established fairness methodologies. With the advent of the AI Act, regulators rely on self-assessment mechanisms to evaluate AI systems’ compliance with fundamental rights. But entrusting decentralized entities, e.g., data science teams, with identifying and resolving value tensions raises concerns. In practice, one soon runs into difficulties when trying to validate an algorithm, such as selecting appropriate metrics to measure fairness in data and algorithms (a brief sketch of what such a metric looks like follows the questions below). How can normative issues regarding open legal norms relating to proxy-discrimination and explainability be resolved? This panel explores how decentralized AI audits can be performed in a more transparent and inclusive manner with the help of the concept of “algoprudence” (jurisprudence for algorithms). Additionally, the panel discusses how institutional entities can actively guide AI developers to comply with, for example, existing non-discrimination regulations.
• From the perspective of both individual and institutional legal protection, what are the implications of decentralizing decisions regarding fundamental rights, and what issues might this resolve or introduce?
• How can normative disputes be settled when performing Fundamental Rights Impact Assessments (FRIAs) in AI development?
• What is the role of regulatory bodies in providing guidance for resolving normative challenges regarding AI fairness?
• What is “algoprudence” and how can it contribute to fairer AI decisions?
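To illustrate why selecting a fairness metric is itself a normative choice rather than a purely technical one, here is a minimal, hypothetical Python sketch computing two common group-fairness metrics for a binary classifier. All function names and data are illustrative assumptions, not material from the panel; on the same predictions, the two metrics can disagree.

```python
# Illustrative sketch only: two common group-fairness metrics for a binary
# classifier. Function names and toy data are hypothetical.
from typing import Sequence


def demographic_parity_diff(y_pred: Sequence[int], group: Sequence[int]) -> float:
    """Difference in positive-prediction rates between group 1 and group 0."""
    def rate(g: int) -> float:
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(1) - rate(0)


def equal_opportunity_diff(y_true: Sequence[int], y_pred: Sequence[int],
                           group: Sequence[int]) -> float:
    """Difference in true-positive rates (recall) between group 1 and group 0."""
    def tpr(g: int) -> float:
        hits = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(hits) / max(1, len(hits))
    return tpr(1) - tpr(0)


# Toy data: by one metric the classifier treats both groups identically,
# by the other it does not, so the answer to "is this system fair?"
# depends on which metric the auditing team chose.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))         # 0.0   (equal positive rates)
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.33 (unequal recall)
```

Which of these (or many other) definitions is “appropriate” cannot be read off the data alone; it is exactly the kind of open normative question the panel suggests “algoprudence” could help settle.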