Fairness in Algorithmic Decision Making: A Domain-Specific Approach

Hybrid

21 - 25 March 2022

Venue: Lorentz Center@Snellius


Algorithmic decision-making powered by modern machine learning and artificial intelligence (AI) can improve our society in many ways, but can also have discriminatory effects. For instance, the state can use AI to detect welfare fraud, and insurers can use AI to predict risks. AI decision-making presents our society with serious challenges, which can be divided into two broad categories:

(i)  AI can lead to harm to, or discrimination against, people with a certain ethnicity, gender, or another characteristic protected by non-discrimination law. However, such AI-driven discrimination can remain hidden, not least because many AI systems are opaque.

(ii)  AI can be unfair in other ways. For example, AI-driven differentiation could reinforce socio-economic inequality, or AI could incorrectly predict that somebody will not be able to repay a mortgage loan.

While many aspects of (un)fairness cut across domains, others are specific to particular domains. Moreover, re-designing algorithmic systems always requires a careful analysis not only of abstract goals, but also of the specific needs of direct and indirect stakeholders, and of the knowledge, methods, rules, and infrastructure in the domain at hand. Understanding and working with these domain specifics requires interdisciplinary collaboration. Therefore, unlike much other work on AI fairness, this workshop does not focus on metrics and methods that are largely understood to be context-free. Instead, our starting point is concrete domains and case studies, combined with a commitment to interdisciplinary collaboration from the outset and the participation of different stakeholders.

We will discuss three specific application areas:
- fairness in recruitment
- fraud detection in the welfare state
- fairness in banking and insurance

Through these use cases, we aim to develop a common language, a better mutual understanding of the fairness challenges that arise from the use of AI in these sectors, and, ideally, a set of shared new insights or questions around the issue. The main research questions to be addressed in the workshop are:

- Can different disciplines agree on requirements for fairness in AI-driven decision-making, at least with respect to concrete domains?
- How can we bridge definitions of fairness across disciplines, and how can we avoid the ‘fairness traps’ that result from different interpretations of fairness across the technical and social sciences? How do we deal with different conceptions of fairness that arise even within a single scientific area?
- How do such bridged definitions play out in different sectors and what can we learn from case studies of trying to mitigate bias through AI in these sectors?
- How can we draw up a better common language and set of tools around fairness in and of AI, in order to better anticipate and/or mitigate unfair treatment due to AI-based digital services or systems? And how can we better flag, test, and audit such systems?
- Can we identify the main challenges of developing and putting fairness-compliant AI into practice and propose a roadmap?


