The hype around the promises of so-called Big Data has, at the same time, generated a need for more realistic and critical analyses of the opportunities and risks associated with the widespread use of data analytics in society. It is now widely acknowledged that the ethical dimensions and social consequences of data go beyond the collection and management of data. The spread of profiling, prediction, and algorithmic decision making across societal domains (commerce, insurance, access to social support, job hiring, law enforcement, etc.) hard-codes existing systems of power, and the privileges and disadvantages they produce for individuals, into our social processes (O’Neil, 2016). Moreover, these systems are often opaque and inaccessible to those whose lives they ultimately impact (Pasquale, 2015).
Algorithmic discrimination is almost always an unintentional, emergent property of an algorithm’s use rather than a conscious design choice, and is therefore hard to identify (Barocas & Selbst, 2016). To know how to ensure fairness in algorithmic decision making, we need a deeper understanding of the interplay between data, algorithms, interpretation, and the decision to act. How trustworthy are algorithmic identities? What is the power relation between data analysts and data subjects? How are existing power structures reproduced through algorithms? To what extent do algorithmic, procedural, or normative solutions exist, and if so, when are they appropriate? Addressing such questions requires thorough reflection on the social context of the identities of both data professionals and data subjects, and on the ways in which implicit and explicit power structures are encoded into algorithmic decisions.
To date, however, much work in this domain has tended to focus on narrow facets of individual identities, considering algorithmic discrimination along singular, established protected categories such as race, ethnicity, or gender. To further our collective understanding of issues in data analytics, algorithmic discrimination, and individual identity, this workshop will explicitly adopt an intersectional lens. The concept of intersectionality refers to “the interaction of multiple identities and experiences of exclusion and subordination” (Davis, 2008). It was concretized in the work of American legal scholar Kimberlé Crenshaw (1989), whose work on antidiscrimination effectively critiqued the law’s inability to recognize discrimination along multiple axes: the law could address discrimination by gender or by race, but not both simultaneously.
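The same limitation carries over to algorithmic auditing. As a minimal illustrative sketch (the hiring scenario and all numbers below are hypothetical, chosen purely for illustration), consider how a disparity audit that checks each protected attribute separately can pass while a stark disparity remains visible only at the intersection of two attributes:

```python
# Illustrative sketch: a single-axis fairness audit can mask discrimination
# that only appears at the intersection of two attributes.
# All group sizes and selection counts are hypothetical.
import pandas as pd

rows = [
    # (race, gender, applicants, selected)
    ("white", "male",   400, 240),  # selection rate 0.60
    ("white", "female", 400, 240),  # selection rate 0.60
    ("black", "male",   400, 240),  # selection rate 0.60
    ("black", "female",  50,   5),  # selection rate 0.10
]
df = pd.DataFrame(rows, columns=["race", "gender", "applicants", "selected"])

def rates(by):
    """Selection rate per group, aggregated over the given attribute(s)."""
    g = df.groupby(by)[["applicants", "selected"]].sum()
    return (g["selected"] / g["applicants"]).round(3)

# Single-axis audits: each protected attribute considered on its own.
print(rates("gender"))  # male 0.600 vs female 0.544 -> ratio 0.91, passes a four-fifths check
print(rates("race"))    # white 0.600 vs black 0.544 -> ratio 0.91, passes as well

# Intersectional audit: the joint subgroup reveals the disparity.
print(rates(["race", "gender"]))  # black/female 0.100 vs white/male 0.600 -> ratio 0.17
```

In this construction, each single-axis comparison shows only a modest gap, yet the intersectional subgroup is selected at one sixth the rate of the most advantaged group, echoing precisely the blind spot Crenshaw identified in antidiscrimination law.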
Overall, intersectionality’s focus on moving beyond single-issue or single-category analyses of discrimination and oppression speaks directly to our current moment in the critical study of data and algorithms. Although the humanities and social sciences have been part of the conversation on algorithmic discrimination, the specific insights and articulations of intersectionality theory have not received due attention. The current debates on algorithmic discrimination demonstrate the urgency of bringing these two bodies of work into direct conversation.
This workshop aims to bring together the algorithmic discrimination and intersectionality communities to examine both topics jointly. We will address the complexity of intersecting power systems in society, examine how these are encoded in algorithmic decisions, and explore ways to deal with them. We recognize that this requires translating insights and discussions between disciplines that each have their own cultures: computer science, artificial intelligence, sociology, media studies, science and technology studies, law, and ethics. As bridging these disciplines is central to our workshop, we ask participants to be constructive and open-minded about differences in methodology and academic culture between them.
At the workshop we will work towards a research strategy document for the workshop theme. We hope to initiate academic publications, proposals for special issues or edited volumes, and follow-up meetings and/or projects. Furthermore, we intend to build a solid, international, and multidisciplinary community around the workshop theme. Through the workshop, we also aim to give the FATML (Fairness, Accountability and Transparency in Machine Learning) initiative a stronger basis in Europe, and in the Netherlands in particular.