The Future of AI: Ethical, Legal, and Societal Issues

28 January - 1 February 2019

Venue: Lorentz Center@Snellius


Aim & Description

In the near future, we will all experience Artificial Intelligence (AI) applications making decisions and acting in a wide range of domains, including transportation, finance, health care, education, public safety and security, entertainment, and manufacturing. AI applications, robots, and autonomous systems will act as autonomous actors whose independent decisions are often hard or impossible to justify or understand. Developing AI applications has always been a multidisciplinary activity, involving not only AI itself but also disciplines such as cognitive science and linguistics; the greater autonomy of AI systems and the greater impact of their decisions now mean that law and ethics must be involved as well. Involving all of these disciplines, however, runs into the problem that the conceptual frameworks in which the requirements and impact of AI applications are described differ widely across disciplines, as does the sophistication with which researchers and designers reason about these impacts. There is a large risk that this leads to designs whose impact has been neither foreseen nor intended by anyone, and that are undesirable from an ethical or legal point of view.

The goal of this workshop is to create a multidisciplinary research agenda that bridges the conceptual gap among the disciplines involved in developing AI applications, and to raise the level of sophistication at which designers reason about the impact of these applications. Given the nature of AI systems, this immediately raises the question of whether AI systems themselves can reason about these legal and ethical impacts, and, in turn, what the legal or ethical impact of that would be. This leads us to the following questions, to be discussed in the workshop:

* What are the legal or ethical impacts of AI systems?

* Can we define a unified conceptual framework to reason about these impacts, usable across all disciplines involved in designing AI systems?

* Can we build legally-aware or ethically-aware agents that use such a framework?

* What are the legal or ethical impacts of AI systems that themselves make legal or ethical decisions?

These questions are the topics of the first four days of the workshop, one per day. On Friday, we close the workshop with a public panel session in which the Future of AI is discussed with policy-makers and industry.

The outcome of this workshop should not only be a multidisciplinary research agenda; it should also be of help to policy-makers. The ethical, legal, and societal (ELS) issues discussed in the workshop are important in many concrete contexts: applications of deep learning may be prone to bias that is hard to identify but has undesirable impact; exploitation of personal data affects the privacy and autonomy of citizens; AI applications may put vulnerable people at risk; autonomous weapons may make life-or-death decisions; robots truly capable of ethical reasoning may themselves need legal protection; and accountability for important decisions may be dissipated.


