There is a rapid and irreversible shift towards incorporating artificially intelligent (AI) diagnostic and decision-making systems into medical practice. This development is driven by their potential benefits: deep neural networks trained to interpret ultrasound and MRI scans, for instance, have been shown to outperform human radiologists and pathologists in identifying certain diseases. These systems could therefore lead to major improvements in medical practice, allowing for more accurate diagnosis in less time and at lower cost, thereby freeing time and resources that professionals can spend with their patients.
The black-box nature of these systems, however, makes it hard, if not impossible, to explain how they arrive at their diagnoses. This raises serious legal and moral concerns: a lack of suitable explanations means these systems may be in violation of the GDPR, which establishes patients’ “right to an explanation”. These concerns have spawned a growing literature within AI on methods that aim to construct explanations of such systems.
This workshop addresses the formal, ethical, and epistemological questions that this development raises, such as: Are these new formal methods successful in offering suitable explanations? What does explanation mean in this context, and what functions does it fulfill? Should a right to explanation really exist in the context of medicine, and how should it be balanced against potentially improved accuracy? What is, in fact, the current contribution of AI to medicine, and how vital is it?
In discussing these and related questions, our workshop will bring together the normative elements regarding explanation with the epistemological and formal elements, so that we can establish a better understanding of future medical AI and, hopefully, improve it.
12–14 April 2021, 09:30–17:00. The latest program is available under the workshop files.