Tutorial Program

8:00am Welcome & Introduction: The need for trustworthy AI in medical imaging (and beyond)
Marco Lorenzi - INRIA
8:15am Privacy-preserving AI for medical imaging
Prof. Daniel Rückert - Technical University of Munich
Artificial intelligence (AI) methods have the potential to revolutionize the domain of medicine, for example, in medical imaging, where the application of advanced machine learning techniques, in particular, deep learning, has achieved remarkable success. However, the broad application of AI techniques in medicine is currently hindered by limited dataset availability for algorithm training and validation, partly due to legal and ethical requirements to protect patient privacy. Here, we present an overview of current and next-generation methods for federated, secure and privacy-preserving artificial intelligence with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.
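The federated training setting the talk covers can be illustrated with a minimal sketch of federated averaging: each site trains on its own private data and only model weights cross institutional boundaries. The function names and the toy least-squares task below are illustrative assumptions, not part of any specific framework discussed in the talk.

```python
# Minimal sketch of federated averaging (FedAvg): clients train locally
# and share only model weights, never raw patient data.
# All names and the toy regression task are illustrative.

def local_update(weights, data, lr=0.05):
    """One pass of least-squares gradient descent on a client's private data."""
    w = weights[:]
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_average(client_weights):
    """Server aggregates client models by simple (unweighted) averaging."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two hospitals, each holding private (x, y) pairs following y = 2 * x
client_a = [([1.0], 2.0), ([2.0], 4.0)]
client_b = [([3.0], 6.0), ([4.0], 8.0)]

global_w = [0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(global_w, d) for d in (client_a, client_b)]
    global_w = federated_average(updates)

print(round(global_w[0], 2))  # converges to the shared slope 2.0
```

Note that while raw data never leaves a site, the shared weight updates themselves can leak information, which is exactly the class of attack vectors the talk surveys.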
9:00am Technical Robustness and Bias in Medical Imaging
Maria A. Zuluaga - EURECOM
Technical robustness refers to the capacity of AI systems to be resilient to risks and to behave reliably, minimising and preventing unintentional and unexpected harm. This talk will cover three of the key points that robust AI systems should address: 1) safety and fall-back plans; 2) accuracy; and 3) reliability and reproducibility. We will discuss to what extent these are being addressed in current medical imaging applications, and conclude with an overview of the effect of bias on a system's robustness, and related mitigation strategies.
9:30am Coffee Break
10:00am FUTURE-AI: International guidelines and consensus recommendations for trustworthy AI
Dr. Karim Lekadir - University of Barcelona
Despite major advances in AI for medical imaging, the development and deployment of AI technologies remain limited compared to the research output in the field. Over recent years, concerns have been expressed about the potential risks, ethical implications and general lack of trust associated with emerging AI technologies in the real world. In particular, AI tools continue to be viewed as complex and opaque, prone to errors and biases, and potentially unsafe or unethical for patients. This talk will present FUTURE-AI (www.future-ai.eu), a code of practice recently defined by an international consortium of over 80 experts in the field, to ensure future medical AI tools are developed to be trusted and accepted by patients, health professionals, health organisations and authorities. In particular, the guidelines recommend building AI solutions that are Fair, Universal, Traceable, Usable, Robust and Explainable (FUTURE-AI) and offer concrete recommendations that cover the whole AI production lifecycle, from AI design and development to AI validation and operation. A concrete example from treatment planning in breast cancer will be presented to illustrate the potential of the FUTURE-AI guidelines.
10:45am Cryptography for privacy-preserving AI: Challenges and solutions
Melek Önen - EURECOM
The goal of Privacy Preserving Machine Learning (PPML) is to design customized algorithms that, by construction, preserve the privacy of the processed data. Fully homomorphic encryption and secure multi-party computation are popular cryptographic techniques for PPML. Yet, these often incur high computational and/or communication costs. In this talk, we will analyse the tension between ML techniques and the relevant cryptographic tools, and review existing solutions that address these privacy requirements.
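One building block behind the secure multi-party computation techniques mentioned above can be sketched with additive secret sharing: each party holds a random-looking share, and only the sum of all shares reveals the secret, so aggregates can be computed without exposing any party's input. This is a didactic toy, not a hardened protocol; the prime, the party count and the hospital scenario are assumptions for illustration.

```python
import random

# Minimal sketch of additive secret sharing over a prime field, a core
# building block of secure multi-party computation (MPC).
# Illustrative only; real PPML systems use hardened protocols and libraries.

P = 2**61 - 1  # a Mersenne prime acting as the field modulus

def share(secret, n_parties):
    """Split a secret into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Two hospitals each secret-share a local statistic among three parties;
# the parties add their shares component-wise, so the aggregate (200) is
# computed without either hospital revealing its input (120 or 80).
a_shares = share(120, 3)
b_shares = share(80, 3)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 200
```

The communication cost is already visible in this toy: every value must be split and distributed to all parties, which is one source of the overheads the talk analyses.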
11:15am Closing Remarks
Marco Lorenzi - INRIA