Modelling Uncertainty in Deep Learning for Safer Medical Image Processing

Abstract

Deep learning is now ubiquitous in medical image computing. As these technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems inevitably face situations where the correct decision or prediction is ambiguous. However, current methods rely disproportionately on deterministic algorithms, which lack a mechanism to represent and manipulate uncertainty about models and predictions. Over the last five years, I have explored probabilistic modelling as a framework for integrating uncertainty information into deep learning models, and studied its utility in a range of medical imaging applications. In this talk, I will give an overview of my past research and an outlook on future directions, focusing on methods for modelling (i) predictive, (ii) structural, and (iii) human uncertainty. Firstly, I will discuss the importance of quantifying predictive uncertainty, and of understanding its sources, for developing risk-averse and transparent medical image enhancement applications. Secondly, I will talk about our recent attempt at learning useful connectivity structures within a CNN for multi-task learning, and share preliminary results on an MR-only radiotherapy planning application. Lastly, I will present our recent work on modelling the uncertainty of multiple human annotators, which enables efficient learning in the presence of substantial label noise and disagreement, a pervasive issue in medical imaging datasets in practice.
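To make the first theme concrete: one common way to quantify predictive uncertainty and separate its sources (not necessarily the specific method covered in the talk) is Monte Carlo dropout over a heteroscedastic network. The sketch below assumes a hypothetical PyTorch model `net` that keeps dropout active and returns a per-pixel mean and log-variance; the spread of the means across stochastic passes estimates model (epistemic) uncertainty, while the averaged predicted variance estimates data (aleatoric) uncertainty.

import torch

def predict_with_uncertainty(net, x, n_samples=20):
    """Aggregate n_samples stochastic forward passes (MC dropout)."""
    net.train()  # keep dropout layers active at test time
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mean, log_var = net(x)           # heteroscedastic head
            means.append(mean)
            variances.append(log_var.exp())  # predicted noise variance
    means = torch.stack(means)               # shape: (n_samples, ...)
    variances = torch.stack(variances)

    prediction = means.mean(dim=0)
    aleatoric = variances.mean(dim=0)  # data uncertainty: mean predicted noise
    epistemic = means.var(dim=0)       # model uncertainty: spread across passes
    return prediction, aleatoric, epistemic

In an image-enhancement setting, the two resulting uncertainty maps can be inspected separately: high aleatoric values flag intrinsically noisy regions, while high epistemic values flag inputs unlike the training data.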

Date
Event
Research Seminar, Technische Universität München
Location
Munich, Germany

Invited by Daniel Rueckert