How To Teach Machines When Experts Disagree

Abstract

The predictive performance of supervised learning algorithms depends on the quality of labels. In a typical label collection process, multiple human annotators provide subjective, noisy estimates of the “truth”, shaped by their varying skill levels and biases. Blindly treating these noisy labels as the ground truth limits the accuracy of learning algorithms when annotators disagree strongly. This problem is critical in domains such as medical imaging, where both the annotation cost and the inter-observer variability are high. In this talk, I will discuss my recent work on disentangling, from noisy observations alone, the annotation noise of individual human experts and the underlying true label distributions, in both classification and structured prediction tasks. The key component of the approach is the combination of a model that describes the uncertainty associated with the human annotation process and a regularisation term that makes fitting such a model possible. I will also discuss how these methods relate to other noise-robust approaches, and some open challenges.
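The abstract describes the general recipe only at a high level: an explicit model of the annotation process plus a regulariser that makes it fittable. As a rough illustration, the sketch below shows one common way such a setup can be instantiated for classification: a shared classifier that predicts the latent “true” class distribution, per-annotator confusion matrices that map it to each annotator's observed labels, and a trace penalty as one possible regulariser. This is a minimal sketch under assumed names and hyperparameters, not the specific method presented in the talk.

```python
# Minimal, illustrative sketch (PyTorch): a shared classifier plus learnable
# per-annotator confusion matrices, trained on noisy labels with a trace
# penalty on the confusion matrices. All class/function names, architecture
# choices, and hyperparameters are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyAnnotatorModel(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, n_annotators: int):
        super().__init__()
        # Base classifier estimating p(true class | input); a small MLP here.
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )
        # One unconstrained square matrix per annotator, initialised near the
        # identity; softmax over rows turns each into a confusion matrix
        # p(observed label | true class, annotator).
        self.confusion_logits = nn.Parameter(
            torch.stack([2.0 * torch.eye(n_classes) for _ in range(n_annotators)])
        )

    def forward(self, x):
        p_true = F.softmax(self.classifier(x), dim=-1)       # (B, C)
        conf = F.softmax(self.confusion_logits, dim=-1)      # (A, C, C)
        # Push the true-class distribution through each annotator's confusion
        # matrix: p(observed label | input, annotator).
        p_obs = torch.einsum("bc,acd->bad", p_true, conf)    # (B, A, C)
        return p_obs, conf


def loss_fn(p_obs, conf, noisy_labels, mask, trace_weight=0.01):
    # noisy_labels: (B, A) integer labels; mask: (B, A), 1 where annotator a
    # actually labelled example b, 0 otherwise.
    nll = F.nll_loss(
        torch.log(p_obs.clamp_min(1e-8)).permute(0, 2, 1),   # (B, C, A)
        noisy_labels,
        reduction="none",
    )                                                        # (B, A)
    nll = (nll * mask).sum() / mask.sum()
    # Trace penalty on the confusion matrices: one heuristic intended to help
    # separate annotator noise from the underlying label distribution.
    trace = conf.diagonal(dim1=-2, dim2=-1).sum(-1).mean()
    return nll + trace_weight * trace
```

The row-wise softmax keeps every confusion-matrix row a valid distribution over observed labels, and the masked loss allows each example to be labelled by only a subset of annotators; the trace term is just one of several possible penalties one could use in its place.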

Date
Event
VoxelTalk, MIT CSAIL
Location
Massachusetts, US

Invited by Adrian Dalca