Poster

Self-Calibrating Vicinal Risk Minimisation for Model Calibration

Jiawei Liu · Changkun Ye · Ruikai Cui · Nick Barnes

Arch 4A-E Poster #308
Wed 19 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Model calibration, which measures the alignment between prediction accuracy and model confidence, is an important indicator of model trustworthiness. Existing dense binary classification methods, lacking proper regularisation of model confidence, are prone to over-confidence. To calibrate Deep Neural Networks (DNNs), we propose Self-Calibrating Vicinal Risk Minimisation (SCVRM), which explores the vicinity space of labeled data, where vicinal images that lie farther from a labeled image adopt its ground-truth label with decreasing label confidence. We prove that, in the logistic regression problem, SCVRM can be seen as Vicinal Risk Minimisation plus a regularisation term that penalises over-confident predictions. In practice, SCVRM is approximated with Monte Carlo sampling, drawing additional augmented training images from the vicinal distributions. Experimental results demonstrate that SCVRM can significantly enhance model calibration for different dense classification tasks on both in-distribution and out-of-distribution data. Code will be released.
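
A minimal sketch of the Monte Carlo vicinal sampling idea described above, assuming an additive Gaussian vicinal distribution and an exponential decay of label confidence with distance; the function name, the noise model, and the decay form are illustrative assumptions, not the authors' published implementation.

```python
import torch

def scvrm_vicinal_batch(images, labels, sigma=0.1, decay=5.0):
    """Hypothetical sketch of Monte Carlo vicinal sampling for SCVRM.

    A vicinal image is drawn near each labeled image (here: additive
    Gaussian noise); its soft label keeps the ground-truth class but with
    confidence that decreases as the sample moves farther away. The
    Gaussian vicinity and exponential decay are assumptions, not the
    paper's exact formulation.
    """
    noise = sigma * torch.randn_like(images)      # sample from the vicinal distribution
    vicinal_images = images + noise
    # distance of each vicinal sample from its anchor image
    dist = noise.flatten(1).norm(dim=1)
    # label confidence shrinks with distance (1.0 at the anchor itself)
    confidence = torch.exp(-decay * dist)
    # soft binary targets: ground-truth labels pulled toward 0.5 as confidence drops
    shape = (-1,) + (1,) * (labels.dim() - 1)
    soft_labels = confidence.view(shape) * (labels.float() - 0.5) + 0.5
    return vicinal_images, soft_labels
```

In such a sketch, the vicinal image-label pairs would be mixed into each training batch and optimised with the usual binary cross-entropy loss against the softened targets, so that predictions on samples far from the labeled data are discouraged from being over-confident.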
