Regularised Weights in Statistical Models
Licentiate thesis, 2021
To reduce the risk of overfitting, a common approach is to modify the loss function: either by adding a penalty on the model's flexibility, which reduces variance at the cost of increased bias, or by weighting the loss contributions from different data points to reduce the influence of harmful data.
This thesis introduces a self-maintained method to reweight different components (observations and/or parameter regularisation) of the loss function during training. With some care in the choice of model, these weights can be solved for, so that the end result is simply a modified loss function. Because of this, the resulting method can easily be combined with other regularisation techniques.
Applying the weighting technique to observations in a setting with mislabeled data yields more robust training than an unweighted model and makes it possible to detect mislabeled examples in the data.
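As a rough illustration of the mechanism (not the exact formulation used in the appended papers), the sketch below assumes per-observation weights on the probability simplex with an entropy term; the function name and the `temperature` hyperparameter are illustrative. In this setting the inner minimisation over the weights has a closed form, and substituting the optimal weights back leaves an ordinary modified loss in the model parameters only.

```python
import torch
import torch.nn.functional as F

def weighted_observation_loss(logits, targets, temperature=1.0):
    """Illustrative per-observation reweighting with an entropy term on the weights.

    For the objective  sum_i w_i * l_i + T * sum_i w_i * log(w_i)  over weights w
    on the probability simplex, the optimal weights are w_i = softmax(-l_i / T).
    Substituting them back gives a soft-min of the per-example losses, so no
    explicit weight variables need to be trained.
    """
    losses = F.cross_entropy(logits, targets, reduction="none")  # l_i per example
    # Closed-form optimal weights: high-loss (possibly mislabeled) examples
    # receive small weight, which can also be used to flag suspicious labels.
    weights = torch.softmax(-losses / temperature, dim=0)
    # Value of the objective at the optimal weights: a modified loss function.
    modified_loss = -temperature * torch.logsumexp(-losses / temperature, dim=0)
    return modified_loss, weights
```

Because the weights are a deterministic function of the per-example losses, the method reduces to training with a modified loss, which is why it combines easily with other regularisation techniques.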
When applied to the regularisation penalty, the weights reduce the bias introduced by the regularisation term while retaining some crucial attributes of the original penalty.
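The same mechanism can be sketched for a weighted L1 penalty. The snippet below is only an assumption-laden illustration of the general idea, with illustrative names and hyperparameters (`lam`, `tau`), not the exact entropy-weighted penalty derived in the thesis: entropy-regularised weights on the individual |beta_j| terms have a closed form, and the resulting penalty shrinks small coefficients much like the lasso while penalising large coefficients less, which reduces bias.

```python
import numpy as np

def entropy_weighted_l1_penalty(beta, lam=1.0, tau=1.0):
    """Illustrative weighted L1 penalty with closed-form weights.

    For  lam * sum_j w_j * |beta_j| + tau * sum_j w_j * log(w_j)  with weights w
    on the simplex, the optimal weights are w_j = softmax(-lam * |beta_j| / tau).
    Large coefficients get a smaller effective penalty (less shrinkage, less
    bias), while small coefficients keep close to the full lasso-style penalty.
    """
    scores = -lam * np.abs(beta) / tau
    shifted = scores - scores.max()                # for numerical stability
    weights = np.exp(shifted)
    weights /= weights.sum()                       # closed-form w_j
    # Objective value at the optimal weights: the modified penalty.
    penalty = -tau * (np.log(np.sum(np.exp(shifted))) + scores.max())
    return penalty, weights

# Example: the largest coefficient receives the smallest weight.
beta = np.array([5.0, 0.1, -0.2, 0.0])
pen, w = entropy_weighted_l1_penalty(beta, lam=1.0, tau=0.5)
```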
Lasso
Robust Statistics
Deep Learning
Weighted Loss
Noisy Labels
Neural Networks
Regularisation
Author
Olof Zetterqvist
Chalmers, Mathematical Sciences, Applied Mathematics and Statistics
Zetterqvist, O., Jörnsten, R., Jonasson, J. Robust Neural Network Classification via Double Regularization.
Zetterqvist, O., Jonasson, J. Entropy weighted regularisation, a general way to debias regularisation penalties.
Infrastructure
C3SE (Chalmers Centre for Computational Science and Engineering)
Subject categories
Probability Theory and Statistics
Publisher
Chalmers
Pascal (Zoom: https://chalmers.zoom.us/j/63940246794 Password: 199493)
Opponent: Jonas Wallin, Department of Statistics, Lund University