On the Robustness of Statistical Models: Entropy-based Regularisation and Sensitivity of Boolean Deep Neural Networks
Doctoral thesis, 2023

Models like deep neural networks are known to be sensitive to many different kinds of noise. Unfortunately, due to the black-box nature of these models, it is in general not known why this is the case. Here, we analyse and attack these problems from three different perspectives. The first (Paper I) concerns noise in the training labels. We introduce a regularisation scheme that accurately identifies wrongly annotated labels and, in some cases, trains the model as if the noise were not present. The second perspective (Paper II) studies the effect of regularisation, which is used to reduce variance in the estimation. Due to the bias-variance trade-off, it is hard to find an appropriate regularisation penalty and strength. We introduce a methodology that reduces the bias induced by a general regularisation penalty, bringing the estimate closer to the true value. In the final perspective (Paper III), we study the sensitivity that deep neural networks tend to have with respect to noise in their inputs, and in particular how this behaviour depends on the model architecture. These questions are studied within the framework of noise sensitivity and noise stability of Boolean functions.
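To make the first perspective concrete, the sketch below is a minimal toy illustration of training with learned per-observation weights, assuming a synthetic 2D data set, a plain logistic-regression classifier and an exponential reweighting rule with temperature tau; it is not the regularisation scheme of Paper I, only the general idea that examples with suspiciously large loss are treated as likely mislabeled and downweighted during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D binary classification data with 20% flipped labels.
n = 400
X = rng.normal(size=(n, 2))
y_clean = (X[:, 0] + X[:, 1] > 0).astype(float)
flip = rng.random(n) < 0.2                 # which labels are corrupted
y = np.where(flip, 1 - y_clean, y_clean)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(2)       # logistic-regression parameters
w = np.ones(n) / n        # per-observation weights
tau, lr = 0.5, 1.0        # reweighting temperature and learning rate

for _ in range(200):
    p = sigmoid(X @ theta)

    # Per-sample cross-entropy losses; a large loss suggests a mislabeled point.
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    # Exponential reweighting: suspected noisy labels receive small weight.
    w = np.exp(-losses / tau)
    w /= w.sum()

    # Weighted gradient step on the logistic loss.
    theta -= lr * (X.T @ (w * (p - y)))

# Flipped examples should, on average, end up with much smaller weight.
print("mean weight, clean  :", w[~flip].mean())
print("mean weight, flipped:", w[flip].mean())
```

If the reweighting works as intended, the flipped examples receive a clearly smaller average weight than the clean ones, so the classifier is effectively trained on mostly clean data, mimicking the goal of identifying wrongly annotated labels described above.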

Deep neural networks

Noise stability

Noisy labels

Noise sensitivity

Boolean functions

Regularisation

Euler, Chalmers tvärgata 3
Opponent: Professor Emeritus Timo Koski, Division of Mathematical Statistics, Kungliga tekniska högskolan, Stockholm, Sweden

Author

Olof Zetterqvist

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Zetterqvist, O., Jörnsten, R., Jonasson, J. Regularisation via observation weighting for robust classification in the presence of noisy labels.

Zetterqvist, O., Jonasson, J. Entropy weighted regularisation, a general way to debias regularisation penalties.

Jonasson, J., Steif, J., Zetterqvist, O. Noise Sensitivity and Stability of Deep Neural Networks for Binary Classification.

In recent years, much attention has been focused on machine learning and artificial intelligence, mainly due to the development and the many success stories of deep neural networks (DNNs). DNNs have been particularly prominent in applications such as image and text analysis, where models such as ChatGPT and CLIP have received much attention. However, as the use of DNNs grows, so does the need for a better understanding of how they work. Unfortunately, much is still unknown, which is why they are often regarded as black-box models. One aspect that needs further clarification is why DNNs tend to be so sensitive to noise. In this thesis, we study this sensitivity from different perspectives. First, we study sensitivity to noise in the training labels and present an algorithm that identifies errors and adapts the training accordingly. Second, we study how one can reduce the bias that arises as a by-product of different regularisation penalties. Finally, we look at the asymptotic properties of DNNs with respect to noise in the input domain, studied from the perspective of noise sensitivity and stability of Boolean functions.
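To illustrate the Boolean-function framework used in the final part, recall the standard definitions: for a function f on {-1, 1}^n, the noise sensitivity at level eps is the probability that f(x) differs from f(x_eps), where x is uniformly random and x_eps re-randomises each coordinate independently with probability eps; noise stability is the complementary notion. The sketch below is only an illustration of these definitions, not code from Paper III: it estimates the probability by Monte Carlo for two classical functions, majority (noise stable) and parity (noise sensitive).

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_sensitivity(f, n, eps, trials=20000):
    """Monte Carlo estimate of P(f(x) != f(x_eps)) for x uniform on {-1,1}^n,
    where x_eps re-randomises each coordinate independently with prob. eps."""
    x = rng.choice([-1, 1], size=(trials, n))
    resample = rng.random((trials, n)) < eps
    fresh = rng.choice([-1, 1], size=(trials, n))
    x_eps = np.where(resample, fresh, x)
    return np.mean(f(x) != f(x_eps))

def majority(x):
    return np.sign(x.sum(axis=1))   # n is odd, so the sum is never zero

def parity(x):
    return np.prod(x, axis=1)

n, eps = 101, 0.1
print("majority:", noise_sensitivity(majority, n, eps))  # well below 1/2
print("parity  :", noise_sensitivity(parity, n, eps))    # close to 1/2
```

With eps = 0.1, the parity estimate is essentially 1/2 (a coin flip under noise), while the majority estimate stays well below that. This contrast between functions, here governed by their structure, is the kind of behaviour Paper III investigates for deep neural networks as a function of their architecture.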

Foundations

Basic sciences

Subject categories

Communication Systems

Probability Theory and Statistics

Computer Systems

ISBN

978-91-7905-897-5

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5363

Publisher

Chalmers


More information

Last updated

2023-08-07