Slope and Generalization Properties of Neural Networks
Paper in proceedings, 2022

Neural networks are highly successful tools in, for example, advanced classification. From a statistical point of view, fitting a neural network can be seen as a kind of regression, where we seek a function from the input space to a space of classification probabilities that follows the 'general' shape of the data but avoids overfitting by avoiding memorization of individual data points. In statistics, this can be done by controlling the geometric complexity of the regression function. We propose to do something similar when fitting neural networks by controlling the slope of the network. After defining the slope and discussing some of its theoretical properties, we show empirically, in examples using ReLU networks, that the distribution of the slope of a well-trained neural network classifier is generally independent of the width of the layers in a fully connected network, and that the mean of the distribution has only a weak dependence on the model architecture in general. We discuss possible applications of the slope concept, such as using it as part of the loss function or stopping criterion during network training, or for ranking data sets in terms of their complexity.
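
The abstract does not reproduce the paper's formal definition of the slope. One natural reading for a ReLU network, which computes a piecewise affine function, is that the slope at an input x is the operator norm of the network's Jacobian there. The sketch below estimates an empirical slope distribution under that assumption; the architecture, the PyTorch tooling, and the spectral-norm choice are illustrative assumptions, not the authors' exact setup.

```python
# A minimal sketch of estimating the "slope" of a ReLU network, assuming
# (hypothetically) that the slope at a point x is the operator (spectral)
# norm of the network's Jacobian at x.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small fully connected ReLU classifier (hypothetical architecture).
net = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),  # 3 classification logits
)

def slope(x: torch.Tensor) -> torch.Tensor:
    """Operator norm of the network's Jacobian at a single input x."""
    jac = torch.autograd.functional.jacobian(net, x)  # shape: (3, 10)
    return torch.linalg.matrix_norm(jac, ord=2)       # largest singular value

# Empirical slope distribution over a batch of random inputs; the paper
# studies how this distribution behaves for well-trained classifiers.
xs = torch.randn(256, 10)
slopes = torch.stack([slope(x) for x in xs])
print(f"mean slope: {slopes.mean():.3f}, std: {slopes.std():.3f}")
```

A batch statistic such as slopes.mean() could, as the abstract suggests, be added to the training loss as a regularization term or monitored as a stopping criterion; this usage is the abstract's proposal, while the computation above is only one plausible realization of it.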

Authors

Anton Johansson

Göteborgs universitet

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Niklas Engsner

Chalmers, Computer Science and Engineering, Data Science

Claes Strannegård

Chalmers, Computer Science and Engineering, Data Science and AI

Petter Mostad

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Göteborgs universitet


9781665471268 (ISBN)

34th Workshop of the Swedish Artificial Intelligence Society, SAIS 2022, Stockholm, Sweden

Subject categories

Other Computer and Information Science

Communication Systems

DOI

10.1109/SAIS55783.2022.9833034

More information

Last updated

2024-01-03