Characterizing Piecewise Linear Neural Networks
Licentiate thesis, 2022

Neural networks that use piecewise linear transformations between layers have in many regards become the default network type across a wide range of applications. Their favourable training dynamics and generalization performance, largely irrespective of the nature of the problem, have led these networks to achieve state-of-the-art results on a diverse set of tasks. Yet even though the efficacy of these networks has been established, their intrinsic behaviour and properties remain poorly understood. Little is known about how these functions evolve during training, how they behave at initialization, and how all of this relates to the architecture of the network. Exploring and detailing these properties is not only of theoretical interest; it can also aid in developing new schemes and algorithms that further improve network performance. In this thesis we therefore seek to further explore and characterize these properties. We prove theoretically how the local properties of piecewise linear networks vary at initialization, and we explore empirically how more complex properties behave during training. We use these results to reason about which intrinsic properties are associated with generalization performance and to develop new regularization schemes. We further substantiate the empirical success of piecewise linear networks by showing how they can solve two tasks relevant to the safety and effectiveness of processes in the automotive industry.

Keywords: piecewise linear, neural network, machine learning, automotive applications

Room: Pascal. Zoom pw: 416891
Opponent: Associate Professor, Raazesh Sainudiin, Uppsala University

Author

Anton Johansson

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Does the dataset meet your expectations? Explaining sample representation in image data

Proceedings of the 32nd Benelux Conference, BNAIC/Benelearn 2020 (2020), pp. 194-208

Paper in proceeding

Slope and generalization properties of neural networks

SilGAN: Generating driving maneuvers for scenario-based software-in-the-loop testing

Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Testing, AITest 2021 (2021), pp. 65-72

Paper in proceeding

Improved Spectral Norm Regularization for Neural Networks

Subject Categories

Computer Engineering

Other Mathematics

Computer Science

Publisher

Chalmers

Online

More information

Latest update

6/29/2022