Fast-Rate Loss Bounds via Conditional Information Measures with Applications to Neural Networks
Paper in proceedings, 2021

We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. Drawing on the framework of Steinke and Zakynthinou (2020), our approach leads to bounds that depend on the conditional information density between the output hypothesis and the choice of the training set, given a larger set of data samples from which the training set is formed. Furthermore, the bounds pertain to the average test loss as well as to its tail probability, both for the PAC-Bayesian and the single-draw settings. If the conditional information density is bounded uniformly in the size n of the training set, our bounds decay as 1/n. This is in contrast with the tail bounds involving conditional information measures available in the literature, which have a less benign 1/√n dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to nonvacuous estimates of the test loss achievable with some neural network architectures trained on MNIST and Fashion-MNIST.
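To make the rate comparison concrete, the LaTeX sketch below shows the general shape such a fast-rate bound can take; it is an illustrative placeholder rather than the paper's exact theorem, and the constants c_1 and c_2, the confidence level δ, the supersample \widetilde{Z}, the training-set selection variable S, and the conditional information density \imath(W; S \mid \widetilde{Z}) are assumed notation.

% Schematic fast-rate tail bound (illustrative only, not the paper's exact statement):
% with probability at least 1 - \delta over the draw of (\widetilde{Z}, S, W),
\[
  L_{\text{test}}(W) \;\le\; c_1\, L_{\text{train}}(W)
  \;+\; \frac{c_2}{n}\left( \imath(W; S \mid \widetilde{Z}) + \log\frac{1}{\delta} \right).
\]
% By contrast, earlier tail bounds based on conditional information measures scale as
% \sqrt{\big(\imath(W; S \mid \widetilde{Z}) + \log(1/\delta)\big)/n}, i.e., as 1/\sqrt{n}.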

Authors

Fredrik Hellström

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Giuseppe Durisi

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

IEEE International Symposium on Information Theory - Proceedings

2157-8095 (ISSN)

Vol. 2021-July, pp. 952-957
9781538682098 (ISBN)

2021 IEEE International Symposium on Information Theory, ISIT 2021
Virtual, Melbourne, Australia

Subject Categories

Other Computer and Information Science

Probability Theory and Statistics

Computer Science

DOI

10.1109/ISIT45174.2021.9517731

More information

Latest update

9/27/2021