Fast-Rate Loss Bounds via Conditional Information Measures with Applications to Neural Networks
Paper in proceedings, 2021

We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. Drawing from Steinke and Zakynthinou (2020), this framework leads to bounds that depend on the conditional information density between the output hypothesis and the choice of the training set, given a larger set of data samples from which the training set is formed. Furthermore, the bounds pertain to the average test loss as well as to its tail probability, both for the PAC-Bayesian and the single-draw settings. If the conditional information density is bounded uniformly in the size n of the training set, our bounds decay as 1/n. This is in contrast with the tail bounds involving conditional information measures available in the literature, which have a less benign 1/√n dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to nonvacuous estimates of the test loss achievable with some neural network architectures trained on MNIST and Fashion-MNIST.
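To illustrate the rate improvement described in the abstract, the following sketch contrasts the generic shapes of the two regimes in the notation of Steinke and Zakynthinou (2020): a supersample Z̃ of data points, membership variables S selecting the training set, and output hypothesis W. The constants and the use of conditional mutual information I(W; S | Z̃) in place of the paper's conditional information density are illustrative simplifications, not the paper's exact bounds.

```latex
% Slow-rate (square-root) bound, as in prior conditional-information bounds:
\[
\mathbb{E}\bigl[\mathrm{gen}(W)\bigr]
  \;\le\; \sqrt{\frac{2\, I(W; S \mid \tilde{Z})}{n}}
  \qquad \text{(rate } 1/\sqrt{n}\text{)}
\]
% Fast-rate regime: if the conditional information measure is bounded
% uniformly in n, bounds of the following order become possible:
\[
\mathbb{E}\bigl[\mathrm{gen}(W)\bigr]
  \;\lesssim\; \frac{I(W; S \mid \tilde{Z})}{n}
  \qquad \text{(rate } 1/n\text{)}
\]
```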


Fredrik Hellström

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Giuseppe Durisi

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

IEEE International Symposium on Information Theory - Proceedings

2157-8095 (ISSN)

Vol. 2021-July, pp. 952-957
9781538682098 (ISBN)

2021 IEEE International Symposium on Information Theory, ISIT 2021
Virtual, Melbourne, Australia


Other Computer and Information Science

Probability Theory and Statistics

Computer Sciences


