Generalization Error Bounds via mth Central Moments of the Information Density
Paper in proceedings, 2020

We present a general approach to deriving bounds on the generalization error of randomized learning algorithms. Our approach can be used to obtain bounds on the average generalization error as well as bounds on its tail probabilities, both for the case in which a new hypothesis is randomly generated every time the algorithm is used, as often assumed in the probably approximately correct (PAC)-Bayesian literature, and for the single-draw case, in which the hypothesis is extracted only once. For the latter scenario, we present a novel bound that is explicit in the central moments of the information density. The bound reveals that the higher the order of the information density moment that can be controlled, the milder the dependence of the generalization bound on the desired confidence level. Furthermore, we use tools from binary hypothesis testing to derive a second bound, which is explicit in the tail of the information density. This bound confirms that a fast decay of the tail of the information density yields a more favorable dependence of the generalization bound on the confidence level.
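To make the moment-based mechanism in the abstract concrete, the following LaTeX sketch works out the standard Markov-inequality step that underlies bounds of this kind. It is an illustrative derivation under generic assumptions, not the paper's exact statement; the symbols for the information density, its mth central moment, and the confidence level are introduced here for illustration.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch: why controlling a higher central moment of the information
% density softens the dependence on the confidence level \delta.
Let
\begin{equation}
  \imath(w, z^n) = \log \frac{\mathrm{d}P_{W Z^n}}{\mathrm{d}(P_W \otimes P_{Z^n})}(w, z^n)
\end{equation}
denote the information density between the hypothesis $W$ and the
training data $Z^n$, and let
\begin{equation}
  \mu_m = \mathbb{E}\!\left[ \bigl\lvert \imath(W, Z^n) - \mathbb{E}[\imath(W, Z^n)] \bigr\rvert^m \right]
\end{equation}
be its $m$th central moment. Markov's inequality applied to
$\lvert \imath - \mathbb{E}[\imath] \rvert^m$ yields, for every $\delta \in (0,1)$,
\begin{equation}
  \Pr\!\left[ \imath(W, Z^n) \ge \mathbb{E}[\imath(W, Z^n)] + (\mu_m/\delta)^{1/m} \right] \le \delta .
\end{equation}
Hence, with probability at least $1-\delta$, the information density
exceeds its mean by at most $(\mu_m/\delta)^{1/m}$, a term whose growth
as $\delta \to 0$ becomes milder as the controllable moment order $m$
increases.
\end{document}

Presumably, this is why controlling higher-order moments pays off: the confidence term grows polynomially as $\delta^{-1/m}$, and in regimes where all moments are controlled (e.g., sub-Gaussian information density), optimizing over $m$ recovers a logarithmic-type dependence on $1/\delta$, consistent with the abstract's claim.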

Authors

Fredrik Hellström

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Giuseppe Durisi

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

IEEE International Symposium on Information Theory - Proceedings

2157-8095 (ISSN)

Vol. 2020-June, pp. 2741-2746, Article no. 9174475

2020 IEEE International Symposium on Information Theory (ISIT 2020), Los Angeles, USA

INNER: information theory of deep neural networks

Chalmers AI Research Centre (CHAIR), 2019-01-01 -- 2021-12-31.

Subject categories

Probability theory and statistics

Signal processing

Discrete mathematics

DOI

10.1109/ISIT44484.2020.9174475

More information

Last updated

2021-12-20