Controlled Descent Training
Licentiate thesis, 2023

In this work, a novel, model-based artificial neural network (ANN) training method is developed, supported by optimal control theory. The method augments training labels in order to robustly guarantee training-loss convergence and to improve the training convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training, where the convergence of the training loss is controlled. First, we capture the training behavior with the help of empirical Neural Tangent Kernels (NTK) and borrow tools from systems and control theory to analyze both the local and global training dynamics (e.g., stability and reachability). Second, we propose to dynamically alter the gradient descent training mechanism via fictitious labels as control inputs and an optimal state feedback policy. In this way, we enforce locally H2-optimal and convergent training behavior. The resulting algorithm, Controlled Descent Training (CDT), guarantees local convergence. CDT opens new possibilities in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.
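
As a rough illustration of the control-theoretic viewpoint, the sketch below simulates gradient descent in its NTK-linearized form and treats label perturbations as control inputs. The kernel K, the step size eta, and the discrete-time LQR gain are all illustrative stand-ins chosen to keep the example self-contained; they are not the thesis's H2 design.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Minimal sketch (not the thesis implementation): NTK-linearized
# gradient descent with labels treated as control inputs.
# Under the linearization f_{t+1} = f_t - eta * K (f_t - y_t),
# the output error e_t = f_t - y* obeys
#   e_{t+1} = (I - eta*K) e_t + eta*K v_t,   v_t = y_t - y*,
# i.e. a discrete-time linear system (A, B) = (I - eta*K, eta*K).

rng = np.random.default_rng(0)
n = 8                      # number of training points (toy size, hypothetical)
eta = 0.05                 # learning rate (hypothetical)

# Stand-in empirical NTK Gram matrix: symmetric positive definite.
J = rng.standard_normal((n, n))
K = J @ J.T / n + 1e-3 * np.eye(n)

A = np.eye(n) - eta * K    # uncontrolled error dynamics
B = eta * K                # label perturbation enters through the kernel

# Discrete-time LQR used here as a stand-in for the thesis's
# H2-optimal state feedback policy.
Q, R = np.eye(n), np.eye(n)
P = solve_discrete_are(A, B, Q, R)
F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # v_t = -F @ e_t

e = rng.standard_normal(n)              # initial output error
for t in range(200):
    v = -F @ e                          # fictitious-label control input
    e = A @ e + B @ v                   # controlled descent step
print("final error norm:", np.linalg.norm(e))
```

In the closed loop, the fictitious-label input v_t = -F e_t shapes the error dynamics into e_{t+1} = (A - B F) e_t, so convergence is enforced by the feedback design rather than left to the raw gradient flow.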

Keywords: optimal control, convergent learning, label selection, label augmentation, gradient descent training, neural tangent kernel

EB, Hörsalsvägen 11
Opponent: Prof. Richard Pates, Automatic Control Division, Lund Tekniska Högskola, Sweden

Author

Viktor Andersson

Chalmers, Electrical Engineering, Systems and Control

Robustly and Optimally Controlled Training of Neural Networks I (OCTON I)

Centiro, 2019-10-15 -- 2023-10-15.

Subject categories

Control Engineering

Publisher

Chalmers


More information

Last updated

2024-01-25