Finite-Time Lyapunov Exponents of Deep Neural Networks
Journal article, 2024

We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep feed-forward networks and dynamical systems, where the growth or decay of local perturbations is characterized by finite-time Lyapunov exponents. We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems. Ridges of large positive exponents divide input space into different regions that the network associates with different classes. These ridges visualize the geometry that deep networks construct in input space, shedding light on the fundamental mechanisms underlying their learning capabilities.
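
To make the quantity in the abstract concrete, here is a minimal sketch (not the authors' code) of the maximal finite-time Lyapunov exponent of a feed-forward network: lambda_max = (1/L) ln sigma_max(J), where J is the input-output Jacobian accumulated over L layers and sigma_max its largest singular value. The network size, random Gaussian weights, and tanh activations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 10, 20  # depth and width (assumed for illustration)
# Random Gaussian weights, variance 1/n (an assumed initialization)
weights = [rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) for _ in range(L)]

def max_ftle(x):
    """Maximal finite-time Lyapunov exponent at input x:
    per-layer growth rate of the largest singular value of the
    Jacobian of the network map."""
    J = np.eye(n)
    for W in weights:
        z = W @ x
        x = np.tanh(z)
        # Jacobian of this layer is diag(tanh'(z)) @ W; chain it onto J
        J = np.diag(1.0 - np.tanh(z) ** 2) @ W @ J
    sigma_max = np.linalg.svd(J, compute_uv=False)[0]
    return np.log(sigma_max) / L

print(max_ftle(rng.normal(size=n)))
```

Evaluating max_ftle on a grid of inputs would reveal the ridge structures described above: inputs near a decision boundary tend to show large positive exponents, since small perturbations there are strongly amplified.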

Lyapunov methods

Feedforward neural networks

Differential equations

Lyapunov functions

Deep neural networks

Authors

L. Storm

University of Gothenburg

Hampus Linander

Chalmers, Mathematical Sciences, Algebra and Geometry

University of Gothenburg

J. Bec

CNRS

Université de recherche Paris Sciences et Lettres

K. Gustavsson

University of Gothenburg

Bernhard Mehlig

University of Gothenburg

Physical Review Letters

ISSN: 0031-9007; eISSN: 1079-7114

Vol. 132, Issue 5, Article 057301

Subject categories

Computational mathematics

Systems science

DOI

10.1103/PhysRevLett.132.057301

PubMed

38364126

More information

Last updated

2024-02-16