Homogeneous vector bundles and G-equivariant convolutional neural networks
Journal article, 2022

G-equivariant convolutional neural networks (GCNNs) are a geometric deep learning model for data defined on a homogeneous G-space M. GCNNs are designed to respect the global symmetry of M, thereby facilitating learning. In this paper, we analyze GCNNs on homogeneous spaces M = G/K for unimodular Lie groups G and compact subgroups K ≤ G. We demonstrate that homogeneous vector bundles are the natural setting for GCNNs. We also use reproducing kernel Hilbert spaces (RKHS) to obtain a sufficient criterion for expressing G-equivariant layers as convolutional layers. Finally, stronger results are obtained for some groups via a connection between RKHS and bandwidth.
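The following is a minimal illustrative sketch, not code from the paper: it shows the basic idea of a G-equivariant "lifting" correlation for the finite rotation group C4 acting on planar images, together with a numerical check that rotating the input rotates each feature map and cyclically permutes the group index. The image size, filter size, and the use of SciPy's correlate2d are illustrative choices.

```python
# Illustrative sketch of a C4-equivariant lifting correlation (not from the paper).
import numpy as np
from scipy.signal import correlate2d


def lift_c4(image, psi):
    """Correlate `image` with all four 90-degree rotations of the filter `psi`.

    Returns an array of shape (4, H, W): one feature map per element of C4,
    a discrete analogue of [f * psi](r, x) = sum_y f(x + y) psi(r^{-1} y).
    """
    return np.stack([
        correlate2d(image, np.rot90(psi, k), mode="same")  # rotated filter copy
        for k in range(4)
    ])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.standard_normal((8, 8))      # square input image
    psi = rng.standard_normal((3, 3))    # odd-sized filter keeps the check exact

    out = lift_c4(f, psi)                # features on C4 x Z^2
    out_rot = lift_c4(np.rot90(f), psi)  # features of the rotated input

    # Equivariance: rotating the input rotates each feature map and
    # cyclically permutes the C4 channel index.
    for k in range(4):
        assert np.allclose(out_rot[k], np.rot90(out[(k - 1) % 4]))
    print("C4-equivariance check passed.")
```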

Convolutional neural networks

Fiber bundles

Geometry

Symmetry

Equivariance

Deep learning

Author

Jimmy Aronsson

Chalmers, Mathematical Sciences, Algebra and geometry

University of Gothenburg

Sampling Theory, Signal Processing, and Data Analysis

2730-5716 (ISSN), 2730-5724 (eISSN)

Vol. 20, Issue 2, Article 10

Subject Categories

Algebra and Logic

Geometry

Mathematical Analysis

DOI

10.1007/s43670-022-00029-3

More information

Latest update

7/26/2022