G-equivariant convolutional neural networks
Licentiate thesis, 2021
Geometric deep learning aims to reduce the amount of information that neural networks have to learn by taking advantage of geometric properties in data. In particular, equivariant neural networks use (local or global) symmetry to reduce the complexity of a learning task.
In this thesis, we investigate a popular deep learning model for tasks exhibiting global symmetry: G-equivariant convolutional neural networks (GCNNs). We analyze the mathematical foundations of GCNNs and discuss where this model fits in the broader scheme of equivariant learning. More specifically, we discuss a general framework for equivariant neural networks using notions from gauge theory, and then show how GCNNs arise from this framework in the presence of global symmetry. We also characterize convolutional layers, the main building blocks of GCNNs, in terms of more general G-equivariant layers that preserve the underlying global symmetry.
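The equivariance property underlying GCNNs can be illustrated in the simplest global-symmetry setting: ordinary convolution on a cyclic group, where translating the input and then convolving gives the same result as convolving and then translating the output. The following is a minimal NumPy sketch of this check for the cyclic group Z_n; the function name and setup are illustrative, not the constructions used in the thesis.

```python
import numpy as np

def circular_conv(signal, kernel):
    # Circular cross-correlation on Z_n:
    # output[i] = sum_j kernel[j] * signal[(i + j) mod n]
    n = len(signal)
    return np.array([
        sum(kernel[j] * signal[(i + j) % n] for j in range(len(kernel)))
        for i in range(n)
    ])

rng = np.random.default_rng(0)
signal = rng.standard_normal(8)
kernel = rng.standard_normal(3)
shift = 3

# Translate the input, then convolve ...
conv_of_shifted = circular_conv(np.roll(signal, shift), kernel)
# ... versus convolve first, then translate the output.
shifted_conv = np.roll(circular_conv(signal, kernel), shift)

# Equivariance: the two computations agree.
assert np.allclose(conv_of_shifted, shifted_conv)
```

The same commutation property, with Z_n replaced by a general group G acting on a homogeneous space, is what characterizes the convolutional layers studied in the thesis.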
deep learning
induced representations
homogeneous vector bundles
convolutional neural networks
homogeneous spaces
symmetry
Author
Jimmy Aronsson
Chalmers, Mathematical Sciences, Algebra and Geometry
Aronsson, J. Homogeneous vector bundles and G-equivariant convolutional neural networks
Subject categories
Mathematics
Geometry
Foundations
Basic sciences
Publisher
Chalmers
Euler, Skeppsgränd 3.
Opponent: Prof. Fredrik Kahl, Department of Electrical Engineering, Chalmers University of Technology