Homogeneous vector bundles and G-equivariant convolutional neural networks
Geometric deep learning aims to reduce the amount of information that neural networks have to learn by taking advantage of geometric structure in data. In particular, equivariant neural networks exploit local or global symmetry to reduce the complexity of a learning task.
In this thesis, we investigate a popular deep learning model for tasks exhibiting global symmetry: G-equivariant convolutional neural networks (GCNNs). We analyze the mathematical foundations of GCNNs and discuss where this model fits in the broader scheme of equivariant learning. More specifically, we discuss a general framework for equivariant neural networks using notions from gauge theory, and then show how GCNNs arise from this framework in the presence of global symmetry. We also characterize convolutional layers, the main building blocks of GCNNs, in terms of more general G-equivariant layers that preserve the underlying global symmetry.
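As a minimal numerical illustration of the group convolutions on which GCNNs are built (an illustrative sketch, not taken from the thesis): for the cyclic group C4, group convolution of functions on the group reduces to circular convolution, and equivariance means that convolving a shifted signal gives the shifted result of convolving the original signal.

```python
import numpy as np

def group_conv(f, psi):
    # Group convolution on the cyclic group C_n:
    # (f * psi)(g) = sum_h f(h) psi(h^{-1} g), where h^{-1} g = (g - h) mod n.
    n = len(f)
    return np.array([sum(f[h] * psi[(g - h) % n] for h in range(n))
                     for g in range(n)])

def translate(f, g):
    # Left regular action: (L_g f)(x) = f(g^{-1} x) = f[(x - g) mod n].
    return np.roll(f, g)

# Example signal and filter on C4 (values chosen arbitrarily).
f = np.array([1.0, 2.0, 3.0, 4.0])
psi = np.array([0.5, -1.0, 0.0, 2.0])
g = 1

# Equivariance: convolving the translated signal equals
# translating the convolved signal.
lhs = group_conv(translate(f, g), psi)
rhs = translate(group_conv(f, psi), g)
assert np.allclose(lhs, rhs)
```

The same identity characterizes convolutional layers in the general setting: a layer is a group convolution precisely when it commutes with the group action in this way.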
Keywords: homogeneous vector bundles, convolutional neural networks
Chalmers, Mathematical Sciences, Algebra and Geometry
Aronsson, J. Homogeneous vector bundles and G-equivariant convolutional neural networks
Chalmers University of Technology
Euler, Skeppsgränd 3.
Opponent: Prof. Fredrik Kahl, Department of Electrical Engineering, Chalmers University of Technology