Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency
Paper in proceeding, 2025

Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while maintaining a number of floating-point operations (FLOPs) per parameter comparable to standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. This decomposition makes the linear layers block-diagonal, requiring half as many FLOPs. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable symmetry-aware architectures.
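As an illustration of the block-diagonal structure described above, the sketch below implements a flopping-equivariant linear layer in PyTorch. It is not the authors' implementation; the names (BlockDiagLinear, d_sym, d_anti) are illustrative, and the layer simply applies separate weight blocks to the mirror-symmetric and mirror-antisymmetric feature halves.

# Minimal sketch (not the authors' code) of a flopping-equivariant linear layer.
# The flopping group Z_2 has two irreps: the trivial one (symmetric features,
# unchanged by a flop) and the sign one (antisymmetric features, which change
# sign). An equivariant linear map cannot mix the two, so it is block-diagonal:
# two (d/2 x d/2) blocks instead of one (d x d) matrix, i.e. half the FLOPs
# for the same total width.
import torch
import torch.nn as nn

class BlockDiagLinear(nn.Module):
    def __init__(self, d_sym: int, d_anti: int):
        super().__init__()
        # Block acting on the mirror-symmetric features (bias allowed).
        self.w_sym = nn.Linear(d_sym, d_sym, bias=True)
        # Block acting on the mirror-antisymmetric features
        # (a bias term here would break equivariance on the sign irrep).
        self.w_anti = nn.Linear(d_anti, d_anti, bias=False)

    def forward(self, x_sym: torch.Tensor, x_anti: torch.Tensor):
        return self.w_sym(x_sym), self.w_anti(x_anti)

# Equivariance check: flopping the input (negating the antisymmetric part)
# and then applying the layer equals applying the layer and then flopping
# the output.
layer = BlockDiagLinear(4, 4)
x_s, x_a = torch.randn(2, 4), torch.randn(2, 4)
y_s, y_a = layer(x_s, x_a)
y_s_flopped, y_a_flopped = layer(x_s, -x_a)
assert torch.allclose(y_s, y_s_flopped)
assert torch.allclose(y_a, -y_a_flopped)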


Authors

Georg Bökman

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

David Nordström

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Fredrik Kahl

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Proceedings of Machine Learning Research

2640-3498 (eISSN)

Vol. 267, pp. 4823-4838

42nd International Conference on Machine Learning, ICML 2025
Vancouver, Canada

Subject Categories (SSIF 2025)

Communication Systems

Computer Sciences

More information

Latest update

12/11/2025