Data augmentation with Möbius transformations
Journal article, 2021

Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains highly adaptable to evolving model architectures and varying amounts of data, in particular extremely scarce training data. In this paper, we present a novel method of applying Möbius transformations to augment input images during training. Möbius transformations are bijective conformal maps that generalize image translation to operate over complex inversion in pixel space. As a result, Möbius transformations can operate at the sample level and preserve data labels. We show that including Möbius transformations during training improves generalization over prior sample-level data augmentation techniques such as cutout and standard crop-and-flip transformations, most notably in low data regimes.
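The transformation underlying the method is the Möbius map f(z) = (a*z + b) / (c*z + d) with complex parameters satisfying a*d - b*c != 0, applied to pixel coordinates viewed as points in the complex plane. The NumPy sketch below shows one plausible way to warp an image with such a map via backward (inverse) sampling; the function name mobius_augment, the nearest-neighbour interpolation, and the edge handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mobius_augment(image, a, b, c, d):
    """Backward-warp an image through the Mobius map f(z) = (a*z + b) / (c*z + d).

    Illustrative sketch only: the parameterization, nearest-neighbour sampling,
    and edge handling are assumptions, not taken from the paper. a, b, c, d are
    complex numbers with a*d - b*c != 0, so the map is bijective (conformal) on
    the extended complex plane.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    z = xs + 1j * ys                      # output pixel coordinates as complex numbers
    # Inverse map f^{-1}(w) = (d*w - b) / (-c*w + a): pull each output pixel
    # back to the source location it came from (backward warping).
    denom = -c * z + a
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)  # avoid division by zero
    src = (d * z - b) / denom
    src_x = np.clip(np.round(src.real).astype(int), 0, w - 1)
    src_y = np.clip(np.round(src.imag).astype(int), 0, h - 1)
    return image[src_y, src_x]

# Usage example: a mild Mobius warp of a random RGB "image" (parameters are arbitrary).
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
aug = mobius_augment(img, a=1 + 0j, b=3 + 3j, c=0.002j, d=1 + 0j)
```

Because the map is bijective and label-preserving, the warped image keeps the label of the original sample, which is what allows it to be used as a drop-in augmentation alongside crop-and-flip or cutout.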

Authors

Sharon Zhou

Stanford University

Jiequan Zhang

Stanford University

Hang Jiang

Stanford University

Torbjörn Lundh

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

University of Gothenburg

Andrew Ng

Stanford University

Machine Learning: Science and Technology

2632-2153 (ISSN)

Vol. 2, Issue 2, 025016

Subject Categories

Other Computer and Information Science

Computer Science

Computer Vision and Robotics (Autonomous Systems)

Mathematical Analysis

Areas of Advance

Information and Communication Technology

Life Science Engineering (2010-2018)

Roots

Basic sciences

DOI

10.1088/2632-2153/abd615

Related datasets

URI: https://iopscience.iop.org/article/10.1088/2632-2153/abd615/meta

DOI: 10.1088/2632-2153/abd615

More information

Latest update

12/15/2022