Operator compression with deep neural networks
Preprint, 2021

This paper studies the compression of partial differential operators using neural networks. We consider a family of operators, parameterized by a potentially high-dimensional space of coefficients that may vary over a wide range of scales. Based on existing methods that compress such a multiscale operator to a finite-dimensional sparse surrogate model on a given target scale, we propose to directly approximate the coefficient-to-surrogate map with a neural network. We emulate local assembly structures of the surrogates and thus only require a moderately sized network that can be trained efficiently in an offline phase. This enables large compression ratios, and the online computation of a surrogate via simple forward passes through the network is substantially faster than classical numerical upscaling approaches. We apply the abstract framework to a family of prototypical second-order elliptic heterogeneous diffusion operators as an illustrative example.
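To make the idea of a learned coefficient-to-surrogate map with a local assembly structure concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it assumes a 1D toy coarse mesh, a fixed patch size, and an arbitrary small MLP (all names such as LocalSurrogateNet, assemble_surrogate, PATCH_CELLS, and LOCAL_DOFS are illustrative). A network maps each local coefficient patch to a small local surrogate matrix, and the local matrices are assembled into a global sparse operator, so that the online cost reduces to one forward pass per patch plus assembly.

# Minimal sketch (assumptions: 1D toy mesh, illustrative sizes and names).
import torch
import torch.nn as nn

PATCH_CELLS = 8      # fine cells per local coefficient patch (assumption)
LOCAL_DOFS = 2       # degrees of freedom of one coarse element (1D toy mesh)


class LocalSurrogateNet(nn.Module):
    """Maps a local coefficient patch to a flattened local surrogate matrix."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PATCH_CELLS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, LOCAL_DOFS * LOCAL_DOFS),
        )

    def forward(self, coeff_patch):
        # coeff_patch: (batch, PATCH_CELLS) -> (batch, LOCAL_DOFS, LOCAL_DOFS)
        out = self.net(coeff_patch)
        return out.view(-1, LOCAL_DOFS, LOCAL_DOFS)


def assemble_surrogate(local_mats, n_coarse_nodes):
    """Assemble local surrogate matrices into a global sparse operator.

    local_mats: (n_elements, LOCAL_DOFS, LOCAL_DOFS); on the 1D toy mesh,
    element e couples the coarse nodes e and e+1.
    """
    rows, cols, vals = [], [], []
    for e, mat in enumerate(local_mats):
        dofs = [e, e + 1]
        for i in range(LOCAL_DOFS):
            for j in range(LOCAL_DOFS):
                rows.append(dofs[i])
                cols.append(dofs[j])
                vals.append(mat[i, j])
    indices = torch.tensor([rows, cols])
    values = torch.stack(vals)
    return torch.sparse_coo_tensor(
        indices, values, (n_coarse_nodes, n_coarse_nodes)
    ).coalesce()


if __name__ == "__main__":
    n_elements = 16
    model = LocalSurrogateNet()
    patches = torch.rand(n_elements, PATCH_CELLS)   # one coefficient patch per coarse element
    local_mats = model(patches)                     # online phase: forward passes only
    A_surrogate = assemble_surrogate(local_mats, n_elements + 1)
    print(A_surrogate.shape, A_surrogate.values().shape)

In this sketch, the offline phase would train LocalSurrogateNet on pairs of coefficient patches and corresponding local surrogate matrices obtained from a classical compression method; the assembly step itself is the standard finite-element-style scatter of local contributions into a sparse global matrix.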

model order reduction

deep learning

numerical homogenization

neural networks

Authors

Fabian Kröpfl

University of Augsburg

Roland Maier

University of Gothenburg

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Daniel Peterseim

University of Augsburg

Subject Categories

Computational Mathematics


Latest update

9/9/2022