Nonlinear Multi-scale Super-resolution Using Deep Learning
Journal article, 2019

We propose a deep learning architecture capable of performing up to 8× single-image super-resolution. Our architecture combines an adversarial component from super-resolution generative adversarial networks (SRGANs) with a multi-scale learning component from the multiple scale super-resolution network (MSSRNet); only together can these components recover the small structures inherent in satellite images. To further improve performance, we integrate progressive growing and training into our network. This, aided by feed-forward connections that carry and enrich information from previous inputs, produces super-resolved images at scaling factors of 2, 4, and 8. To stabilize GAN training, we employ Wasserstein GANs (WGANs). Experimentally, we find that our architecture can recover small objects in satellite images during super-resolution, whereas previous methods cannot.
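The abstract describes a multi-scale generator trained with a Wasserstein objective. The sketch below (PyTorch, not the authors' code) illustrates one way such a network could expose 2×, 4×, and 8× outputs from stacked 2× stages, and how WGAN critic and generator losses are typically formed; all module names, layer choices, and hyperparameters are illustrative assumptions.

# Minimal sketch, assuming a PyTorch implementation; this is not the paper's
# actual architecture, only an illustration of stacked 2x stages with
# per-stage outputs and the standard WGAN losses.
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    """One 2x stage: convolutional features followed by pixel-shuffle upsampling."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
        )
    def forward(self, x):
        return self.body(x)

class MultiScaleGenerator(nn.Module):
    """Stacks three 2x stages; each stage also renders an RGB image, so the
    network produces super-resolved outputs at scaling factors 2, 4, and 8."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.stages = nn.ModuleList([UpsampleBlock(channels) for _ in range(3)])
        self.to_rgb = nn.ModuleList([nn.Conv2d(channels, 3, 3, padding=1) for _ in range(3)])
    def forward(self, lr):
        feats, outputs = self.head(lr), []
        for stage, to_rgb in zip(self.stages, self.to_rgb):
            feats = stage(feats)
            outputs.append(to_rgb(feats))  # 2x, then 4x, then 8x prediction
        return outputs

def wgan_losses(critic, real_hr, fake_hr):
    """Wasserstein losses: the critic widens the score gap between real and
    generated images; the generator raises the critic score of its outputs.
    (When updating the critic in practice, detach fake_hr from the generator.)"""
    critic_loss = critic(fake_hr).mean() - critic(real_hr).mean()
    gen_loss = -critic(fake_hr).mean()
    return critic_loss, gen_loss

# Example: g = MultiScaleGenerator(); sr2, sr4, sr8 = g(torch.randn(1, 3, 32, 32))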

Authors

Kenneth Tran

North Carolina State University

Ashkan Panahi

North Carolina State University

Aniruddha Adiga

North Carolina State University

Wesam Sakla

Lawrence Livermore National Laboratory

Hamid Krim

North Carolina State University

ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

1520-6149 (ISSN)

Subject categories

Telecommunications

Probability theory and statistics

Signal processing

Computer science

DOI

10.1109/ICASSP.2019.8682354

More information

Last updated

2020-05-14