Refinet: A Deep Segmentation Assisted Refinement Network for Salient Object Detection
Journal article, 2019

Deep convolutional neural networks (CNNs) have recently been applied to saliency detection with improved performance in locating salient objects, compared to conventional saliency detection based on handcrafted features. Unfortunately, due to repeated sub-sampling operations inside CNNs such as pooling and convolution, many CNN-based saliency models fail to maintain the fine-grained spatial details and boundary structures of objects. To remedy this issue, this paper proposes a novel end-to-end deep learning-based refinement model named Refinet, which is based on a fully convolutional network augmented with segmentation hypotheses. Edge-aware intermediate saliency maps are computed by segmentation-based pooling, and the two streams are then concatenated and fed into a fully convolutional network for effective fusion and refinement, leading to more precise object details and boundaries. In addition, the resolution of feature maps in the proposed Refinet is carefully designed to guarantee sufficient boundary clarity of the refined saliency output. Compared to the widely employed dense conditional random field (CRF), Refinet is able to enhance coarse saliency maps generated by existing models with more accurate spatial details, and its effectiveness is demonstrated by experimental results on seven benchmark datasets.
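
The segmentation-based pooling step described in the abstract can be pictured as follows: the coarse saliency map from an existing model is averaged within each region of a segmentation hypothesis, so the pooled values snap to region boundaries and become edge-aware. Below is a minimal Python sketch of this idea; SLIC superpixels stand in for the paper's segmentation hypotheses, and the function name segmentation_pooling is illustrative rather than the authors' implementation.

    import numpy as np
    from skimage.segmentation import slic

    def segmentation_pooling(coarse_saliency, segments):
        # Average the coarse saliency inside each segment and broadcast the
        # mean back, so saliency values align with segment boundaries.
        pooled = np.zeros_like(coarse_saliency, dtype=np.float64)
        for label in np.unique(segments):
            mask = segments == label
            pooled[mask] = coarse_saliency[mask].mean()
        return pooled

    # Usage (stand-in data): segment the RGB image with SLIC superpixels and
    # pool a coarse saliency map from any existing model over those segments.
    image = np.random.rand(64, 64, 3)    # stand-in RGB image
    coarse = np.random.rand(64, 64)      # stand-in coarse saliency map
    segments = slic(image, n_segments=100, start_label=0)
    edge_aware = segmentation_pooling(coarse, segments)

    # The coarse map and the pooled, edge-aware map would then form the two
    # streams that are concatenated and fed into the fully convolutional
    # refinement network.
    stacked = np.stack([coarse, edge_aware], axis=0)   # shape: (2, H, W)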

Keywords

Fully convolutional neural network

Salient object detection

Image segmentation

Refinement

Author

Keren Fu

Sichuan University

Qijun Zhao

Sichuan University

Irene Yu-Hua Gu

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

IEEE Transactions on Multimedia

1520-9210 (ISSN)

Vol. 21, no. 2, pp. 457-469. Article no. 8419317

Areas of Advance

Information and Communication Technology

Transport

Subject Categories

Information Science

Electrical Engineering, Electronic Engineering, Information Engineering

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/TMM.2018.2859746

More information

Latest update

4/5/2022