Performance analysis of out-of-distribution detection on trained neural networks
Journal article, 2021

Context: Deep Neural Networks (DNNs) have shown great promise in various domains, for example supporting pattern recognition in medical imagery. However, DNNs need to be tested for robustness before being deployed in safety-critical applications. One common challenge occurs when the model is exposed to data samples outside of the training data domain, which can yield outputs with high confidence despite the model having no prior knowledge of the given input. Objective: The aim of this paper is to investigate how the performance of detecting out-of-distribution (OOD) samples changes for outlier detection methods (e.g., supervisors) as DNNs improve on their training samples. Method: Supervisors are components that aim to detect out-of-distribution samples for a DNN. The experimental setup in this work compares the performance of supervisors using metrics and datasets that reflect the most common setups in related work. Four different DNNs with three different supervisors are compared at different stages of training, to detect at which point during training the performance of the supervisors begins to deteriorate. Results: We found that the outlier detection performance of the supervisors increased as the accuracy of the underlying DNN improved. However, all supervisors showed large variation in performance, even for variations of network parameters that only marginally changed the model accuracy. The results show that understanding the relationship between training results and supervisor performance is crucial for improving a model's robustness. Conclusion: Analyzing DNNs for robustness is a challenging task. The results show that variations in model parameters that have only a small effect on model predictions can have a large impact on out-of-distribution detection performance. This kind of behavior needs to be addressed when DNNs are part of a safety-critical application, and hence the necessary safety argumentation for such systems needs to be structured accordingly.
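To illustrate what a supervisor does, the sketch below implements one common baseline: rejecting an input as out-of-distribution when the DNN's maximum softmax probability falls below a threshold. This is only a minimal, self-contained example of the general idea; the abstract does not name the three supervisors evaluated in the paper, and the `baseline_supervisor` name and the 0.9 threshold are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def baseline_supervisor(logits, threshold=0.9):
    """Flag inputs whose maximum softmax probability falls below
    `threshold` as out-of-distribution (illustrative baseline)."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold  # True -> reject as OOD

# A peaked logit vector is accepted; a near-uniform one is rejected.
logits = np.array([[8.0, 1.0, 0.5],   # confident -> accepted
                   [1.1, 1.0, 0.9]])  # uncertain -> rejected as OOD
print(baseline_supervisor(logits))    # [False  True]
```

Because such a supervisor depends directly on the model's output distribution, its rejection behavior shifts as training changes the logits, which is the coupling between training progress and supervisor performance that the paper studies.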

Automotive perception

Safety-critical systems

Deep neural networks

Out-of-distribution

Robustness

Authors

Jens Henriksson

Cyber Physical Systems

Semcon

Christian Berger

University of Gothenburg

Markus Borg

RISE Research Institutes of Sweden

Lars Tornberg

Volvo Cars

Sankar Raman Sathyamoorthy

Qrtech AB

Cristofer Englund

RISE Research Institutes of Sweden

Information and Software Technology

0950-5849 (ISSN)

Vol. 130, 106409

Subject Categories

Bioinformatics (Computational Biology)

Computer Science

Computer Systems

DOI

10.1016/j.infsof.2020.106409