Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks
Paper in proceedings, 2019

Deep Learning has advanced several application areas in recent years. For non-safety-related products, adoption of AI and ML is not an issue, whereas in safety-critical applications the robustness of such approaches remains a concern. A common challenge for Deep Neural Networks (DNNs) occurs when they are exposed to previously unseen out-of-distribution samples, for which DNNs can yield high-confidence predictions despite having no prior knowledge of the input. In this paper we analyse two supervisors on two well-known DNNs under varied training setups and find that outlier detection performance improves with the quality of the training procedure. We analyse the performance of the supervisor after each epoch of the training cycle to investigate supervisor performance as the accuracy converges. Understanding the relationship between training results and supervisor performance is valuable for improving model robustness, and it indicates where more work must be done to create generalized models for safety-critical applications.
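The abstract does not specify which two supervisors were evaluated, but a common baseline supervisor for out-of-distribution detection is to threshold the network's maximum softmax probability: predictions whose top-class confidence falls below a threshold are flagged as potential outliers. A minimal sketch, assuming this baseline (the `supervisor_accepts` function and its threshold are illustrative, not the paper's actual method):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def supervisor_accepts(logits, threshold=0.9):
    """Baseline supervisor: accept a prediction only when the maximum
    softmax probability reaches `threshold`; otherwise flag the input
    as a potential out-of-distribution sample."""
    return softmax(logits).max(axis=-1) >= threshold

# A confident (peaked) prediction vs. a flat, uncertain one.
confident = np.array([9.0, 0.5, 0.1])   # max softmax ~0.9997 -> accepted
uncertain = np.array([1.0, 0.9, 1.1])   # max softmax ~0.37  -> flagged
print(supervisor_accepts(confident))  # True
print(supervisor_accepts(uncertain))  # False
```

Evaluating such a supervisor after every training epoch, as the paper describes, amounts to recomputing its outlier-detection metrics on held-out in-distribution and outlier sets with the current network weights.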

Authors

Jens Henriksson

Cyber Physical Systems

Christian Berger

Cyber Physical Systems

Markus Borg

RISE Research Institutes of Sweden

Lars Tornberg

AstraZeneca AB

Sankar Raman Sathyamoorthy

Qrtech AB

Cristofer Englund

RISE Research Institutes of Sweden

Proceedings - 45th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2019


978-1-7281-3421-5 (ISBN)

2019 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)
Kallithéa, Greece

Subject categories

Other Computer and Information Science

Bioinformatics (Computational Biology)

Computer Systems

DOI

10.1109/SEAA.2019.00026

More information

Created

2022-03-11