Evaluation of Out-of-Distribution Detection Performance on Autonomous Driving Datasets
Paper in proceedings, 2023

Safety measures need to be systematically investigated to determine to what extent they evaluate the intended performance of Deep Neural Networks (DNNs) for critical applications. Due to a lack of verification methods for high-dimensional DNNs, a trade-off is needed between accepted performance and handling of out-of-distribution (OOD) samples. This work evaluates rejecting outputs from semantic segmentation DNNs by applying a Mahalanobis distance (MD), based on the most probable class-conditional Gaussian distribution for the predicted class, as an OOD score. The evaluation covers three DNNs trained on the Cityscapes dataset and tested on four automotive datasets, and finds that classification risk can be drastically reduced at the cost of pixel coverage, even when applied to unseen datasets. The applicability of our findings will support legitimizing safety measures and motivate their usage when arguing for safe usage of DNNs in automotive perception.
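The abstract describes Mahalanobis-distance OOD scoring against class-conditional Gaussians. A minimal sketch of that general technique (not the paper's implementation; the function names, the shared-covariance choice, and the regularization constant are illustrative assumptions) could look like:

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit per-class means and a shared precision matrix from feature vectors.

    Assumes `features` is an (N, D) array of DNN feature vectors and
    `labels` an (N,) array of class indices.
    """
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Shared covariance across classes, with a small ridge term for stability.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_score(x, mean, precision):
    """Squared Mahalanobis distance of x to one class-conditional Gaussian."""
    d = x - mean
    return float(d @ precision @ d)
```

At inference time, the score for a prediction would be the distance to the Gaussian of the predicted class; outputs whose score exceeds a chosen threshold are rejected as OOD, trading pixel coverage for lower classification risk.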

automotive safety

out-of-distribution detection

semantic segmentation

Authors

Jens Henriksson

Semcon

Christian Berger

University of Gothenburg

Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering

Stig Ursing

Semcon

Markus Borg

Lund University

2023 IEEE International Conference On Artificial Intelligence Testing (AITest)

Vol. 2023, pp. 74-81

Athens, Greece

Subject Categories

Computer Science

DOI

10.1109/AITest58265.2023.00021


Latest update

9/26/2024