Practical Equivalence Testing and its Application in Synthetic Pre-Crash Scenario Validation
Paper in proceedings, 2025

The use of representative pre-crash scenarios is critical for assessing the safety impact of driving automation systems through simulation. However, a gap remains in the robust evaluation of the similarity between synthetic and real-world pre-crash scenarios and their crash characteristics, such as Delta-v and injury risk. Without proper validation, it cannot be ensured that the synthetic test scenarios adequately represent real-world driving behaviors and crash characteristics, which may lead to misleading or biased assessments. One reason for this validation gap is the lack of focus on methods to confirm that the synthetic test scenarios are practically equivalent (or, rather, 'similar enough') to real-world ones, given the assessment scope. Traditional statistical methods, like significance testing, focus on detecting differences rather than establishing equivalence; since failure to detect a difference does not imply equivalence, they are of limited applicability for validating synthetic pre-crash scenarios and crash characteristics. This study addresses this gap by proposing an equivalence testing method based on the Bayesian Region of Practical Equivalence (ROPE) framework. This method is designed to assess the practical equivalence of scenario characteristics that are most relevant for the intended assessment, making it particularly appropriate for the domain of virtual safety assessments. We first review existing equivalence testing methods. Then we propose and demonstrate the Bayesian ROPE-based method by testing the equivalence of two rear-end pre-crash datasets. Our approach focuses on the most relevant scenario characteristics, such as key pre-crash kinematics (e.g., the time point at which the following vehicle is not able to avoid a crash) and crash characteristics (e.g., Delta-v and injury risk). 
Our analysis provides insights into the practicalities and effectiveness of equivalence testing in synthetic test scenario validation and demonstrates the importance of testing for improving the credibility of synthetic data for automated vehicle safety assessment, as well as the credibility of subsequent safety impact assessments.
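The core ROPE idea described in the abstract — deciding practical equivalence by how much posterior mass for a difference falls inside a pre-specified "region of practical equivalence" — can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the function name `rope_equivalence`, the normal-approximation posterior for each group mean, and the implicit flat prior are all simplifying assumptions introduced here.

```python
import numpy as np

def rope_equivalence(x, y, rope, n_draws=10_000, seed=0):
    """Share of the posterior of (mean_x - mean_y) that lies inside the ROPE.

    Sketch only: approximates each group's posterior mean with a
    Normal(sample mean, standard error) distribution (flat prior),
    rather than fitting a full Bayesian model.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Posterior draws for each group's mean.
    post_x = rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(x.size), n_draws)
    post_y = rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(y.size), n_draws)
    diff = post_x - post_y
    # Fraction of posterior mass for the mean difference inside the ROPE.
    return float(np.mean((diff >= rope[0]) & (diff <= rope[1])))
```

A typical decision rule (following the ROPE literature) would declare practical equivalence when this fraction is close to 1, practical difference when it is close to 0, and remain undecided otherwise; the ROPE bounds themselves must come from the assessment scope, e.g. how much deviation in Delta-v is tolerable for the intended safety assessment.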

Keywords: driving automation systems; synthetic test scenario validation; safety impact assessments; equivalence testing

Authors

Jian Wu, Volvo Group
Ulrich Sander, Volvo Group
Carol Flannagan, University of Michigan
Minxiang Zhao, Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Safety
Jonas Bärgman, Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Safety

IAVVC 2025 IEEE International Automated Vehicle Validation Conference Proceedings


ISBN: 9798331525262

2025 IEEE International Automated Vehicle Validation Conference (IAVVC 2025), Baden-Baden, Germany

Subject Categories (SSIF 2025)

Probability Theory and Statistics

Computer Sciences

Vehicle and Aerospace Engineering

DOI

10.1109/IAVVC61942.2025.11219586

Latest update

12/29/2025