Risk-based Decision-making Fallacies: Why Present Functional Safety Standards Are Not Enough
Paper in proceedings, 2017
Functional safety of a system is the part of its overall safety, understood as freedom from unacceptable or unreasonable risk, that depends on the system operating correctly in response to its inputs. Functional safety elements are examined at every stage of the software development life cycle, including requirement specification, design, implementation, verification, validation, and deployment. The acceptability of risks is judged within a framework of analysis with contextual and cultural aspects, by individuals who may introduce subjectivity and misconceptions into the assessment. While functional safety standards elaborate extensively on the avoidance of unreasonable risk, they say little about avoiding unreasonable judgments of risk. Drawing on studies of common fallacies in risk perception and ethics, we present a moral-psychological analysis of functional safety standards and propose plausible improvements to the risk-related decision-making processes involved, with a focus on the notion of an acceptable residual risk. As a functional safety reference model, we use the functional safety standard ISO 26262, which addresses potential hazards caused by malfunctions of hardware and software systems within road vehicles and defines the safety measures required to achieve an acceptable level of safety. Our analysis points out the critical importance of a robust safety culture with developed countermeasures to the common fallacies in risk perception, which are not addressed by contemporary functional safety standards. We argue that functional safety standards should be complemented with an analysis of potential hazards caused by fallacies in risk perception, with countermeasures to those fallacies, and with the requirement that residual risks be explicated, motivated, and accompanied by a plan for their continuous reduction. This approach is especially important for contemporary autonomous vehicles, whose development relies on an increasing range of computational applications.
Keywords: risk-based decision making