A Quantitative Approach to Computer Security from a Dependability Perspective
Doctoral thesis, 1996
Security and dependability represent two very important attributes of modern computer systems, especially in the light of the increasing complexity and criticality of these systems. The two disciplines have traditionally been treated separately, although lately some attempts have been made to integrate them. A successful integration is still needed, however, to create the conceptual framework required to understand and solve problems of impaired security and dependability. Therefore, this thesis suggests a system-related conceptual model, in which the various aspects of security and dependability are analyzed and regrouped into a new "input-output"-related concept. The input characteristics of this new concept are interpreted in preventive terms, whereas the output characteristics are interpreted in behavioural terms with respect to the user of the system.
The logical consequence of this approach is that the measures we aim for can also be grouped into preventive measures and behavioural measures. The behavioural measures relate to the behaviour of the system as perceived by its user or, put informally, to the "output" of the system. They deal with system failures, e.g., the probability of and the magnitude of such failures, and are intended to reflect attributes such as reliability, performability and safety, but also confidentiality, although this last attribute deviates from the other three. Traditional reliability measures apply here, such as Mean Time To Failure (MTTF) and the probability of a successful mission, as do traditional reliability methods, such as Markov modelling. One problem is that the assumption of exponentially distributed transition times in Markov models may not be valid, especially in software systems and in systems where security is a concern. It is outlined how this problem can be solved by introducing phase-type assumptions. The suggested measures are intended to be of the "benchmark" type, aimed at practical design trade-offs, rather than a description of all behavioural aspects of the system.
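As a minimal sketch of how such a behavioural measure is obtained, the MTTF of an absorbing continuous-time Markov model can be computed with a single linear solve. The three-state model below and all of its rates are invented for illustration; they are not taken from the thesis:

```python
import numpy as np

# Hypothetical model: state 0 = fully operational, state 1 = degraded,
# state 2 = failed (absorbing). All rates are assumed, per hour.
lam1 = 0.01   # degradation rate, 0 -> 1
lam2 = 0.05   # failure rate,     1 -> 2
mu   = 0.50   # repair rate,      1 -> 0

# Generator matrix restricted to the transient states {0, 1}.
T = np.array([
    [-lam1,         lam1],
    [   mu, -(mu + lam2)],
])

# The mean times to absorption m satisfy T m = -1 (a standard CTMC result),
# so the MTTF from each starting state follows from one linear solve.
m = np.linalg.solve(T, -np.ones(2))
mttf = m[0]   # MTTF starting from the fully operational state
print(f"MTTF from state 0: {mttf:.1f} hours")
```

The time to absorption of such a chain is, by definition, phase-type distributed; since phase-type distributions can approximate a wide class of non-exponential distributions, enlarging the state space in this way is one route around the exponential assumption mentioned above.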
A preventive measure would describe the system's ability to avoid detrimental influence from the environment, in particular influence originating from security intrusions into the system. Thus, measures of such operational "intrusion security" would capture the intuitive notion of "the system's ability to resist attack". It has been suggested that the breach process could be modelled with the effort expended by the attacker as a variable. The effort variable is believed to be rather complex: it is assumed to encompass factors such as the education, skill and experience of the attacker, the resources used in the attacking process, and various time parameters, e.g., CPU time, on-line time and the number of man-hours spent. It is clear that empirical data would be useful for deriving a plausible probabilistic approach to this type of security modelling. Thus, two experiments, which to our knowledge are the first of their kind, were performed. In these, a group of people were permitted to perform security attacks on a given system in a controlled way. The attack process was monitored and relevant data were recorded. We thereby demonstrated that it is possible to gather intrusion data by means of such experimentation. The results of the experiments indicate that the attacking process can be split into three phases, and that most of the attacks performed can be attributed to one of these, the standard attack phase. Furthermore, by means of statistical testing, we show that it is not improbable that the attacking behaviour during this phase can be modelled by an exponential distribution.
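A sketch of such a statistical test: fit an exponential distribution to observed breach efforts by maximum likelihood and compute a one-sample Kolmogorov-Smirnov statistic against the fitted distribution. The effort values below are invented for illustration only; they are not the data recorded in the thesis experiments:

```python
import math

# Hypothetical breach-effort observations (e.g., man-hours per breach).
efforts = [0.5, 1.5, 3.0, 5.0, 10.0]
n = len(efforts)

# Maximum-likelihood fit of an exponential distribution: the MLE of the
# mean effort (= 1/lambda) is simply the sample mean.
scale = sum(efforts) / n

def cdf(x):
    """Fitted exponential CDF."""
    return 1.0 - math.exp(-x / scale)

# One-sample Kolmogorov-Smirnov statistic against the fitted CDF:
# the largest gap between the empirical and the fitted distribution.
xs = sorted(efforts)
D = max(
    max(abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
    for i, x in enumerate(xs)
)
print(f"fitted mean effort = {scale}, KS statistic D = {D:.4f}")
```

Because the mean is estimated from the same sample, D should be compared against Lilliefors-type corrected critical values (or a bootstrap) rather than the standard KS table; a small D then supports, but does not prove, the exponential hypothesis, in line with the cautious "not improbable" conclusion above.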