Program Analysis for Performance and Reliability
The increasing demand for computing power has led designers to place an ever larger number of cores on processor dies. This advance has been made possible by miniaturization and efficiency improvements in the underlying semiconductor technology. As a by-product, however, the resulting computer systems are more vulnerable to interference. Reliability has therefore become a first-order concern, addressed in both software and hardware through some form of redundancy. Redundancy, however, is detrimental to performance, since more resources are spent on re-computation. Efficient use of hardware requires software that can take full advantage of the computer system.
Compilers are responsible for translating high-level source code into efficient machine code. Transformations in the compiler can improve the performance and/or reliability of the software. Before applying such a transformation, the compiler must verify its legality and benefit through program analysis.
This thesis develops program analyses for reasoning about performance and reliability properties and shows how they synthesize information that previous approaches could not provide.
First, I present an analysis based on abstract interpretation to determine the impact of a finite number of faults. An analysis based on abstract interpretation guarantees logical soundness by construction, and I evaluate its applicability by deducing the fault susceptibility of kernels and how a program optimization affects reliability.
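To make the idea concrete, here is a minimal sketch, not the thesis's actual analysis, of how an interval abstract domain can soundly bound the effect of a bounded number of faults. The domain, the `fault` perturbation model (a fault shifts a value by at most `delta`), and all numbers are illustrative assumptions: the result of the fault-free run is joined with every run reachable under at most `k` faults.

```python
# Illustrative sketch (hypothetical domain and fault model): interval
# abstract interpretation where a transient fault may perturb a value.
# The analysis joins the fault-free result with every result reachable
# under <= k faults, over-approximating the faulty program's behaviour.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def join(self, other: "Interval") -> "Interval":
        # Least upper bound in the interval lattice.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def add(a: Interval, b: Interval) -> Interval:
    # Abstract transfer function for addition.
    return Interval(a.lo + b.lo, a.hi + b.hi)

def fault(v: Interval, delta: float) -> Interval:
    # Assumed fault model: a fault perturbs the value by at most +/- delta.
    return Interval(v.lo - delta, v.hi + delta)

def analyze(a: Interval, b: Interval, faults_left: int, delta: float) -> Interval:
    # Fault-free execution of a single addition ...
    result = add(a, b)
    # ... joined with executions where one operand was hit by a fault.
    if faults_left > 0:
        result = result.join(analyze(fault(a, delta), b, faults_left - 1, delta))
        result = result.join(analyze(a, fault(b, delta), faults_left - 1, delta))
    return result

r = analyze(Interval(1, 2), Interval(3, 4), faults_left=1, delta=10.0)
print(r)  # sound bounds covering both fault-free and single-fault runs
```

The join with the perturbed operands is what makes the result an over-approximation: every concrete outcome, with or without a fault, is contained in the final interval.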
Second, I present the fuzzy program analysis framework and show that it admits a sound approximation in the abstract interpretation framework. Fuzzy sets allow non-binary membership and, by extension, a qualitative static program analysis that can perform common-case analyses. Furthermore, the framework admits a dynamic analysis based on fuzzy control theory that refines the result of the static analysis online.
Using the framework, I show improvements to a code motion algorithm and to several classical program analyses that target performance properties.
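As an illustration of the non-binary membership idea, here is a minimal sketch under assumed semantics, not the thesis's formal framework: a fuzzy dataflow fact maps each variable to a degree in [0, 1], facts from control-flow predecessors are combined with the standard min/max t-norm and t-conorm, and an alpha-cut yields a common-case decision. The variable names and profile weights are invented for the example.

```python
# Illustrative sketch (assumed semantics): a fuzzy dataflow fact maps each
# variable to a membership degree in [0, 1] expressing how strongly a
# property (e.g. "is loop-invariant") is believed to hold.

def fuzzy_join(f: dict, g: dict) -> dict:
    # Join of two fuzzy facts: max of membership degrees (t-conorm).
    return {v: max(f.get(v, 0.0), g.get(v, 0.0)) for v in f.keys() | g.keys()}

def fuzzy_meet(f: dict, g: dict) -> dict:
    # Meet of two fuzzy facts: min of membership degrees (t-norm).
    return {v: min(f.get(v, 0.0), g.get(v, 0.0)) for v in f.keys() | g.keys()}

def alpha_cut(f: dict, alpha: float) -> set:
    # Defuzzify: keep the facts that hold strongly enough -- the common case.
    return {v for v, mu in f.items() if mu >= alpha}

# Facts flowing into a join point from two predecessors (assumed profile).
hot_path  = {"x": 0.9, "y": 0.2}
cold_path = {"x": 0.4, "y": 0.8}

merged = fuzzy_join(hot_path, cold_path)
print(alpha_cut(merged, 0.7))
```

A classical binary analysis would have to drop any fact that fails on either path; the fuzzy fact instead records how often it holds, and the cut threshold decides when optimizing for the common case is worthwhile.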
Third, I present an analysis based on geometric programming for deciding the minimal number of redundant executions of a program statement while maintaining a reliability threshold. Often, a fixed number of redundant executions per statement is employed throughout the whole program. To minimize the performance overhead, I exploit the fact that some statements are naturally more reliable, and more costly, than others. Using the analysis, I show improvements in reliability and performance overhead by tailoring the redundancy level to each statement individually.
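The optimization problem can be sketched as follows. This is an illustrative simplification, not the thesis's geometric program: assume each statement i has a per-execution failure probability p_i and cost c_i, model an undetected failure as all n_i redundant executions failing (probability p_i**n_i), and add redundancy greedily where it buys the most reliability per unit cost until the threshold R is met. All numbers are made up for the example.

```python
# Illustrative sketch (simplified model, assumed numbers): choose a
# per-statement redundancy level n_i so that overall reliability meets a
# threshold R while keeping the total cost sum(c_i * n_i) low.

import math

def reliability(p: list, n: list) -> float:
    # Probability that no statement fails undetected: an undetected failure
    # of statement i requires all n_i redundant executions to fail.
    return math.prod(1 - pi ** ni for pi, ni in zip(p, n))

def min_redundancy(p: list, c: list, R: float) -> list:
    n = [1] * len(p)  # start with a single execution of each statement
    while reliability(p, n) < R:
        # Add a copy where it gives the best reliability gain per unit cost.
        def gain(i: int) -> float:
            trial = n.copy()
            trial[i] += 1
            return (reliability(p, trial) - reliability(p, n)) / c[i]
        best = max(range(len(p)), key=gain)
        n[best] += 1
    return n

p = [0.05, 0.001, 0.02]   # assumed per-execution failure probabilities
c = [1.0, 4.0, 2.0]       # assumed per-execution costs
n = min_redundancy(p, c, R=0.999)
print(n, reliability(p, n))
```

Even this toy version shows the thesis's point: the cheap, failure-prone statement ends up with more copies than the expensive, already-reliable one, rather than every statement receiving the same fixed redundancy.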
Keywords: static/dynamic program analysis