Verifying (in-)stability in floating-point programs by increasing precision using SMT solvers
Paper in proceedings, 2013
When computing with floating-point numbers, programmers choose a certain floating-point precision (such as float or double) upfront, for each variable. However, whether the chosen precision is appropriate for the computation at hand is difficult to judge. One way to check is to increase the precision and observe whether the result of the computation changes too much; if it does, the computation with the original precision is considered 'unstable'. This effect may appear for certain inputs and not for others. With a classical testing approach, inputs that show instability can be very difficult to find. Moreover, testing can only show instability, not stability. In this paper, we present an approach, and its implementation, that can formally prove that increasing the precision causes only a limited (quantified) change of the result. Alternatively, if the computation is not stable, the method returns inputs that exhibit the instability. We use methods from program verification, connecting to a novel SMT (satisfiability modulo theories) solver for floating-point number constraints. The user augments the program P with assertions on the expected stability bound. The system then creates a new program P', a certain kind of merge of P with a higher-precision copy of P, computes the weakest precondition of P' with respect to these assertions, and feeds the resulting formula to the SMT solver. The solver then either proves stability or returns data for a test exhibiting the instability, to be used for further analysis. The implementation of the system targets a toy language but supports the IEEE standard in a realistic manner. The paper describes the method and its implementation, reports experiments, and discusses the results.
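To illustrate the kind of query involved (not the paper's toy-language implementation or its own solver), here is a minimal sketch using Z3's floating-point theory: a single expression is evaluated both in float and in double, and the solver is asked whether the two results can differ by more than a hypothetical bound for inputs in a given range. The expression, the range, and the bound 1e-6 are all illustrative assumptions.

```python
# Sketch: stability check of y = x*x - 2*x + 1 under increased precision,
# encoded as an SMT query over IEEE floating-point sorts (via Z3's FP theory).
from z3 import (FP, FPVal, Float32, Float64, RNE, Solver, Not, sat,
                fpToFP, fpMul, fpSub, fpAdd, fpAbs, fpGT, fpGEQ, fpLEQ, fpIsNaN)

rm = RNE()                                    # round to nearest, ties to even
x32 = FP('x', Float32())                      # the program's input, in float
x64 = fpToFP(rm, x32, Float64())              # the same input, widened to double

def poly(x, sort):
    """y = x*x - 2*x + 1, evaluated entirely in the given precision."""
    two = FPVal(2.0, sort)
    one = FPVal(1.0, sort)
    return fpAdd(rm, fpSub(rm, fpMul(rm, x, x), fpMul(rm, two, x)), one)

y32 = poly(x32, Float32())                    # low-precision result
y64 = poly(x64, Float64())                    # high-precision result
diff = fpAbs(fpSub(rm, fpToFP(rm, y32, Float64()), y64))

bound = FPVal(1e-6, Float64())                # hypothetical stability bound
s = Solver()
s.add(Not(fpIsNaN(x32)),
      fpGEQ(x32, FPVal(0.0, Float32())),      # restrict input range to [0, 2]
      fpLEQ(x32, FPVal(2.0, Float32())),
      fpGT(diff, bound))                      # ask for a bound-violating input

if s.check() == sat:
    print("unstable; witness input:", s.model())   # counterexample to stability
else:
    print("stable: |float - double| <= 1e-6 on [0, 2]")
```

A satisfying model plays the role of the test data returned for further analysis, while an unsat answer corresponds to a proof that the lower precision stays within the asserted bound on the given input range.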