An Exploratory Study of the Impact of Parameterization on JMH Measurement Results in Open-Source Projects
Paper in proceedings, 2021

The Java Microbenchmarking Harness (JMH) is a widely used tool for testing performance-critical code at a low level. One of the key features of JMH is its support for user-defined parameters, which allows executing the same benchmark with different workloads. However, a benchmark configured with n parameters with m different values each requires JMH to execute the benchmark m^n times (once for each combination of configured parameter values). Consequently, even fairly modest parameterization leads to a combinatorial explosion of benchmarks that have to be executed, dramatically increasing execution time. However, so far no research has investigated how this type of parameterization is used in practice, and how important different parameters are to benchmarking results. In this paper, we statistically study how strongly different user parameters impact benchmark measurements for 126 JMH benchmarks from five well-known open-source projects. We show that 40% of the studied metric parameters have no correlation with the resulting measurement, i.e., testing with different values for these parameters does not lead to any insights. If there is a correlation, it is often strongly predictable, following a power-law, linear, or step-function curve. Our results provide a first understanding of the practical usage of user-defined JMH parameters and of how they correlate with the measurements produced by benchmarks. We further show that a machine learning model based on Random Forest ensembles can predict the measured performance of an untested metric parameter value with an accuracy of 93% or higher for all but one benchmark class, demonstrating that, given sufficient training data, JMH performance test results for different parameterizations are highly predictable.
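The combinatorial explosion the abstract refers to can be made concrete with a small, self-contained Java sketch. In JMH, each user-defined parameter is declared with an `@Param` annotation listing its values, and the harness runs the benchmark once per element of the Cartesian product of all value sets. The class and parameter values below are hypothetical illustrations, not the authors' tooling or any of the studied benchmarks:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParamCombinations {
    // Mirrors how JMH expands user-defined parameters, e.g.:
    //   @Param({"10", "100", "1000"}) int size;       // 3 values
    //   @Param({"ARRAY", "LINKED"})   String impl;    // 2 values
    // JMH then executes the benchmark once per combination: 3 * 2 = 6 runs.
    // With n parameters of m values each, this grows as m^n.
    static List<String[]> cartesianProduct(List<String[]> paramValues) {
        List<String[]> combos = new ArrayList<>();
        combos.add(new String[0]); // start with the single empty combination
        for (String[] values : paramValues) {
            List<String[]> extended = new ArrayList<>();
            for (String[] prefix : combos) {
                for (String v : values) {
                    String[] next = Arrays.copyOf(prefix, prefix.length + 1);
                    next[prefix.length] = v;
                    extended.add(next);
                }
            }
            combos = extended;
        }
        return combos;
    }

    public static void main(String[] args) {
        List<String[]> space = Arrays.asList(
            new String[]{"10", "100", "1000"},
            new String[]{"ARRAY", "LINKED"}
        );
        // Prints the number of benchmark executions JMH would schedule: 6
        System.out.println(cartesianProduct(space).size());
    }
}
```

Adding a third hypothetical parameter with three values already triples the count to 18 executions, which is why the paper asks whether all of these combinations actually yield distinct insights.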

benchmark parameterization

machine learning

benchmark measurements

java microbenchmarking harness (JMH)

Authors

Peter Samoaa

Data Science and AI

Philipp Leitner

Chalmers, Computer Science and Engineering (Chalmers), Software Engineering (Chalmers)

ICPE 2021 - Proceedings of the ACM/SPEC International Conference on Performance Engineering

213-224
9781450381949 (ISBN)

2021 ACM/SPEC International Conference on Performance Engineering, ICPE 2021
Virtual, Online, France

Subject Categories

Medical Laboratory and Measurements Technologies

Signal Processing

Computer Science

DOI

10.1145/3427921.3450243

Latest update

1/28/2023