Effective online controlled experiment analysis at large scale
Paper in proceedings, 2018

Online Controlled Experiments (OCEs) are the norm in data-driven software companies because of the benefits they provide for building and deploying software. Product teams experiment to learn accurately whether the changes they make to their products (e.g., adding new features) cause any impact (e.g., customers using them more frequently). Experiments also reduce the risk of deploying software by minimizing the magnitude and duration of harm caused by software bugs, allowing software to be shipped more frequently. To make informed decisions in product development, experiment analysis needs to be granular, covering a large number of metrics over heterogeneous devices and audiences. Discovering experiment insights by hand, however, can be cumbersome. In this paper, based on case study research at a large-scale software development company with a long tradition of experimentation, we (1) describe the standard process of experiment analysis, and (2) introduce an artifact to improve the effectiveness and comprehensiveness of this process.
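The sketch below (not taken from the paper) illustrates what "granular" experiment analysis means in practice: comparing treatment against control for every metric within every device or audience segment. All metric names, segment names, and data are hypothetical, and a simple Welch two-sample t-test stands in for whatever statistical machinery a production experimentation platform would use.

```python
# Minimal, hypothetical sketch of per-metric, per-segment OCE analysis.
# Metric names, segments, and data are invented for illustration only.
from itertools import product

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-user metric values, keyed by (metric, segment, variant).
metrics = ["sessions_per_user", "clicks_per_user"]
segments = ["desktop", "mobile"]
data = {
    (m, s, v): rng.normal(loc=1.0 + 0.02 * (v == "treatment"), scale=0.5, size=5_000)
    for m, s, v in product(metrics, segments, ["control", "treatment"])
}

# One Welch two-sample t-test per (metric, segment) cell. A real pipeline
# would also correct for multiple comparisons across the many cells.
for m, s in product(metrics, segments):
    c, t = data[(m, s, "control")], data[(m, s, "treatment")]
    delta = (t.mean() - c.mean()) / c.mean() * 100  # relative change in %
    _, p = stats.ttest_ind(t, c, equal_var=False)
    print(f"{m:>18} | {s:<8} | delta = {delta:+.2f}% | p = {p:.3f}")
```

Even this toy setup produces one result per metric-segment cell, which hints at why manual inspection becomes cumbersome as the number of metrics and segments grows.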

Guided experiment analysis

A/B testing

Online controlled experiments

Authors

A. Fabijan

Malmö University

Pavel Dmitriev

Microsoft

Helena Holmström Olsson

Malmö University

Jan Bosch

Chalmers University of Technology, Computer Science and Engineering, Software Engineering

Proceedings - 44th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2018

pp. 64-67 (Article no. 8498187)
978-1-5386-7384-3 (ISBN)

44th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2018
Prague, Czech Republic

Subject Categories

Other Engineering and Technologies not elsewhere specified

Software Engineering

Information Science

DOI

10.1109/SEAA.2018.00020
