Performance Evaluation of Serverless Applications and Infrastructures
Doctoral thesis, 2022

Context. Cloud computing has become the de facto standard for deploying modern web-based software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new serverless services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Measuring these characteristics is difficult in dynamic cloud environments due to performance variability in large-scale distributed systems with limited observability.

Objective. This thesis aims to enable reproducible performance evaluation of serverless applications and their underlying cloud infrastructure.

Method. A combination of literature review and empirical research established a consolidated view of serverless applications and their performance. New solutions were developed through engineering research and used to conduct performance benchmarking field experiments in cloud environments.

Findings. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks, and discovered that most studies do not follow reproducibility principles for cloud experimentation. Characterizing 89 serverless applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. A novel trace-based serverless application benchmark shows that external service calls often dominate the median end-to-end latency and cause long tail latency. The latency breakdown analysis further identifies performance challenges of serverless applications, such as long delays caused by asynchronous function triggers, substantial runtime initialization during cold starts, increased performance variability under bursty workloads, and heavily provider-dependent performance characteristics. The evaluation of different cloud benchmarking methodologies showed that only selected micro-benchmarks are suitable for estimating application performance, that performance variability depends on the resource type, and that batch testing on the same instance with repetitions should be used for reliable performance testing.
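
The last finding, batch testing with repetitions on the same instance, can be made concrete with a small sketch. The following minimal Python example is hypothetical (it is not the thesis's actual benchmark suite): it runs a stand-in micro-benchmark as one batch of repetitions on a single instance and reports the median and the coefficient of variation as a simple variability measure.

```python
# Minimal sketch of "batch testing with repetitions on the same instance".
# All names and the workload are hypothetical stand-ins.
import statistics
import time


def micro_benchmark() -> float:
    """Hypothetical CPU-bound micro-benchmark; returns elapsed seconds."""
    start = time.perf_counter()
    sum(i * i for i in range(1_000_000))  # stand-in workload
    return time.perf_counter() - start


def batch_test(repetitions: int = 30) -> dict:
    """Run all repetitions back-to-back on the same instance and report
    the median and the coefficient of variation as a variability measure."""
    samples = [micro_benchmark() for _ in range(repetitions)]
    mean = statistics.mean(samples)
    return {
        "median_s": statistics.median(samples),
        "cv": statistics.stdev(samples) / mean,  # relative variability
    }


if __name__ == "__main__":
    print(batch_test())
```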

Conclusions. The insights of this thesis can guide practitioners in building performance-optimized serverless applications and researchers in reproducibly evaluating cloud performance using suitable execution methodologies and different benchmark types.

Author

Joel Scheuner

Chalmers, Computer Science and Engineering, Interaction Design and Software Engineering

Function-as-a-Service Performance Evaluation: A Multivocal Literature Review

Journal of Systems and Software, Vol. 170 (2020)

Journal article

The State of Serverless Applications: Collection, Characterization, and Community Consensus

IEEE Transactions on Software Engineering, Vol. 48 (2022), p. 4152-4166

Journal article

Let’s Trace It: Fine-Grained Serverless Benchmarking using Synchronous and Asynchronous Orchestrated Applications

CrossFit: Fine-grained Benchmarking of Serverless Application Performance across Cloud Providers

Proceedings - 2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing, UCC 2022 (2022), p. 51-60

Paper in proceeding

TriggerBench: A Performance Benchmark for Serverless Function Triggers

Proceedings - 2022 IEEE International Conference on Cloud Engineering, IC2E 2022 (2022), p. 96-103

Paper in proceeding

A Cloud Benchmark Suite Combining Micro and Applications Benchmarks

ACM/SPEC International Conference on Performance Engineering Companion (2018), p. 161-166

Paper in proceeding

Estimating Cloud Application Performance Based on Micro-Benchmark Profiling

2018 IEEE 11th International Conference on Cloud Computing (CLOUD) (2018), p. 90-97

Paper in proceeding

Software Microbenchmarking in the Cloud. How Bad is it Really?

Empirical Software Engineering, Vol. 24 (2019), p. 2469-2508

Journal article

Cloud performance evaluation
Cloud computing delivers IT resources (e.g., storage, computation, software) on demand over the Internet with pay-as-you-go pricing. Such cloud services power large parts of the modern Internet. However, the performance of different cloud services varies greatly and is hard to measure because cloud infrastructures are complex and physically distributed.
This thesis aims to facilitate performance evaluations in the cloud and offers new insights into how cloud services perform. It identifies gaps and challenges in prior research on cloud performance, characterizes real cloud applications from different sources, and contributes novel approaches that facilitate cloud performance evaluation.
Practitioners can use the results of this thesis to improve the performance of their cloud applications, and researchers can adopt our recommendations to improve future performance studies in the cloud.
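
As a concrete illustration of such measurements, the sketch below times repeated synchronous invocations of an HTTP-triggered cloud function to contrast a cold start with warm invocations. The endpoint URL is a placeholder and the cold/warm separation is a simplifying assumption; a real study would control instance recycling and follow the reproducibility recommendations above.

```python
# Illustrative sketch only: the endpoint URL is a placeholder, and treating
# the first call as "cold" is a simplifying assumption.
import time
import urllib.request

FUNCTION_URL = "https://example.com/my-function"  # hypothetical HTTP trigger


def invoke(url: str) -> float:
    """Return the end-to-end latency of one synchronous invocation (seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start


# The first call after deployment or idling typically includes cold-start
# overhead (runtime initialization); later calls hit a warm runtime.
cold = invoke(FUNCTION_URL)
warm = sorted(invoke(FUNCTION_URL) for _ in range(10))
print(f"cold: {cold:.3f}s, warm median: {warm[len(warm) // 2]:.3f}s")
```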

Areas of Advance

Information and Communication Technology

Subject Categories

Software Engineering

Computer Science

ISBN

978-91-7905-677-3

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5143

Publisher

Chalmers

Room 243, Jupiter Building, Chalmers Campus Lindholmen

Online

Opponent: Prof. Petr Tůma, Charles University Prague, Czech Republic

More information

Latest update

11/12/2023