When Should I Run My Application Benchmark?: Studying Cloud Performance Variability for the Case of Stream Processing Applications


FSE Companion '25: Proceedings of the 33rd ACM International Conference on the Foundations of Software Engineering | 2025

Performance benchmarking is a common practice in software engineering, particularly when building large-scale, distributed, and data-intensive systems. While cloud environments offer several advantages for running benchmarks, it is often reported that benchmark results can vary significantly between repetitions—making it difficult to draw reliable conclusions about real-world performance.

In this paper, we empirically quantify the impact of cloud performance variability on benchmarking results, focusing on stream processing applications as a representative type of data-intensive, performance-critical system. In a longitudinal study spanning more than three months, we repeatedly executed an application benchmark used in research and development at Dynatrace. This allows us to assess various aspects of performance variability, particularly concerning temporal effects. With approximately 591 hours of experiments, deploying 789 Kubernetes clusters on AWS and executing 2 366 benchmarks, this is likely the largest study of its kind and the only one addressing performance from an end-to-end, i.e., application benchmark perspective.

Our study confirms that performance variability exists, but it is less pronounced than often assumed (coefficient of variation of < 3.7%). Unlike related studies, we find that performance does exhibit a daily and weekly pattern, although with only small variability (≤ 2.5%). Re-using benchmarking infrastructure across multiple repetitions introduces only a slight reduction in result accuracy (≤ 2.5 percentage points). These key observations hold consistently across different cloud regions and machine types with different processor architectures. We conclude that for engineers and researchers who aim to detect substantial performance differences (e.g., > 5%) with their application benchmarks, as is often the case in software engineering practice, performance variability and the precise timing of experiments are far less critical.
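
To make the variability metric concrete: the coefficient of variation reported above is the standard deviation of repeated benchmark results divided by their mean. The following Python sketch is a hypothetical illustration, not the study's analysis code; the column names and sample values are made up. It shows how repeated throughput measurements could be aggregated by hour of day and day of week to check for the kind of temporal patterns discussed above.

```python
# Hypothetical sketch: coefficient of variation (CV) of repeated benchmark
# results, plus a simple grouping by execution time to look for daily and
# weekly patterns. Column names ("timestamp", "throughput") are illustrative.
import pandas as pd

def coefficient_of_variation(values: pd.Series) -> float:
    """CV = standard deviation divided by mean, as a percentage."""
    return 100.0 * values.std() / values.mean()

# Each row represents one benchmark repetition with its start time and result.
results = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-11-04 02:00", "2024-11-04 14:00",
        "2024-11-05 02:00", "2024-11-05 14:00",
    ]),
    "throughput": [98_500, 97_200, 99_100, 96_800],  # e.g., events per second
})

# Overall variability across all repetitions.
print(f"overall CV: {coefficient_of_variation(results['throughput']):.2f}%")

# Variability of the group means hints at hour-of-day / day-of-week effects.
by_hour = results.groupby(results["timestamp"].dt.hour)["throughput"].mean()
by_weekday = results.groupby(results["timestamp"].dt.day_name())["throughput"].mean()
print(f"CV of hour-of-day means: {coefficient_of_variation(by_hour):.2f}%")
print(f"CV of day-of-week means: {coefficient_of_variation(by_weekday):.2f}%")
```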
