
How to Create Performance Models using Application Monitoring Data

Dynatrace collects a wealth of monitoring data on applications, and one of its great aspects is that it also provides interfaces allowing external applications to use this information. However, potential usage scenarios are not limited to simplifying the monitoring of existing applications: one of our technology partners, the Performance Management Group (PMG) of fortiss GmbH, has developed a solution that uses Dynatrace data to build performance models.

Using performance models, potential future scenarios can be evaluated in advance without setting up expensive test environments or evaluation projects. These models allow for evaluating the performance of an application in scenarios that cannot be measured on existing systems. Typical questions that performance models can answer include:

What happens if…

  • … we add new servers/CPUs to our data center?
  • … we combine multiple VMs on one HW machine?
  • … we add new enterprise applications to our data center?
  • … additional users access an enterprise application?
  • … the user behavior changes?
  • … we migrate our on-premise deployment to a cloud deployment?

Consider, for example, the last question: “What happens if we migrate our on-premise deployment to a cloud deployment?” In order to evaluate this scenario and to estimate the expected performance, all system topology changes need to be considered. Performance models can help simplify this evaluation! Instead of migrating the full application first and measuring the performance afterwards, one can simply use a model derived from measurements on the existing deployment that describes the performance-relevant aspects of the application. This model is adapted to the new system topology in the cloud, and simulation results based on the adapted model give immediate feedback on the expected performance on the new platform.
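To give a feel for the arithmetic behind such a prediction, here is a deliberately simplified single-queue example (PCM simulations capture far richer behavior; the numbers are illustrative). Assume a service with a measured CPU demand of D = 20 ms per request and an arrival rate of λ = 30 requests/s, modeled as an M/M/1 queue:

U = λ · D = 30 requests/s · 0.02 s = 0.6 (utilization law)
R = D / (1 − U) = 20 ms / 0.4 = 50 ms (M/M/1 mean response time)

If the cloud hardware halves the CPU demand (D = 10 ms), the model predicts U = 0.3 and R = 10 ms / 0.7 ≈ 14.3 ms, without touching the deployed system. Generated performance models encode exactly this kind of information (demands, workload and topology) per component and transaction.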

You might wonder what a performance model looks like. A performance model depicts three layers that influence the performance of the overall system: the workload (user count and behavior), the performance-relevant aspects of the software architecture (e.g., component relationships, transaction control flows and resource demands) and the hardware environment (e.g., available servers and the deployment topology). It is possible to represent these layers in different ways. While traditional performance models depict these aspects in one monolithic model, modern architecture-level performance models represent them independently of each other. One example of such an architecture-level performance model is the Palladio Component Model (PCM). PCM is a performance modeling language that comes with comprehensive tooling support, including graphical editors, simulation engines and result analyzers. PCM depicts the workload, software architecture and hardware environment in a UML-like notation that is annotated with performance-relevant metadata (e.g., resource demands, probabilities). For more details on the modeling notation and the associated tooling, please visit the PCM website.
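To make the separation of these layers more tangible, here is a toy sketch in plain Java. This is not the PCM metamodel (which is EMF-based and far richer); all names are illustrative. Note how evaluating a migration scenario only swaps the hardware layer:

// Toy illustration of the three independent performance-model layers.
// NOT the PCM metamodel; names and structure are illustrative only.

// Workload layer: how many users, how often each issues requests.
record Workload(int users, double requestsPerUserPerSec) {
    double arrivalRate() { return users * requestsPerUserPerSec; }
}

// Architecture layer: a component's CPU demand per request (in ms).
record Component(String name, double cpuDemandMs) {}

// Hardware layer: where the component runs, with a CPU speed-up factor
// relative to the machine the demands were measured on.
record Server(String name, double speedupFactor) {}

public class LayeredModelSketch {
    // Utilization law: U = arrival rate * service demand (hardware-adjusted).
    static double utilization(Workload w, Component c, Server s) {
        double demandSec = (c.cpuDemandMs() / 1000.0) / s.speedupFactor();
        return w.arrivalRate() * demandSec;
    }

    public static void main(String[] args) {
        Workload w = new Workload(100, 0.3);              // 30 requests/s total
        Component c = new Component("OrderService", 20);  // 20 ms CPU per request
        // Swap only the hardware layer to evaluate a migration scenario:
        System.out.printf("on-premise: U = %.2f%n",
                utilization(w, c, new Server("onprem", 1.0)));
        System.out.printf("cloud:      U = %.2f%n",
                utilization(w, c, new Server("cloud", 2.0)));
    }
}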

Unfortunately, constructing performance models of an enterprise application still requires a lot of effort today. In order to simplify their use and make them more applicable in your environment, the PMG developed a performance model generator that uses monitoring data to generate PCM-based performance models [1,2]. These models can be used to evaluate the application’s performance in different scenarios, such as changes in the workload or the hardware environment. The focus of our cooperation is to connect Dynatrace APM data with the PMG’s performance model generator, as shown in Figure 1.


Figure 1 – Performance Model Generation Framework [3]

Dynatrace is used to instrument a Java EE application and to collect monitoring data. The Performance Model Generator Framework extracts and aggregates this monitoring data (i.e., PurePaths) via the Dynatrace REST interfaces. The information is transformed into a performance model that represents the instrumented Java EE application and contains the application components, the transaction control flows and their resource demands.
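For a feel of what this extraction step involves, here is a minimal Java sketch that pulls monitoring data from a REST endpoint using basic authentication. The URL path is a placeholder, not the exact Dynatrace REST resource name; consult the REST documentation of your Dynatrace installation for the real endpoints and parameters:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class MonitoringDataExporter {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute the PurePath/export resource
        // documented for your Dynatrace installation.
        String url = "https://dynatrace.example.com:8021/rest/management/purepaths";
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // Fetch the raw response body; a real generator would parse it and
        // aggregate per-transaction resource demands from the call trees.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}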

But how accurate can such a model-based evaluation be? The first results of our cooperation were demonstrated at the ACM/SPEC International Conference on Performance Engineering (ICPE) 2015, where the audience honored our demo with the “Best Demonstration Award” (see Figure 2). We demonstrated a scenario in which the SPECjEnterprise2010 benchmark application is instrumented using Dynatrace and a performance model is generated automatically for this application. Predictions using the generated model matched measured mean response times with an accuracy between 72% and 91%. The joint publication “Using Dynatrace Monitoring Data for Generating Performance Models of Java EE Applications” is now available in the ACM library [3].


Figure 2 – Best Demonstration Award at ICPE ’15

One area of future work in our partnership is to include the performance model (generation) capabilities in continuous delivery pipelines, as outlined in [4]. This would take the existing performance signatures provided by Dynatrace to the next level. Data collected during performance unit tests can then be used to generate performance models and thus to predict performance for scenarios that usually cannot be tested for each build. Such capabilities, detecting performance changes in each build for a variety of workloads and hardware environments, are not yet available on the market.
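As a rough sketch of the kind of per-operation timing data such a pipeline step would collect (plain Java here for self-containment; in practice Dynatrace instrumentation captures this during the test run, including per-transaction CPU demands):

import java.util.Arrays;

public class PerformanceUnitTestSketch {
    // Stand-in for the operation under test; in a real pipeline this would
    // be an instrumented call into the application build being verified.
    static void operationUnderTest() throws InterruptedException {
        Thread.sleep(5); // simulate ~5 ms of work
    }

    public static void main(String[] args) throws Exception {
        int runs = 50;
        long[] durationsMicros = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            operationUnderTest();
            durationsMicros[i] = (System.nanoTime() - start) / 1_000;
        }
        Arrays.sort(durationsMicros);
        // Mean and median response times per operation are exactly the
        // resource-demand inputs a generated performance model needs.
        double meanMicros = Arrays.stream(durationsMicros).average().orElse(0);
        System.out.printf("mean = %.0f us, median = %d us%n",
                meanMicros, durationsMicros[runs / 2]);
    }
}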

The model generator is not publicly available by the PMG yet, but if you are interested in using these capabilities for your environment, we are happy to collaborate with you to adapt it to your needs! Just contact Andreas Brunnert.

[1] Brunnert, Andreas; Vögele, Christian and Krcmar, Helmut (2013): “Automatic Performance Model Generation for Java Enterprise Edition (EE) Applications.” In: Computer Performance Engineering: 10th European Workshop on Performance Engineering (EPEW 2013), Venice, Italy, p. 74-88.

[2] Brunnert, Andreas; Neubig, Stefan and Krcmar, Helmut (2014): “Evaluating the Prediction Accuracy of Generated Performance Models in Up- and Downscaling Scenarios.” In: Symposium on Software Performance (SOSP), Stuttgart, Germany, p. 113-130.

[3] Willnecker, Felix; Brunnert, Andreas; Gottesheim, Wolfgang and Krcmar, Helmut (2015): “Using Dynatrace Monitoring Data for Generating Performance Models of Java EE Applications.” In: Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering (ICPE ’15), p. 103-104. Best Demonstration Award. https://dl.acm.org/doi/10.1145/2668930.2688061

[4] Brunnert, Andreas and Krcmar, Helmut (2014): “Detecting Performance Change in Enterprise Application Versions Using Resource Profiles.” In: Proceedings of the 8th International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2014), Bratislava, Slovakia.