Continuous Integration has become a well-established practice in today's software development. Especially for enterprise applications – which face the architectural challenge of dealing with highly distributed and heterogeneous environments – it is more necessary than ever to establish and enforce these kinds of practices.

Aren’t Automated Builds, Unit and Integration Tests Enough?
How often have you faced the situation where the latest integration build passed all the tests, but the first small load or stress test uncovered huge performance problems?
Wouldn’t it be better to not only test your code for functional correctness but also verify its performance?
Wouldn’t it be better to verify the latest code changes against well-established architectural practices?
Wouldn’t it be better to trace performance values across your builds in order to react to degradations?
If you have answered at least one of the above questions with YES I encourage you to continue reading.

Continuous Performance Management (CPM)
Last weekend I had the chance to discuss this topic with several attendees and speakers at devLink.
Testing the performance and scalability of components, as well as verifying your architecture in the early stages of application development, seemed to be the next logical step forward in order to create software that not only works – but works reliably and fast enough.

The goals of CPM are:

  • Constant Monitoring of Software Performance
  • Find the Root Cause of Performance Variances before too much time passes
  • Fix Performance Issues before they are passed on to the next stage in the Lifecycle

Why do we need CPM?

  • Because GREEN Unit Test results don’t mean your components are really GREEN
  • It helps you to do Performance & Architecture Validation early in the Application Lifecycle

CPM with dynaTrace

For my sample application I’ve written several unit and web tests that verify if my application is functionally correct. I let those tests execute for each individual build that I run and it seems I am doing a good job – everything is GREEN on my machine.

In order to enforce CPM I use dynaTrace to achieve two goals:

  1. Verify that my unit-tested components perform within expected thresholds, e.g. a certain web service should not take longer than 500 milliseconds
  2. Verify that my unit-tested components adhere to well-established architectural rules, e.g. no component should execute more than 50 SQL statements, nor should the same statement be executed multiple times
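To make these two kinds of checks concrete, here is a minimal sketch in Python – not dynaTrace's actual API. The `call_web_service` function and the `executed_statements` list are hypothetical stand-ins for the real component under test and for the instrumentation that would record its SQL statements:

```python
import time
from collections import Counter

# Hypothetical instrumentation: records every SQL statement a component issues.
executed_statements = []

def call_web_service():
    # Placeholder for the real web service call under test.
    time.sleep(0.01)
    executed_statements.append("SELECT * FROM orders WHERE id = ?")

def check_performance_and_architecture(max_millis=500, max_statements=50):
    executed_statements.clear()
    start = time.perf_counter()
    call_web_service()
    elapsed_ms = (time.perf_counter() - start) * 1000

    # Rule 1: the call must stay within its performance threshold.
    assert elapsed_ms <= max_millis, f"took {elapsed_ms:.0f} ms (limit {max_millis} ms)"
    # Rule 2: no more than max_statements SQL statements ...
    assert len(executed_statements) <= max_statements
    # ... and no statement executed more than once.
    duplicates = [s for s, n in Counter(executed_statements).items() if n > 1]
    assert not duplicates, f"duplicate statements: {duplicates}"
    return elapsed_ms

check_performance_and_architecture()
```

The point of a tool like dynaTrace is that these rules are enforced automatically across the whole test run instead of being hand-coded into every test.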

Set-Up
I use the dynaTrace MSBuild or NAnt task to integrate dynaTrace into my Continuous Integration process. I also create alerts for the performance thresholds on my web services, and additional alerts to enforce several architectural rules.

Execution
For every build that I execute, dynaTrace automatically raises incidents in case performance degraded or I do not meet my own architectural standards. dynaTrace sessions are automatically stored for each build, allowing me to compare results across builds and react to performance degradations.
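The idea of storing results per build and flagging degradations can be sketched in a few lines of Python. The build names, API names, and timings below are made up for illustration – real dynaTrace sessions of course carry far more detail than a single number per service:

```python
# Per-build response times in milliseconds, keyed by a hypothetical service name.
build_results = {
    "build-41": {"CheckoutService": 310, "SearchService": 120},
    "build-42": {"CheckoutService": 330, "SearchService": 125},
    "build-43": {"CheckoutService": 480, "SearchService": 122},
}

def find_regressions(results, baseline, latest, tolerance=0.10):
    """Flag services whose latest timing exceeds the baseline by more than tolerance."""
    regressions = {}
    for service, base_ms in results[baseline].items():
        latest_ms = results[latest].get(service)
        if latest_ms is not None and latest_ms > base_ms * (1 + tolerance):
            regressions[service] = (base_ms, latest_ms)
    return regressions

print(find_regressions(build_results, "build-41", "build-43"))
# CheckoutService jumped from 310 ms to 480 ms – a degradation worth an incident.
```

In this sketch the tolerance is a simple percentage; in practice you would tune the threshold per service, just as the alerts described above are defined per web service.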

For every unit test I therefore get full visibility into the code that is actually executed – allowing dynaTrace to pinpoint the root cause of performance and architectural problems, e.g. too many DB statements. The following shows the PurePath for one of my unit tests.

Unit Test that uncovered performance & architectural problems

Having started fixing the problems in my application code, I can now use the difference views of dynaTrace to analyze performance across my builds.

Performance Regression across Builds

Conclusion

dynaTrace is easy to integrate into your Continuous Integration process, allowing you to manage the performance of your components early in the Lifecycle. But Performance Management doesn't stop there. We can apply these principles across the entire Lifecycle to achieve Lifecycle Application Performance Management.