Web 2.0 Agnostic
“Traditional” web sites were page-based: every click on a link usually triggered a full page reload of a new URL. An example is a traditional eCommerce site where you start on the home page and then click through the individual product categories. Every click results in a new page request that returns the products of that particular category, with the URL reflecting the category you just clicked. When optimizing page load times for such page-based applications, tools like YSlow, PageSpeed and dynaTrace AJAX Edition are a perfect fit, as these tools analyze activities per visited URL.
More Key Performance Indicators
The measures are calculated per Timer Name (as discussed in the previous section). This allows you to track, over time, how certain features of your web application behave with respect to metrics such as the number of downloaded resources. The following screenshot shows several key performance indicators that dynaTrace tracks across test runs:
It is also interesting to see that dynaTrace automatically calculates how volatile certain performance metrics are. A volatile measure can indicate that the tested application changes frequently without adhering to common best practices (such as keeping the number of CSS files low). It can also indicate that the test script does not produce consistent results. In that case we have to make sure our tests are stable, because only stable tests allow us to automate performance analysis.
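How exactly dynaTrace computes volatility is not spelled out here; a minimal sketch of one common measure, the coefficient of variation (an assumption on my part, not dynaTrace's documented formula), could look like this:

```python
from statistics import mean, stdev

def volatility(values):
    """Coefficient of variation: standard deviation relative to the mean.
    A high value flags an unstable metric (or an unstable test script)."""
    m = mean(values)
    return stdev(values) / m if m else 0.0

# Stable metric: the number of CSS files barely changes across runs
print(volatility([4, 4, 4, 5, 4]))
# Unstable metric: the resource count jumps from run to run
print(volatility([12, 30, 9, 41, 15]))
```

A stable measure like the first series yields a value close to zero, while the second series scores far higher and would deserve a closer look at either the application or the test script.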
Automated Regression Analysis
Getting more performance indicators as described in the previous section is great – but nobody wants to manually look at the metrics of hundreds of individual tests to figure out whether any of them indicate a regression. dynaTrace automates this task for us.
For every test run dynaTrace analyzes every captured metric (number of resources, number of cached objects, number of un-cached objects, number of external domains, …) and compares it with the results of previous test runs. The expected value range is calculated automatically from the recent test results; when a measure falls outside this range, dynaTrace automatically triggers an incident. An incident can send an email notification to the assigned developer, or notify the test manager via a dashboard of all tests that show a regression. The following screenshot shows how dynaTrace verifies every single measure against the calculated expected value range:
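The article does not give dynaTrace's exact formula for the expected value range. A minimal sketch, assuming a simple band of the mean plus or minus a few standard deviations over recent runs (my assumption, not the documented algorithm), might look like this:

```python
from statistics import mean, stdev

def expected_range(recent, k=3.0):
    """Expected value range derived from recent test runs:
    mean plus/minus k standard deviations."""
    m, s = mean(recent), stdev(recent)
    return (m - k * s, m + k * s)

def check_regression(recent, latest, k=3.0):
    """Return True if the latest measurement falls outside the
    expected range, i.e. an incident should be raised."""
    low, high = expected_range(recent, k)
    return not (low <= latest <= high)

# The number of resources hovered around 40 in recent runs ...
history = [40, 41, 39, 40, 42, 41]
print(check_regression(history, 41))  # within range, no incident
print(check_regression(history, 55))  # far outside, incident
```

The key design point is that the range adapts to the recent history, so no hard-coded thresholds need to be maintained per test and per metric.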
To get a better overview of which tests seem to have a problem, we can also access this data through a REST interface that dynaTrace provides. The most interesting piece of information is whether there was a change in the last test run compared to the previous one. This information can be queried as XML, CSV, PDF or HTML. The following shows the HTML version of this report, which is available at all times:
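The exact columns of the report are not shown here. Assuming a simple CSV layout with a previous and a last value per measure (a hypothetical format for illustration, not dynaTrace's documented one), the "did the last run change?" question could be answered like this:

```python
import csv
import io

# Hypothetical CSV layout: test name, measure, previous value, last value
report = """test,measure,previous,last
GoogleSearch,Number of Resources,40,55
GoogleSearch,Number of Cached Objects,12,12
CheckoutUnitTest,Number of SQL Statements,3,3
"""

def changed_tests(csv_text):
    """Return the (test, measure) pairs whose last run differs
    from the previous one."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(r["test"], r["measure"]) for r in rows
            if r["previous"] != r["last"]]

print(changed_tests(report))  # only the resource count changed
```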
You may notice that this report not only includes our browser tests on the Google Search page; it also includes the results of unit tests. dynaTrace supports analyzing Java and .NET unit tests. Instead of looking at the number of resource downloads, we look at the number of database statements, the number of exceptions, or the execution time of certain methods. Tracking these metrics per unit test also allows us to identify regressions early on. A good example is the number of SQL statements executed for a particular feature: if that changes significantly from one build to the next, the developer has probably accidentally introduced a regression that should be fixed right away.
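To illustrate the idea of catching such a regression per unit test, here is a minimal sketch of guarding a feature's SQL statement count; `record_sql` and `load_order_page` are hypothetical stand-ins for instrumented application code, not dynaTrace APIs:

```python
# Every database call made by the feature under test is recorded here.
executed_sql = []

def record_sql(stmt):
    executed_sql.append(stmt)

def load_order_page(order_id):
    # Hypothetical feature under test; each DB call is recorded.
    record_sql(f"SELECT * FROM orders WHERE id = {order_id}")
    record_sql(f"SELECT * FROM order_items WHERE order_id = {order_id}")

def test_order_page_sql_count():
    executed_sql.clear()
    load_order_page(42)
    # If a later build suddenly executes 20 statements here,
    # that is exactly the kind of regression to catch early.
    assert len(executed_sql) == 2

test_order_page_sql_count()
```

The same pattern applies to exception counts or method execution times: track the metric per test, and a significant jump between builds becomes visible immediately.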
End-to-End Performance Analysis
Seeing the full End-to-End Trace, including method arguments, return values, SQL statements, exceptions, log messages, etc., allows us to better understand where time is spent when users interact with the web site. In the end it is all about optimizing the performance of the web site as the user interacts with it by executing certain actions. Whether you have built your own web framework or use frameworks such as GWT, JSF, ASP.NET or Spring, you need to understand what is really going on when pages are rendered or user actions get executed.
dynaTrace not only lets us see the full End-to-End Trace, which is great for diagnostics; it also calculates performance metrics such as the number of database statements, the number of exceptions, or how long certain remoting calls took. These metrics are calculated in the same way as explained in the sections above. This allows you to keep track of your performance metrics in an automated test environment, with dynaTrace automatically telling you whether there are any regressions on either the browser or the server side.
Want to know more about the Premium Features of dynaTrace?
If you are serious about automating performance analysis on both the browser and the server, or if you struggle with the limitations of tools such as YSlow, PageSpeed and dynaTrace AJAX Edition because of their page-based analysis approach, then check out the Premium Extensions of dynaTrace.