Many web sites that use JavaScript frameworks to make the site more interactive and appealing to the end user suffer from poor performance. Over the past couple of months I’ve been contacted by users of our FREE dynaTrace AJAX Edition asking me to help them analyze their problems. In doing so, I’ve developed a standard approach that gets me to a high-level analysis result in 5 minutes.

As the Winter Olympics are a hot topic right now, I checked out the official Olympics web site to see whether it has any potential to improve its performance. It seems I found a perfect candidate for this 5 minute guide 🙂

Minute 1: Record your dynaTrace AJAX Session

Before I start recording a session I always turn on argument capturing via the preferences dialog:

Turn on Argument Capturing in the Preferences Dialog

The reason I do that is that I want to see the CSS selectors passed to the $ or $$ lookup functions of JavaScript frameworks like jQuery or Prototype. The main problem I’ve identified in my work is CSS selectors by className, which cause huge overhead on pages with many DOM elements. I wrote two blogs about the performance impact of CSS Selectors in jQuery and Prototype.

Now it’s time to start tracing. I executed the following scenario:
1. went to the web site
2. clicked on Alpine Skiing
3. clicked on Schedules & Results
4. clicked on the results of the February 17th race (that’s where we Austrians actually made it onto the podium)

Minute 2: Identify poorly performing pages

After closing the browser, I return to dynaTrace AJAX Edition and look at the Summary View to analyze the individual page load times and identify whether there is a lot of JavaScript, Rendering or Network time involved. Let’s see what we got here:

Identifying HotSpots on every page

Here is what we can see:
1. Across the board we have high JavaScript execution times. The last page (Schedules & Results) tops it with almost 7 seconds of pure JavaScript.
2. The first page has a large amount of Rendering Time – that is, time spent in the browser’s rendering engine.
3. Pages 2 and 4 have page load times (time until the onLoad event was triggered) of more than 5 seconds!
4. Page 3 has a very high Network Time although its page load time is not very bad. This means content was loaded after the onLoad event.

Minute 3: Analyze Timeline of slowest Page

I pick page 4 as it shows both a very high Page Load time and very high JavaScript time. I drill down to the Timeline view and analyze the page characteristics:

Where is the time spent on this page?

Here is what I can read from this timeline graph (moving the mouse over these blocks gives me a tooltip with timing and context information):
1. The readystatechange handler takes 5.6 seconds in JavaScript. This handler is used by jQuery and calls all registered load handlers.
2. The script FB.share takes 792 ms when it gets loaded.
3. An XHR request at the very beginning takes 820 ms.
4. We have about 80 images all coming from the same domain – this could be improved by using multiple domains.
5. We have calls to external apps like Facebook, Google Ads or Google Analytics.

Minute 4: Identify poorly performing CSS Selectors

The biggest block is the JavaScript executed in the readystatechange handler. I double-click on it and end up in the PurePath view, which shows me the JavaScript trace of this event handler. I navigate to the actual handler implementation that gets called by jQuery. I expand the handler to see the methods it calls and which ones consume the most time. It is not surprising to see a lot of jQuery selector methods in there that use a CSS class name to identify elements:

PurePath View showing HotSpots in the onLoad event handlers

I highlighted those calls that have a major impact on the performance of this event handler. You can see that most of the time is actually spent in the $ method that is used to look up elements. Another thing I can see is that they change the class name of the body element to “en”, which takes 550ms to execute.
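Repeated $ lookups for the same selector are a common source of this kind of overhead, and caching the result is a cheap fix. Here is a minimal plain-JavaScript sketch – $lookup stands in for an expensive jQuery-style DOM walk, and the selector “.medal-table” is made up for illustration:

```javascript
let lookups = 0;

// Stand-in for an expensive jQuery-style $(".some-class") DOM walk;
// the selector ".medal-table" is hypothetical.
function $lookup(selector) {
  lookups++;
  return { selector }; // pretend this is the matched element set
}

// Naive: three calls in a handler pay the full lookup cost three times.
for (let i = 0; i < 3; i++) $lookup(".medal-table");
console.log(lookups); // 3

// Cached: pay the lookup cost once, reuse the result afterwards.
const cache = {};
function $cached(selector) {
  return cache[selector] || (cache[selector] = $lookup(selector));
}
lookups = 0;
for (let i = 0; i < 3; i++) $cached(".medal-table");
console.log(lookups); // 1
```

The same idea applies directly in jQuery code: assign the result of a lookup to a local variable once instead of re-running the selector in every statement.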

As I am sure there are tons of calls to jQuery selector lookups in that JavaScript handler, as well as in all other JavaScript handlers on the web site, I open up the HotSpot view. The HotSpot view shows me the JavaScript, DOM Access and Rendering hotspots across all pages. As I am interested in the $ methods only, I filter for “$(” and also filter to only show the DOM API (we account the $ method to the DOM API and not to jQuery). Here is what I get after sorting the table by the Total Sum column:

HotSpot View showing all jQuery CSS Selectors and their performance overhead

The problem here is easy to explain. The site makes heavy use of CSS selectors to look up elements by class name. This type of lookup is not natively supported by Internet Explorer, so jQuery has to iterate through the whole DOM to find those elements. A better solution would be to use unique IDs – or at least add the tag name to the selector string. This also helps jQuery, as it first finds all elements by tag name (which is natively implemented and therefore rather fast) and then only has to iterate through those elements. So instead of an average lookup time between 50ms and 368ms this can be brought down to 5-10ms -> a nice performance boost – eh? 🙂
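The difference is easy to see in a sketch. For a bare class selector like $(".result"), jQuery on a browser without native class lookup must test every single element; for $("a.result") it can first narrow the candidates via the fast native getElementsByTagName and only test the class on the remainder. The simulation below uses plain arrays instead of a real DOM, and the tag/class names and element counts are invented for illustration:

```javascript
// Simulated DOM: 5,000 elements; every 10th is an <a>, every 100th
// carries the class "result" (so all "result" nodes happen to be <a>s).
const dom = [];
for (let i = 0; i < 5000; i++) {
  dom.push({ tag: i % 10 === 0 ? "a" : "div", className: i % 100 === 0 ? "result" : "" });
}

// What $(".result") forces on IE: test the class on all 5,000 elements.
function byClass(nodes, cls) {
  return nodes.filter(n => n.className === cls);
}

// What $("a.result") allows: narrow by tag first (stands in for the
// natively fast getElementsByTagName), then test the class on the
// much smaller remainder – 500 checks instead of 5,000.
function byTagThenClass(nodes, tag, cls) {
  const byTag = nodes.filter(n => n.tag === tag);
  return byTag.filter(n => n.className === cls);
}

console.log(byClass(dom, "result").length);             // 50
console.log(byTagThenClass(dom, "a", "result").length); // 50 – same result, far fewer checks
```

Both calls return the same 50 elements; the tag-qualified version just pays for a fraction of the comparisons, which is exactly the effect behind the 50-368ms vs 5-10ms numbers above.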

Minute 5: Identify network bottlenecks

In the timeline I saw many image requests coming from the same domain. As most browsers limit the number of physical network connections per domain (e.g. IE7 uses 2), the browser can only download so many images in parallel. All other images have to wait for a physical connection to become available. Drilling into the Network View for page 4, I can see all these 70+ images and how they “have to wait” to be downloaded. Once these images are cached this problem is no longer such a big deal – but for first-time visitors it definitely slows down the page:

Network View showing waiting times for Images

The solution for this problem is the concept of domain sharding. Using 2 domains to host the images allows the browser to use twice as many physical connections and download more images in parallel. This can speed up the download of those images by up to 50%.
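A minimal sketch of how sharding is often wired up – the shard host names here are hypothetical, not taken from the Olympics site. The important detail is that the shard is picked deterministically from the image path, so the same image always maps to the same host and browser caches stay warm:

```javascript
// Hypothetical image shard hosts – names are illustrative only.
const shards = ["img1.example.com", "img2.example.com"];

// Simple deterministic string hash: the same path always resolves to
// the same shard, which keeps each image cacheable under one URL.
function shardFor(path) {
  let hash = 0;
  for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return shards[hash % shards.length];
}

function imageUrl(path) {
  return "http://" + shardFor(path) + path;
}

console.log(imageUrl("/images/athletes/photo_042.jpg"));
```

With roughly half the images resolving to each host, the browser gets twice the connection budget for the same page.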


It is easy to analyze the performance hotspots of any web site out there. This is my approach to identifying the most common problems that I’ve seen in my work. Besides the problems with CSS selectors and network requests, we see problems with poorly performing JavaScript routines (very often from 3rd-party libraries), too many JavaScript files on the page, too many XHR (XmlHttpRequest) requests to the server and slow server responses to those requests. Especially for that last piece we then use our End-To-End Monitoring Solution, integrating the data captured with dynaTrace AJAX Edition with the server-side PurePath data captured with dynaTrace CAPM. Also – check out my blog about why end-to-end performance analysis is important and how to do it.

Feedback on this is always welcome. I am sure you have your own little tricks and processes to identify performance problems on your web sites. Feel free to share them with us.