As the Winter Olympics are a hot topic right now, I checked out vancouver2010.com to see whether there is any potential to improve its web site performance. It seems I found a perfect candidate for this 5-minute guide 🙂
Minute 1: Record your Dynatrace AJAX Session
Before I start recording a session I always turn on argument capturing via the preferences dialog:
Now it's time to start tracing. I executed the following scenario:
1. went to http://vancouver2010.com
2. clicked on Alpine skiing
3. clicked on Schedules & Results
4. clicked on the results of the February 17th race (that’s where we Austrians actually made it onto the podium)
Minute 2: Identify poorly performing pages
Here is what we can see:
1. The first page has a large amount of Rendering Time – that is, time spent in the browser’s rendering engine
2. Pages 2 and 4 have page load times (time until the onLoad event was triggered) of more than 5 seconds!
3. Page 3 has a very high Network Time although its page load time is not that bad. This means we have content that was loaded after the onLoad event fired (see the sketch below)
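A high Network Time combined with an unremarkable page load time usually points to content that is only fetched after the page has loaded. A minimal sketch of such a deferred request – the URL and element ID here are made up purely for illustration:

```javascript
// Illustrative pattern: deferring a non-critical request until after onLoad.
// This traffic shows up as Network Time beyond the page load time.
window.onload = function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/schedules/latest-results', true); // hypothetical URL
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // 'results' is a hypothetical element ID
      document.getElementById('results').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
};
```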
Minute 3: Analyze Timeline of slowest Page
Here is what I can read from this timeline graph (moving the mouse over these blocks gives me a tooltip with timing and context information):
1. the script FB.share takes 792ms when it gets loaded
2. an XHR request at the very beginning takes 820ms
3. we have about 80 images all coming from the same domain – this could be improved by using multiple domains
4. we have calls to external apps like Facebook, Google Ads and Google Analytics (a common mitigation for blocking third-party scripts is sketched below)
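Third-party scripts like the FB.share widget can block rendering while they download. One common mitigation – I am not claiming the site does this – is to inject such scripts asynchronously; the script URL below is a placeholder:

```javascript
// Sketch: injecting a third-party script asynchronously so the HTML parser
// is not blocked while a slow download (like the 792ms one) is in flight.
(function () {
  var s = document.createElement('script');
  s.src = 'http://widgets.example.com/fb-share.js'; // placeholder URL
  s.async = true;
  // Insert the new script node before the first existing script tag
  var first = document.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
})();
```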
Minute 4: Identify poorly performing CSS Selectors
I highlighted those calls that have a major impact on the performance of this event handler. You can see that most of the time is actually spent in the $ method that is used to look up elements. Another thing I can see is that they change the class name of the body to “en”, which takes 550ms to execute.
The problem here is easy to explain. The site makes heavy use of CSS class selectors to look up elements. This type of lookup is not natively supported by Internet Explorer, so jQuery has to iterate through the whole DOM to find the matching elements. A better solution would be to use unique IDs – or at least to add the tag name to the selector string. The tag name helps jQuery because it first finds all elements by tag name (which is natively implemented and therefore rather fast) and then only has to iterate through that subset. Instead of an average lookup time of between 50ms and 368ms, this can be brought down to 5-10ms – a nice performance boost, eh? 🙂
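To make this concrete, here is a sketch using made-up selector names – the class and ID are hypothetical, but the lookup behavior is what jQuery does on Internet Explorer:

```javascript
// Slow on IE: a bare class selector makes jQuery walk every element in the
// DOM, because IE has no native getElementsByClassName.
$('.result-row').hide();      // hypothetical class name

// Faster: prefixing the tag name lets jQuery first call the native
// getElementsByTagName('tr') and only test the class on those elements.
$('tr.result-row').hide();

// Fastest: a unique ID maps straight to the native getElementById.
$('#results-table').hide();   // hypothetical ID
```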
Minute 5: Identify network bottlenecks
In the timeline I saw many image requests going to the same domain. As most browsers limit the number of physical network connections per domain (IE7, for example, uses 2), the browser can only download that many images in parallel; all other images have to wait for a connection to become available. Drilling into the Network View for page 4, I can see all of these 70+ images and how they have to wait to be downloaded. Once these images are cached this problem is no longer such a big deal – but for first-time visitors it definitely slows down the page:
The solution to this problem is domain sharding. Hosting the images on two domains allows the browser to open twice as many physical connections and download twice as many images in parallel, which can cut the time spent downloading those images by up to 50%.
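Here is a minimal sketch of the idea – the two image host names are hypothetical, and the hash simply keeps every image on a stable host so browser caching still works:

```javascript
// Domain sharding sketch: spread image URLs across two hosts so the browser
// can open twice as many parallel connections (e.g. 2 -> 4 on IE7).
var SHARDS = [
  'http://img1.vancouver2010.com',  // hypothetical host
  'http://img2.vancouver2010.com'   // hypothetical host
];

function shardedUrl(path) {
  var hash = 0;
  // Hash the path so the same image always resolves to the same host;
  // otherwise each page view would defeat the browser cache.
  for (var i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) % SHARDS.length;
  }
  return SHARDS[hash] + path;
}

// Usage: shardedUrl('/images/podium.jpg')
// -> 'http://img1.vancouver2010.com/images/podium.jpg' (always the same host)
```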
Feedback on this is always welcome. I am sure you have your own little tricks and processes for identifying performance problems on your web sites. Feel free to share them with us.