Last week I blogged about the 6 Steps to identify the major web site performance problems on sites like masters.com. Unfortunately for me I did the analysis 3 days before they re-launched their website. The site looks really nice now – great job.
They fixed some of the problems I highlighted in the blog and now follow more of the best practices outlined by Google and Yahoo. To give them credit for the work they put in, I ran another analysis on the new site. It should be interesting for everybody to see what they have changed, what they haven't changed, what new problems they introduced and how all of this impacts web site performance for the end user.
Merging files to speed up resource loading?
First of all, it is a bit hard to compare two individual pages because the structure of the web site changed. The first page on the old site showed different content than on the new site. Overall, however, there are definitely fewer images. There are further options to bring this number down, e.g. by using CSS Sprites.
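As a quick reminder of the sprite technique: many small images are combined into a single file, and CSS background positioning selects which part to display – so the browser downloads one image instead of dozens. The file name, class names and offsets below are made up for illustration and are not taken from masters.com:

```css
/* One combined image instead of many individual downloads (hypothetical file) */
.icon {
  background-image: url("sprites/icons.png");
  width: 16px;
  height: 16px;
}

/* Each icon is addressed by shifting the background position within the sprite */
.icon-leaderboard { background-position: 0 0; }
.icon-scorecard   { background-position: -16px 0; }
.icon-video       { background-position: -32px 0; }
```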
The big problem I see here – even though they managed to avoid many roundtrips – is that every user has to download ~300KB of HTML with every page request. In times of high-speed internet this might not seem like a problem, but a) not everybody has that kind of bandwidth, and b) their servers are pounded with the additional load, and network bandwidth to the servers might become a bottleneck as traffic increases.
Expensive and too many redirects
I often enter web site URLs without the www at the beginning, e.g. masters.com instead of www.masters.com. Websites usually redirect me to the correct page, and I save three letters of typing :-). Redirects like this are additional network roundtrips which we want to avoid in order to speed up the site. Looking at the network activity, I can see not only that the initial redirect takes a very long time (4s in my case), but also that it sends me to another redirect, adding another 570ms before I finally end up on the index.html page. The following image shows the 3 network requests when entering the URL without the www domain prefix:
What exactly does the network activity tell us in this case?
The initial request takes 4.2 seconds (including DNS lookup, connect and server time) and redirects me to www.masters.com. The fact that I am in Europe right now probably contributes a bit to the 4.2 seconds, as I assume the web server sits somewhere in the US.
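Here is a simplified view of the three requests described above (status codes and exact URLs are illustrative; the times are the ones from my recorded session):

```
GET http://masters.com/                  -> 302 redirect to http://www.masters.com/            (~4.2s)
GET http://www.masters.com/              -> 302 redirect to http://www.masters.com/index.html  (~570ms)
GET http://www.masters.com/index.html    -> 200 OK - the actual page
```

Redirecting the non-www request straight to the final URL in a single hop would remove one full roundtrip for every visitor who types the domain without the www prefix.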
IE7 bug causes wrong browser detection and fallback to IE6
When I analyze a web site I always start by looking at the Summary View for my recorded session. I saw something that made me curious:
The resource chart shows me which resources are downloaded from the web and how many are retrieved from the local browser cache. There are 2468 text resources that were accessed from my local browser cache. This caught my attention, as it is definitely something you don't see that often. A double-click on the bar in this chart brings me to the Network View, showing me all of these requests:
The single file that gets downloaded once, cached and then accessed from the cache 2468 times is an IE HTML Component (.htc) that provides a fix for a .png issue. This file makes sense – but only if I were running IE6 – which I am not. I am using IE7 and was therefore wondering why the web site downloads this file and references it so many times. A closer look into the HTML files revealed the problem. The site does a browser check by querying the userAgent property of the navigator object. This is a common practice to identify the current browser version. Unfortunately, there is a bug in IE7 that returns a wrong user agent string in case the string is longer than 256 characters. Therefore the following code gets executed for all the .png files that are on every single page, causing the .htc file to be loaded:
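A sketch of what such a user-agent based check typically looks like – the regular expression, the fallback logic and the .htc file name below are illustrative and not the site's actual code:

```javascript
// Naive browser sniffing: parse the IE major version out of the user agent string.
// Because of the IE7 bug mentioned above, navigator.userAgent can return a wrong
// value when the real user agent string exceeds 256 characters - the check below
// then fails and the page falls back to the IE6 code path.
function isIE7OrNewer() {
  var match = /MSIE (\d+)/.exec(navigator.userAgent);
  return match !== null && parseInt(match[1], 10) >= 7;
}

if (!isIE7OrNewer()) {
  // IE6 fallback: attach the .htc PNG fix (a behavior file) to every .png image,
  // which is what produced the 2468 cache hits seen in the Summary View
  var images = document.getElementsByTagName("img");
  for (var i = 0; i < images.length; i++) {
    if (/\.png$/i.test(images[i].src)) {
      images[i].style.behavior = "url('pngfix.htc')"; // file name is illustrative
    }
  }
}
```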
Caching files too short
Caching files in the local browser cache is a good thing. Resources that hardly ever change can be cached locally, and the browser doesn't need to request these files every time the user visits the page. As outlined in my first analysis in Step 4: How good is my 2nd visit experience?, it is important to use the correct cache control settings. It seems that the re-launched site still has the same problem as the old one: images have a cache setting of only a couple of minutes. The following screenshot shows the network requests of a 2nd visit to the page, about 15 minutes after my first walkthrough:
I recorded this session on April 2nd 2010, 19:27 GMT. The Expires header is set to only ~13 minutes in the future. Looking at these images, I doubt that they will change that frequently. The problem here is that the browser needs to send a request for every one of these images. Even though the response is just a 304 Not Modified, we still have many roundtrips that could all be avoided. Do the math and sum up the Total column in the screenshot above – that is the time that can be saved with correct cache settings.
To solve this problem it is recommended to use far-future Expires headers. Check out the RFC for Cache-Control as well as the best practices on this topic by Yahoo and Google. Unless there is a reason to re-request these files constantly (i.e. they really do change frequently), it is better to leverage the local browser cache and reduce download times.
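A far-future expiry could look like the following response headers (illustrative values – the concrete max-age depends on how often the images actually change, and versioning the file name is the usual way to push out an updated image before it expires):

```
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=31536000
Expires: Sat, 02 Apr 2011 19:27:00 GMT
```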
For each of the 99 players returned by the XHR call, the code in the each loop calls $(this) to get a jQuery object for the current XML record – and it does so multiple times per record. One call per iteration would be enough: simply store the result in a variable and use that variable instead, which saves several hundred calls to this method.
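A minimal sketch of this pattern – the URL, the XML element names and the target element are made up for illustration and are not the site's actual code:

```javascript
$.ajax({
  url: "/players.xml",   // illustrative URL
  dataType: "xml",
  success: function (xml) {
    // Iterate over the returned player records
    $(xml).find("player").each(function () {
      // Call $(this) once and reuse the cached jQuery object ...
      var $player = $(this);
      var name  = $player.find("name").text();
      var score = $player.find("score").text();
      // ... instead of calling $(this) again for every value we read
      $("#leaderboard").append("<li>" + name + ": " + score + "</li>");
    });
  }
});
```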
Another common problem – and I've blogged about this many times – is expensive CSS selectors. Check out my recent blog, 101 on jQuery Selector Performance, for background information on this problem. The following screenshot shows me the top CSS selectors used on masters.com:
Especially the selectors by class name, e.g. ".middle703", ".middle346", …, are very expensive, as Internet Explorer doesn't provide a native implementation to query elements by class name. Frameworks like jQuery therefore need to iterate through the whole DOM and look at the class names of every DOM element to find a match. Just taking the top 3 selectors on this page – which total 1.3s – shows how much performance improvement is possible. By using a different way to identify those elements – e.g. by unique ID – this time can be reduced by 90-95%. It is also questionable why the top two queries are called 32 times. I am sure we could cache the result of the first lookup and reuse it, which would save even more time.
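Both optimizations in one sketch – the element ID and the method calls are illustrative, not the site's actual code:

```javascript
// Slow on IE: a class-name selector forces jQuery to walk the whole DOM,
// and repeating the selector repeats that work every time
$(".middle703").hide();
$(".middle703").addClass("highlight");

// Faster: give the element a unique ID, look it up once and cache the result
var $middle = $("#middle703");
$middle.hide();
$middle.addClass("highlight");
```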