Because of the efforts of people like Steve Souders, John Resig, Sergey Chernyshev, Paul Irish, … a lot has changed when it comes to optimizing web site performance. Browser and Application Performance Vendors have built tools that make Web Performance Optimization easier than ever before. Web Frameworks are optimized to generate better web pages.
However, the following chart reminds us that best practices, conferences and tools alone haven't solved the problem: building optimized web sites is getting even harder. As can be seen, the main problems with modern web sites are the growing number of resources, the growing size of the content, and the declining user experience that results from the first two:
If your testing efforts are not yet focusing on these metrics, read this graph as: “Let’s test and optimize our page size!”
Example #1: Too many resources
Have you ever analyzed the pages of your application using tools such as dynaTrace AJAX Edition, YSlow, Google Page Speed or SpeedOfTheWeb? The following example is taken from a blog I did last year analyzing page performance of retail web sites. It shows how many resources are loaded for a single page, and how loading that many resources from a large number of different domains comes with the additional cost of Connect and DNS time per domain.
When we compare this to another online retail store, it becomes clear that there is a lot that can be achieved by optimizing resources loaded on the initial page visit.
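As a sketch of how these resource metrics can be extracted automatically: assuming the tested page is exported as a HAR file (a JSON format most browser diagnostic tools can produce), counting resources and distinct domains takes only a few lines. The HAR snippet below is a hypothetical, minimal example for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

def resource_stats(har):
    """Count resources and distinct domains in a HAR capture."""
    entries = har["log"]["entries"]
    domains = Counter(urlparse(e["request"]["url"]).hostname for e in entries)
    return {
        "resources": len(entries),       # total requests for the page
        "domains": len(domains),         # each extra domain adds DNS + Connect time
        "per_domain": dict(domains),
    }

# Hypothetical minimal HAR structure for illustration:
har = {"log": {"entries": [
    {"request": {"url": "https://www.example.com/index.html"}},
    {"request": {"url": "https://cdn.example.com/app.js"}},
    {"request": {"url": "https://cdn.example.com/style.css"}},
]}}

stats = resource_stats(har)
print(stats["resources"], stats["domains"])  # prints: 3 2
```

In a real setup you would load the HAR with `json.load()` from the file your diagnostic tool exported, then feed the resulting per-page numbers into your test reports.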
The good news is that testing and verifying these KPIs can be automated and integrated into your continuous performance test process. This avoids manually looking over the results of every single tested page of your application. Here is what it takes to implement it:
- Browser Diagnostic tools that analyze tested pages based on these performance metrics
- A Smart Performance Repository that automatically alerts on performance regressions
The following screenshot shows what continuous testing on these KPIs looks like and how to identify regressions immediately by monitoring these metrics for every page and for every test:
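The regression-alerting idea behind such a repository can be sketched as follows. The KPI names, baseline values and 10% tolerance below are hypothetical assumptions, not values from any particular tool:

```python
def find_regressions(baseline, current, tolerance=0.10):
    """Flag KPIs that degraded by more than `tolerance` versus the baseline.

    All KPIs here are "lower is better" (counts, bytes, milliseconds)."""
    regressions = {}
    for kpi, base_value in baseline.items():
        value = current.get(kpi)
        if value is not None and value > base_value * (1 + tolerance):
            regressions[kpi] = (base_value, value)
    return regressions

# Hypothetical KPI values for one tested page:
baseline = {"resources": 40, "total_bytes": 500_000, "onload_ms": 1200}
current  = {"resources": 95, "total_bytes": 510_000, "onload_ms": 1150}

print(find_regressions(baseline, current))  # prints: {'resources': (40, 95)}
```

Run against every page in every test run, a check like this turns the "too many resources" problem into a build-breaking alert instead of something a human has to spot in a chart.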
Example #2: Heavy Third Party Content
As already highlighted in the first example, it is not only your own content that can blow up your pages. You probably rely on 3rd party content such as Ads, Social Platform Widgets or Map Services. If that is the case, you want to make sure that:
- Your pages only load the 3rd party content that is really needed, e.g.: global includes in web projects may load 3rd party content on pages where it is not necessary
- You include this 3rd party content in an optimized way, e.g.: why provide an interactive world map when all you want to do is to show a static image of your address?
Klaus Enzenhofer wrote a great blog, Third Party Content Management applied, in which he highlights the impact of 3rd party content and also how to test it. The key message is that you need to understand what type of 3rd party content you really need, how much impact it has on your page load times, and how to optimize it. Klaus did some testing comparing standard Google Maps with Static Maps, showing how this one change can reduce your page load time while still relying on 3rd party content. The following table shows the difference between these two ways of including the Maps Services on your page:
| KPI | Standard Google Maps | Static Google Maps | Difference in % |
| --- | --- | --- | --- |
| First Impression Time | 493 ms | 324 ms | -34% |
| Onload Time | 473 ms | 368 ms | -22% |
| Total Load Time | 1801 ms | 700 ms | -61% |
| Number of Domains | 6 | 1 | -83% |
| Number of Resources | 43 | 2 | -95% |
| Total Page Size | 636 KB | 77 KB | -88% |
To measure the overhead of 3rd party content, simply use the same tools as explained in the previous example. The following shows the timeline view of dynaTrace, where the overhead of Facebook and Google+ is easy to spot:
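The same HAR-style request data from the first example can also quantify third-party overhead. A minimal sketch, assuming a hypothetical list of third-party domains and the illustrative HAR entry shape used earlier:

```python
from urllib.parse import urlparse

# Hypothetical list of domains considered third-party for this site:
THIRD_PARTY = {"facebook.com", "googleapis.com", "plus.google.com"}

def third_party_bytes(entries):
    """Sum response bytes for requests going to known third-party hosts."""
    total = 0
    for e in entries:
        host = urlparse(e["request"]["url"]).hostname or ""
        if any(host == d or host.endswith("." + d) for d in THIRD_PARTY):
            total += e["response"]["bodySize"]
    return total

# Illustrative entries: one first-party document, one social widget
entries = [
    {"request": {"url": "https://www.example.com/"},
     "response": {"bodySize": 12_000}},
    {"request": {"url": "https://connect.facebook.com/widget.js"},
     "response": {"bodySize": 85_000}},
]

print(third_party_bytes(entries))  # prints: 85000
```

Tracking this number per page over time makes it obvious when a newly added widget quietly super-sizes your pages.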
- Make sure that these Best Practices are enforced early in the development cycle as well as throughout your performance testing
- Monitor the critical Key Performance Indicators to ensure that your web sites don’t end up super-sized
- Test using real browsers and don’t exclusively rely on protocol-based performance testing