Web Performance Optimization Use Cases – Part 2 Optimization

In the last post I discussed benchmarking as the first use case for Web Performance Optimization (WPO). This time I will take a closer look at optimization.

After we have discovered how our site behaves compared to our competition – or any reference we might want to benchmark against – we want to learn how to improve our user experience. We will therefore have a look at different approaches towards optimization.

Best Practice Based Optimization

Fortunately, there are lots of best practices for optimizing the performance of web applications. Yahoo and Google – and also Dynatrace – provide great information on which rules you should check your web site against. These rules are also supported by tools like PageSpeed, YSlow and the Dynatrace Ajax Edition. For an Ajax- and JavaScript-heavy website we additionally want to verify our web pages against server-side and JavaScript execution best practices.

The number of characteristics to check against can become rather huge. Try ShowSlow or WebPagetest to get an impression of all the rules that are checked by the different tools. At the beginning this might be a bit much to consume. That’s why we decided to group those best practices into four major areas, as shown below.

Performance Overview

Let’s have a look at these different areas briefly:

  • Caching checks for all caching-related issues like expires headers etc.
  • Network checks for HTTP errors and resources that can be merged.
  • Server-Side performance checks for slow server responses of dynamic content.
  • JavaScript execution checks for long running JavaScript code, bad CSS selector performance and long running XHR calls.
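As an illustration of the first area, most caching rules boil down to inspecting response headers. The helper below is a hypothetical sketch (not the actual rule engine of any of the tools mentioned) that flags resources the browser cannot cache because they lack a far-future Expires or a Cache-Control max-age header:

```javascript
// Hypothetical sketch of a caching best-practice check: flag resources
// that are not cacheable because they lack Expires / Cache-Control
// headers, or that explicitly opt out of caching via no-store.
function findUncacheableResources(resources) {
  return resources
    .filter((res) => {
      const headers = res.headers || {};
      const cacheControl = (headers['cache-control'] || '').toLowerCase();
      const hasExpires = 'expires' in headers;
      // A resource counts as cacheable if it has an Expires header or a
      // Cache-Control header carrying a max-age directive.
      const cacheable = hasExpires || /max-age=\d+/.test(cacheControl);
      return !cacheable || cacheControl.includes('no-store');
    })
    .map((res) => res.url);
}

const report = findUncacheableResources([
  { url: '/logo.png', headers: { 'cache-control': 'max-age=86400' } },
  { url: '/app.js', headers: {} },
]);
console.log(report); // only /app.js is flagged
```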

Grouping best practices into these categories will help you to isolate the most important areas for optimization without having to look at all the details yet. Your JavaScript code might be doing fine; however, caching headers might not be set properly.

After you know where and in which area you should optimize, you get the details on how to improve the performance and what the estimated performance gain will be.

CSS File Check with Optimization Impact

A unique feature of the Dynatrace Ajax Edition is its JavaScript analysis. All JavaScript executions are automatically pre-analyzed for slow-running code or inefficient selectors.

JavaScript Optimization View

As you can see, this immediately points you to critical JavaScript code and provides information on how to optimize it. For us this turned out to be a huge time saver, as it no longer requires us to analyze all the JavaScript code by hand.
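A typical finding of this kind of analysis is an expensive lookup repeated inside a loop. The sketch below uses a hypothetical `slowLookup` function as a stand-in for something like an inefficient CSS selector query; the fix is simply to hoist the lookup out of the loop:

```javascript
// Illustration of a common slow-JavaScript finding: an expensive lookup
// executed on every iteration. `slowLookup` is a stand-in for e.g. an
// inefficient CSS selector query; we count invocations to make the cost
// visible.
let lookupCount = 0;
function slowLookup() {
  lookupCount += 1;
  return { items: ['a', 'b', 'c'] };
}

// Before: the lookup runs once per iteration.
function renderSlow() {
  const out = [];
  for (let i = 0; i < 1000; i++) {
    out.push(slowLookup().items[i % 3]);
  }
  return out;
}

// After: the result is cached once outside the loop.
function renderFast() {
  const out = [];
  const cached = slowLookup();
  for (let i = 0; i < 1000; i++) {
    out.push(cached.items[i % 3]);
  }
  return out;
}

lookupCount = 0; renderSlow();
console.log(lookupCount); // 1000 lookups
lookupCount = 0; renderFast();
console.log(lookupCount); // 1 lookup
```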

KPI-Driven Optimization

We found that analyzing code against best practices is helpful to get started. However, you may often want to optimize very specific runtime characteristics of your web page. Especially if you are going to continuously track performance changes of your web site, you would rather work with KPIs – hard facts – than with grades only.

KPI-driven analysis provides a holistic view on all metrics describing the performance of a web site. We summarized all important web performance metrics so you get a detailed understanding of the performance in a single view as shown below.
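Working with hard numbers instead of grades means runs can be compared directly. A minimal sketch of such a comparison – the metric names and the 10% tolerance are made up for illustration – that flags KPIs which regressed against a baseline run:

```javascript
// Hypothetical KPI regression check: compare the current run's metrics
// against a baseline and flag every KPI that got worse by more than the
// given tolerance (10% by default). Metric names are illustrative only.
function findRegressions(baseline, current, tolerance = 0.1) {
  const regressions = [];
  for (const [kpi, base] of Object.entries(baseline)) {
    const now = current[kpi];
    if (now !== undefined && now > base * (1 + tolerance)) {
      regressions.push({ kpi, base, now });
    }
  }
  return regressions;
}

const baseline = { firstImpressionMs: 1200, fullyLoadedMs: 3500, requests: 42 };
const current  = { firstImpressionMs: 1250, fullyLoadedMs: 4400, requests: 43 };
console.log(findRegressions(baseline, current));
// only fullyLoadedMs regressed: 4400 > 3500 * 1.1
```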

KPI Optimization Dashboard

We also found it extremely useful to get metrics for different content types. The Ajax Edition shows the number of requests, cached versus un-cached content by content type, etc., helping you answer questions like: “How many images are loaded and what portion of them is cached?”
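You can approximate the same breakdown for your own page from a list of resource records. The sketch below works on plain objects with assumed `type` and `cached` fields; in a browser you could derive similar records from `performance.getEntriesByType('resource')`, where a `transferSize` of 0 is a reasonable heuristic for a cache hit:

```javascript
// Sketch: summarize requests by content type, splitting cached versus
// un-cached. Entries are plain objects with illustrative fields; in a
// browser they could be derived from resource timing entries.
function summarizeByType(entries) {
  const summary = {};
  for (const { type, cached } of entries) {
    const bucket = (summary[type] ||= { total: 0, cached: 0 });
    bucket.total += 1;
    if (cached) bucket.cached += 1;
  }
  return summary;
}

const summary = summarizeByType([
  { type: 'image', cached: true },
  { type: 'image', cached: true },
  { type: 'image', cached: false },
  { type: 'script', cached: false },
]);
console.log(summary.image); // { total: 3, cached: 2 }
```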

Equally interesting is looking at network performance metrics for different domains. This provides good insight into how content from different domains impacts user experience. For each domain we list the number of requests, download size and several timings, including an approximated download speed per domain. Seeing a high wait time combined with a large number of resources immediately points out the impact of connection limits and resource count for a domain. In many cases this eliminates the need to extract this information from a waterfall chart.
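A per-domain table like this can also be built yourself. Again a sketch over plain resource records with hypothetical `bytes` and `waitMs` fields; a browser equivalent could use resource timing entries (`name`, `transferSize`, and `requestStart - fetchStart` for wait time):

```javascript
// Sketch of a per-domain network summary: request count, total bytes and
// average wait time per domain. Field names are illustrative only.
function summarizeByDomain(resources) {
  const domains = {};
  for (const { url, bytes, waitMs } of resources) {
    const host = new URL(url).hostname;
    const d = (domains[host] ||= { requests: 0, bytes: 0, totalWaitMs: 0 });
    d.requests += 1;
    d.bytes += bytes;
    d.totalWaitMs += waitMs;
  }
  for (const d of Object.values(domains)) {
    d.avgWaitMs = d.totalWaitMs / d.requests;
  }
  return domains;
}

const stats = summarizeByDomain([
  { url: 'https://cdn.example.com/a.js', bytes: 12000, waitMs: 40 },
  { url: 'https://cdn.example.com/b.css', bytes: 8000, waitMs: 60 },
  { url: 'https://www.example.com/', bytes: 30000, waitMs: 120 },
]);
console.log(stats['cdn.example.com'].avgWaitMs); // 50
```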

Which view you prefer will depend on the actual use case and your personal preferences. Our recommendation is to start by analyzing a web page against best practices. This provides an easy-to-use first indicator of optimization areas. Once you have discovered where you want to improve, you should switch to KPI-based analysis. This allows you to easily track and quantify improvements and also see side effects of optimizations.

If you are planning to look at these metrics continuously, you will find the KPI-based view even more useful as it also creates a common language to communicate about the performance of your site.

The logical next step now is to automate KPI tracking and analysis. This will be the topic of the next post in this series.

Alois is Chief Technology Strategist of Dynatrace. He is fanatical about monitoring, DevOps and application performance. He has spent most of his professional career building monitoring tools and speeding up applications. He is a regular conference speaker, blogger, book author and Sushi maniac.