End user response time is critical for business success. The faster web pages are perceived to be, the longer users tend to stay on a site and, consequently, the more money they spend and the more business they drive. To ensure that end user response times are acceptable at all times, it is necessary to measure time the way the end user perceives performance. Measuring and monitoring your live system is important for identifying problems early, before they affect too many end users. To make sure that web pages are fast from the start, it is just as important to constantly and continuously measure web page performance throughout the development and testing phases. There are two questions that need to be answered:

  • What is the time the user actually perceives as web response time?
  • How to measure it accurately and in an automated way?

What time to measure? Technical Response Time vs. Perceived Response Time

Technically, the response time of a web page is the time from the first byte sent by the browser to request the initial document until the last byte of all embedded objects (images, JavaScript files, style sheets, …) has been received. Using network analysis tools like HttpWatch or Fiddler, one can visualize the individual downloads in a timeline view. The following illustration shows the network timeline when accessing Google Maps (http://maps.google.com) with an empty browser cache, captured with Fiddler:

Network Timeline showing Network Requests but no Browser Activities

The initial document request returned after 1.6 seconds. The embedded objects were downloaded after the initial document had been retrieved. It turns out there are 2 additional HTML documents, a set of images and some JavaScript files. After 5 seconds (when main.js was downloaded) we see a small gap before the remaining requests are downloaded. We can assume that the gap represents JavaScript execution time that delayed the loading of some other objects, but from the network view alone we cannot be fully sure about that.

From this analysis it’s hard to tell what the perceived end user response time really is. Is it 1.6 seconds, because that is when the browser could already start rendering the initial content of the HTML document? Or is it roughly 5 seconds, when the first batch of embedded objects was fully downloaded? Or might it be 8 seconds, because that is the time until the last request completed? Or is the truth somewhere in between?
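As a side note: the purely technical numbers can nowadays also be captured programmatically. The following is a minimal sketch, assuming a browser that supports the W3C Navigation Timing API (window.performance.timing); it reports the technical response time but, as just discussed, still cannot answer which of these times the user actually perceived:

    // Minimal sketch using the W3C Navigation Timing API.
    // Run in the page whose load time we want to measure.
    window.addEventListener("load", function () {
        // Defer one tick so that loadEventEnd is already populated
        setTimeout(function () {
            var t = window.performance.timing;
            // Time until the first byte of the initial document arrived
            var firstByte = t.responseStart - t.navigationStart;
            // Technical response time: navigation start until the load
            // event completed (initial document plus embedded objects)
            var technical = t.loadEventEnd - t.navigationStart;
            console.log("Time to first byte: " + firstByte + "ms");
            console.log("Technical response time: " + technical + "ms");
        }, 0);
    });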

There is more than meets the “HTTP Traffic” Eye

The browser does much more than just download resources from the server. It builds and maintains the DOM (Document Object Model) for the downloaded document. Styles are applied to DOM elements based on the definitions in the style sheets. JavaScript gets executed at different points in time, triggered by certain events such as onload or onclick. Finally, the DOM and all the images it contains are rendered to the screen.
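A small, purely illustrative snippet shows how JavaScript hooks into some of these browser events (this is generic example code, not taken from the Google Maps page):

    // Generic illustration of event-triggered JavaScript in the browser
    document.addEventListener("DOMContentLoaded", function () {
        // The DOM of the initial document has been built; styles can be
        // applied and the browser can start rendering content
        console.log("DOM ready");
    });
    window.addEventListener("load", function () {
        // All embedded objects (images, style sheets, ...) are loaded
        console.log("onload fired");
    });
    document.addEventListener("click", function () {
        // JavaScript triggered by a user event (cf. onclick)
        console.log("user clicked - script work happens on this thread");
    });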

Using a tool like dynaTrace AJAX Edition we get all of this additional activity information, showing us where and when time is spent in the browser on JavaScript execution, rendering, or waiting for asynchronous network requests. We also see page events like onLoad or onError:

Timeline of all Browser Activities

Looking at this timeline view of the same Google Maps request as before now tells us that the browser started rendering the initial HTML document after 2 seconds. Throughout the download process of the embedded objects the browser rendered additional content. The onLoad event was triggered after 4.8 seconds. This is the time when the browser completed building the initial DOM of the web page, including all referenced objects (images, css, …). The execution of main.js (the last JavaScript file to be downloaded) caused roughly 2 seconds of JavaScript execution time, causing high CPU usage in the browser, additional network downloads and DOM manipulations. The high CPU utilization is an indication that the browser was not very responsive to user input via mouse or keyboard, as JavaScript almost exclusively consumed the processor. The DOM manipulations executed by the JavaScript were rendered after the execution completed (at 7.5s and 8s).

So what is the perceived end user performance?

I believe there are different stages of perceived performance and perceived response time.

The First Impression of speed is the time it takes to see something in the browser's window (Time To First Visual). We can measure that by looking at the first rendering (drawing) activity. A detailed description of browser rendering and the inner workings of the rendering engine can be found in Alois’s blog entry about Understanding Browser Rendering.
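In browsers that implement the W3C Paint Timing API, a rough in-page approximation of Time To First Visual can also be captured; a minimal sketch (an approximation, not the dynaTrace rendering measurement):

    // Sketch: approximating Time To First Visual via the Paint Timing
    // API (available in Chromium-based browsers, among others)
    window.addEventListener("load", function () {
        performance.getEntriesByType("paint").forEach(function (entry) {
            // "first-paint" is the first drawing activity,
            // "first-contentful-paint" the first rendered text/image
            console.log(entry.name + ": " + Math.round(entry.startTime) + "ms");
        });
    });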

The Second Impression is when the initial page is fully loaded (Time To OnLoad). This can be measured by looking at the onLoad event, which is triggered by the browser when the DOM is fully loaded, meaning that the initial document and all embedded objects have been loaded.
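Time To OnLoad is straightforward to capture programmatically as well; a minimal sketch, again assuming Navigation Timing support:

    // Sketch: measuring Time To OnLoad via the Navigation Timing API
    window.addEventListener("load", function () {
        var t = window.performance.timing;
        // onLoad fired: initial document and all embedded objects loaded
        var timeToOnLoad = t.loadEventStart - t.navigationStart;
        console.log("Time To OnLoad: " + timeToOnLoad + "ms");
    });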

The Third Impression is when the web site actually becomes interactive for the user (Time To Interactivity). Heavy JavaScript execution that manipulates the DOM causes the web page to become non-interactive for the end user. This can very often be seen when expensive CSS selector lookups are used (check out the blogs about jQuery and Prototype CSS Selector Performance) or when dynamic elements like JavaScript menus are involved (check out the blog about dynamic JavaScript menus).
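Time To Interactivity is the hardest of the three to capture without a tool. As a rough sketch: browsers that implement the Long Tasks API report every stretch in which JavaScript blocks the main thread for more than 50ms, which is exactly the time in which the page cannot react to mouse or keyboard input:

    // Sketch: spotting non-interactive phases via the Long Tasks API
    // (supported in Chromium-based browsers)
    new PerformanceObserver(function (list) {
        list.getEntries().forEach(function (task) {
            // Each "longtask" is >50ms in which the main thread was busy
            // and the page could not respond to user input
            var end = task.startTime + task.duration;
            console.log("Main thread blocked from " +
                Math.round(task.startTime) + "ms to " + Math.round(end) + "ms");
        });
    }).observe({ entryTypes: ["longtask"] });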

Let’s look at a second example and identify the different impression stages. The following image shows a page request to a product page on a very popular online retail store:

3 Impression Phases

The initial page content is downloaded rather quickly and rendered to the screen within the first second (First Impression). It takes a total of about 3 seconds for some of the initial images that make up the page's initial content to load (Second Impression). Heavy JavaScript that manipulates the DOM causes the page to be unresponsive to the end user for about 10 seconds, and also delays the onLoad event, in which the page delay-loads most of the images. In this case the user sees some of the content early on (mostly text from the initial HTML) but then needs to wait another 10 seconds until the remaining images are delay-loaded and rendered by the browser (Third Impression). Due to the high CPU usage and the DOM manipulations the page is also not very interactive, causing a bad end user perception of the page's performance.
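For illustration, a typical delay-loading pattern looks like the sketch below (generic example code, not the retailer's actual implementation): the real image URLs sit in data-src attributes and are only requested once onLoad has fired, which is why heavy JavaScript that delays onLoad also delays the images.

    // Generic delay-loading pattern (illustrative only): images carry
    // their real URL in data-src and are fetched after the onLoad event
    window.addEventListener("load", function () {
        var images = document.querySelectorAll("img[data-src]");
        for (var i = 0; i < images.length; i++) {
            // Assigning src triggers the actual network download
            images[i].src = images[i].getAttribute("data-src");
        }
    });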

How to measure? Stop-Watch Measuring vs. Tool-Supported Measuring

The idea for this blog post came from talking with performance testing engineers at one of our clients. I introduced them to the dynaTrace AJAX Edition and wondered about a little gadget they had on their table: a stop-watch.

Their task was to measure end user response time for every build of their new web site in order to verify that the times were within defined performance thresholds and to identify regressions from build to build. They used the stop-watch to measure the time it took to load each individual page and the time until the page became responsive. The manually measured numbers were put into a spreadsheet, which allowed them to verify their performance values.

Do you see the problems with this approach?

Not only is this method of measuring time very inaccurate, especially when we talk about precise timings in tenths of seconds; every performance engineer also has a slightly different perception of what it means for the site to be interactive. On top of that it involves additional manual effort, as the timing can only be taken during manual tests.

Automate measuring and measure accurately

The solution to this problem is rather easy. With tools like dynaTrace AJAX Edition we capture performance measures like JavaScript execution, rendering time, CPU utilization, asynchronous requests and network requests. This is possible not only for manual tests but also in an automated test environment. Letting a tool do the job eliminates the inaccuracy of manual time-taking and the subjective perception of performance.

When using the dynaTrace AJAX Edition as seen in the examples above, all performance-relevant browser activities are automatically captured, enabling us to determine the times of the 3 Impression Stages. The blog article “Automate Testing with Watir” shows how to use dynaTrace AJAX Edition in combination with automated testing tools. The tool also provides the ability to export captured data to XML or to spreadsheet applications like Excel, supporting the use case of automated regression analysis across different web site versions/builds.
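As a tool-agnostic illustration (this is not the dynaTrace/Watir integration itself, just a sketch assuming Node.js with the selenium-webdriver package), raw browser timings can be pulled out of an automated test like this:

    // Sketch: collecting browser timings in an automated test using
    // Node.js and selenium-webdriver (assumed to be installed)
    var webdriver = require("selenium-webdriver");

    var driver = new webdriver.Builder().forBrowser("chrome").build();
    driver.get("http://maps.google.com")
        .then(function () {
            // Read Navigation Timing values from inside the browser
            return driver.executeScript(
                "var t = window.performance.timing;" +
                "return { start: t.navigationStart, onload: t.loadEventStart };");
        })
        .then(function (t) {
            // A real harness would export this for build-to-build
            // regression analysis instead of just logging it
            console.log("Time To OnLoad: " + (t.onload - t.start) + "ms");
            return driver.quit();
        });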

Conclusion

Using tools like dynaTrace AJAX Edition for Internet Explorer, YSlow or PageSpeed for Firefox, or the DevTools for Chrome enables automated web site performance measurement in both manual and automated test environments. Continuously measuring web site performance in the browser allows you to always focus on end user performance, which in the end determines how successful your web site will be.