In an earlier post I already discussed several approaches towards end-user experience (or performance) monitoring including their pros and cons. In this post I will present a simple real-world sample which shows the limits of performance traceability in AJAX applications.

As I don’t like Hello World samples, I thought I’d rather build something a bit more useful. The sample uses the Twitter API to search for keywords. The search itself is triggered by typing into a textbox. While the sample isn’t spectacular from a technical perspective, I will make it more interesting by adding some “technical salt” – rather than sugar.

Building the Sample Page

So let us start looking at the code. Below you find the skeleton of our page. The code is straightforward. We have a textbox and a table. I am using jQuery for convenience reasons here – especially because of some innerHTML bugs of IE. However the sample does not require it.

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
  <title>This is the untraceable AJAX Demo</title>
  <script type="text/javascript" src="./jquery.js"></script>
  <script type="text/javascript">
    // we add more code later here ...
  </script>
</head>
<body>
  <input type="text" id="search" />
  <br />
  <table id="myTable"></table>
</body>
</html>

The textbox has a keyup event listener which invokes the following code. Instead of directly triggering an XHR request, we only write the value of the textbox to a global variable. We do this to reduce the number of network requests.

var value = "";

$(function () {
  $('#search').keyup(function () {
    value = $(this).val();
  });
});

Next we define a method which takes the value variable and dynamically creates a script tag with the entered search term as a query parameter. The callback parameter specifies the function that will be called as part of the returned JSONP. This is straightforward thanks to the Twitter API. The method is invoked every 1.5 seconds by the setInterval call shown below.

function queryTwitter() {
  if (value != "") {
    var head = document.getElementsByTagName('head')[0];
    var elem = document.createElement('script');
    // the Twitter search API endpoint; writeToTable is the JSONP callback
    elem.setAttribute('src', 'http://search.twitter.com/search.json?callback=writeToTable&q=' + encodeURIComponent(value));
    head.appendChild(elem);
    value = "";
  }
}

setInterval(queryTwitter, 1500);
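The decoupled input-to-variable-to-timer pattern above can be modeled without a browser. The sketch below reproduces the logic with two hypothetical stand-ins: onKeyUp replaces the jQuery keyup handler, and the inject parameter replaces appending the script tag; buildSearchUrl assembles the same query string as the snippet above.

```javascript
// Browser-free model of the polling pattern above. onKeyUp, inject and
// buildSearchUrl are hypothetical stand-ins for the page's real code.
var value = "";

function onKeyUp(text) {        // stands in for the jQuery keyup handler
  value = text;
}

function buildSearchUrl(term) {
  return 'http://search.twitter.com/search.json?callback=writeToTable&q=' +
         encodeURIComponent(term);
}

function queryTwitter(inject) { // inject stands in for adding the script tag
  if (value != "") {
    inject(buildSearchUrl(value));
    value = "";                 // clear so the same term is not re-sent
  }
}

// Simulate two timer ticks after a single keystroke:
var requested = [];
onKeyUp('dynatrace');
queryTwitter(function (url) { requested.push(url); });
queryTwitter(function (url) { requested.push(url); }); // value cleared, no request
```

Note how the second tick issues no request: clearing the variable is what keeps the number of network requests low.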

When the script is returned the method below is invoked. It simply clears the table and then adds a row containing an image and text for each tweet.

function writeToTable(results) {
  var table = $('#myTable');
  table.empty(); // clear the previous results first
  var tweets = results.results;
  for (var i in tweets) {
    table.append('<tr><td><img src="' + tweets[i].profile_image_url + '"/></td><td><b>' +
      tweets[i].from_user + ' says: </b><i>' + tweets[i].text + '</i></td></tr>');
  }
}
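To make the JSONP mechanism concrete: the returned script is nothing but a call to the named callback with the result object as its argument. The sketch below uses a DOM-free, simplified writeToTable and a made-up payload in the shape of the old Twitter search API response; the eval call imitates what the browser does when it executes the injected script tag.

```javascript
// Sketch of the JSONP wire format. The payload is a made-up example in the
// shape of the old Twitter search API response.
var rows = [];

function writeToTable(results) {   // simplified, DOM-free stand-in
  var tweets = results.results;
  for (var i in tweets) {
    rows.push(tweets[i].from_user + ' says: ' + tweets[i].text);
  }
}

// This is the kind of script body the injected <script> tag delivers:
var jsonpBody = 'writeToTable({"results":[' +
  '{"from_user":"alice","text":"hello","profile_image_url":"img.png"}]})';
eval(jsonpBody);                   // the browser executes this implicitly
```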

You can try the service here if you want. That’s all the code for our sample. Really simple, isn’t it? Now let’s come to the question we want to answer: how long does it take from entering the text until the result is shown on the screen? This is simply the performance perceived by the end user.

While this question looks simple at first sight, it is really tricky to answer. I must admit that I built the example in a way that is difficult to trace :-). However, I am using only common AJAX techniques. Let’s discuss the problems in detail:

Using a Variable for Execution Delay

Using a variable for delayed execution causes some problems. There is no direct link between the JavaScript code executed in the event handler and the code executed by the timer, so we will not be able to get a single JavaScript trace; these two calls have no direct relation to each other.

If we have access to the internals of the framework, we can overcome this problem by using explicit markers. I am using the dt_addMark JavaScript function, which creates a marker in the free dynaTrace AJAX Edition, in both the event handler and the queryTwitter method. We can now correlate the keystroke to the timer method invocation.
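In code, the marker calls can be wrapped in a small guard so the page still works when dynaTrace AJAX Edition is not attached. The mark wrapper below is a hypothetical helper; dt_addMark is the dynaTrace function named above, injected by the tool at runtime.

```javascript
// Hypothetical wrapper: only call dt_addMark when dynaTrace AJAX Edition
// has injected it, so the page also runs without the tool attached.
function mark(name) {
  if (typeof dt_addMark === 'function') {
    dt_addMark(name);
  }
}

// We would then place markers in both code paths, for example:
//   mark('search-keyup');   inside the keyup listener
//   mark('queryTwitter');   at the start of queryTwitter
```

Matching marker names in the PurePath view is what lets us stitch the event handler and the timer invocation back together.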

Tracing Asynchronous Execution using Markers

Using a Script Tag for Populating the Table

The next problem we face is that we use a script tag rather than an XHR request to populate the table. Therefore we again have no direct link between the queryTwitter method and the execution of the script block. However we can find the execution of the script block in the PurePath view.

Relating Script Execution in the PurePath View

Using this information we find the respective script execution and can calculate the time from the initial typing to the populated table. Well, we have to be more precise: we know the time at which the elements were added to the DOM. In order to understand when the user actually sees the data, we have to master another challenge.

Relating Rendering to JavaScript Execution

This is the trickiest part of our analysis. We now need to correlate JavaScript execution to the rendering it causes. I’ve already explained how rendering in Internet Explorer works in another post. In our case, rendering will be performed after inserting the elements into the DOM. In dynaTrace AJAX Edition we can identify this by looking at the JavaScript execution and searching for nodes saying Rendering (Scheduling Layout Task …). We then search further down the PurePaths until we find the related (by number) layout task and the following drawing task. Having done this, we know when the user sees the first information after typing in a search string.


So we managed to get all the information after all. Why did I call this untraceable? Well, first we required knowledge of the internal event processing and needed to add custom instrumentation to link the event-listener code to the actual worker code. While this was easy in this specific case, it gets a lot harder if you try to do the same for a full-fledged JavaScript framework.

Secondly, we had to do a lot of manual correlation. dynaTrace AJAX Edition, as well as tools like SpeedTracer, are of great help here as they provide all the required information, including rendering times. Nevertheless, this analysis required a thorough understanding of JavaScript techniques. Additionally, we have to keep in mind that we were working in a lab environment where we had full freedom regarding how to trace our code. The story is a very different one as soon as we try to collect the same information from real end users who have no plug-in installed and where the complete correlation must be performed automatically. In that case we will not be able to trace end-user performance.
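To see why in-page measurement alone falls short, consider the best a plain script could do: timestamp the keystroke and the moment the table is populated, then report the delta. The sketch below is a hypothetical timing beacon (recordKeyUp, reportPopulated and the send callback are all made-up names); note that it still misses the actual layout and drawing time discussed above, which is exactly the limitation.

```javascript
// Hypothetical in-page timing sketch. It captures keystroke-to-DOM-update
// time, but NOT the subsequent rendering time, which it cannot observe.
var lastKeyUpTime = 0;

function recordKeyUp(now) {            // call from the keyup listener
  lastKeyUpTime = now;
}

function reportPopulated(now, send) {  // call at the end of writeToTable
  if (lastKeyUpTime > 0) {
    send(now - lastKeyUpTime);         // e.g. fire a beacon request here
    lastKeyUpTime = 0;                 // report each keystroke only once
  }
}

// Usage with a fake clock (milliseconds):
var reported = [];
recordKeyUp(1000);
reportPopulated(1850, function (ms) { reported.push(ms); });
```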


So, what’s the conclusion? Analyzing the performance of JavaScript execution easily becomes a complex task which requires proper tooling and a lot of human expertise. Given these preconditions, measuring end-user performance is doable. However, as soon as we move to real end users the task becomes nearly impossible. Current approaches to end-user performance management still have to improve to provide the insights needed to measure end-user performance accurately. This is true for browsers, frameworks, and analysis toolkits.

Challenge Me ;-)

I tried my best to analyze the sample and give an accurate measurement of end-user performance using dynaTrace AJAX Edition. However, I am interested in other approaches to measuring end-user performance for this sample.

This post is part of the dynaTrace 2010 Application Performance Almanac