How Better Caching Helps Frankfurt Airport's Website Handle the Additional Load Caused by the Volcano

Along with so many others, I am stranded in Europe waiting for my flight back to the United States right now. The volcano not only impacts flights across Europe but also the websites of airports, airlines and travel agencies around the world. Checking my flight status on Sunday was almost impossible. The website of Germany's largest airport – Frankfurt am Main – was hardly reachable. No wonder, as I assume their page just got hammered by thousands of additional page requests from frustrated travellers. Now it's Tuesday and the website is back to "almost acceptable" response times. Time for me to analyze the current website, as I've done with others such as vancouver2010.

Status Quo: Too many resources and wrong cache settings

Using the free Dynatrace AJAX Edition and browsing to the Frankfurt Airport home page shows me what is going on. The Resource Graph shows the number of JavaScript, CSS, image files, … On the home page we find 97 images, 40 JavaScript and 22 style sheet files. I had browsed to the homepage before – that's why some of these resources show up as coming from the cache. However – as we will see in a bit – the current cache settings still require my browser to send a request for each of them.

Too many resources on the website (97 images, 40 js files, 22 css files, ...)

Drilling into the TimeLine View shows where these resources are downloaded from and how they impact page load time. Like many similar websites, the content is delivered from many different domains. In this case we see 28 domains delivering ads and banners or providing services such as web user tracking. We see that it takes 11 seconds for the onLoad event to be triggered – that is when all initial content is downloaded (HTML + referenced objects). Most of the download time is spent on content delivered by the main domain – most of the images, JavaScript and CSS files are downloaded from there.

Due to the physical network connection limitation the browser only uses 2 physical connections to download these resources resulting in ~7 seconds of pure download time from this domain. Using multiple domains – a technique called Domain Sharding – allows the browser to use more physical connections to download these resources in parallel. This ultimately results in faster page-load time. The other point worth noting is the number of files that are downloaded. 125 resources are downloaded from the main domain until the onLoad event is triggered. By merging JavaScript and CSS Files and by Spriting image files (where possible) this number can be reduced – big time – resulting in fewer round trips and therefore speeding up page load time as well.
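Domain sharding boils down to a deterministic mapping from resource path to shard host, so that every page always references the same URL for the same image and the browser cache is not defeated. A minimal sketch (the shard host names are made up for illustration, not the airport's real configuration):

```python
import hashlib

# Hypothetical shard hosts -- illustrative names only.
SHARDS = ["img1.example-airport.com", "img2.example-airport.com"]

def shard_for(path):
    """Deterministically map a resource path to one shard host.

    Hashing the path (rather than round-robin) keeps the URL stable
    across pages and visits, so the same image never lives on two
    different hosts and gets cached twice.
    """
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return SHARDS[digest[0] % len(SHARDS)]

def shard_url(path):
    return "https://%s%s" % (shard_for(path), path)
```

With two extra image domains the browser opens two connections per domain, so four image downloads can run in parallel instead of two.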

Content is delivered by 28 web domains. Most images are served from the main domain, slowing page load

My next step is to take a closer look at caching. Browsers can cache content such as static images, style sheets or JavaScript files. This makes perfect sense for content that doesn't change frequently. In order to verify correct cache settings I record another session by browsing to the home page a second time. If caching is configured correctly my browser should not retrieve certain resources from the server but just take them from the local browser cache. The Summary View looks good – it seems that most of the resources are actually retrieved from the cache:

Most of the images, CSS and JavaScript files are now taken from the browser cache

Looks good – but wait, let's not be deceived. We still see a very high value for Server Transfer Time. Based on my experience this means that – even though content is retrieved from the cache – the browser sent an HTTP request to the server for each individual resource to "ask" whether the content has been modified (If-Modified-Since) since the last time the resource was downloaded. This is OK if I haven't checked the website for weeks or months, but it is not OK if the last page visit was only minutes ago.
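The conditional-request handshake can be sketched from the server side: a minimal, hypothetical handler that answers 304 Not Modified (headers only, no body) when the client's If-Modified-Since timestamp is not older than the resource:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def respond(resource_mtime, if_modified_since=None):
    """Decide the response to a conditional GET.

    resource_mtime: last modification time of the resource (UTC, aware).
    if_modified_since: raw If-Modified-Since header value, or None.
    Returns (status_code, body_needed).
    """
    if if_modified_since:
        try:
            client_time = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            client_time = None  # unparsable header -> treat as absent
        if client_time is not None and resource_mtime <= client_time:
            return 304, False   # Not Modified: no body is transferred
    return 200, True            # full response with body
```

Even the 304 path costs a full network round trip per resource – which is exactly the Server Transfer Time we see piling up here.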

A closer look at the Network Requests view reveals the problem. The Expires header is actually set "to the past". I recorded my session on April 20th 2010 at 09:38 GMT. The Expires header is set to April 19th – that was yesterday. That is the reason why my browser has to send an HTTP request to the server for every "cached" element to check whether there is a newer version of the resource on the server. The Server column shows us how much time is spent for each request on the server to determine whether the resource has changed. The Wait column tells us how long individual requests had to wait to be processed (this is again caused by the physical network connection limitation – only 2 physical connections are available per domain – all other requests have to wait).
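The browser's decision can be sketched as: only if Expires lies in the future may the cached copy be reused without any request at all. This is a simplified model that ignores Cache-Control, which takes precedence when present:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def can_skip_request(expires_header, now=None):
    """True if the cached copy is still fresh, i.e. the browser may
    reuse it without even a conditional request to the server."""
    now = now or datetime.now(timezone.utc)
    try:
        expires = parsedate_to_datetime(expires_header)
    except (TypeError, ValueError):
        return False  # missing or invalid Expires -> must revalidate
    return expires > now

# The situation from the recorded session: visited April 20th 2010
# at 09:38 GMT, with Expires set to April 19th -- already stale.
recorded = datetime(2010, 4, 20, 9, 38, tzinfo=timezone.utc)
```

With yesterday's date in Expires, every single "cached" resource fails this freshness check and triggers a conditional request.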

Expires header in the past causes browser to send IF-MODIFIED-SINCE requests for every cached resource

The Network view shows us almost all HTTP headers. Due to the nature of the Dynatrace AJAX plugin in IE we do not get ALL HTTP headers, but we do get the most interesting ones. Our users have already requested this feature on our Community Wish List. For now I propose using a network sniffer or proxy such as MS Fiddler, HTTP Watch, Charles, … in case you need more detail than the AJAX Edition provides.

How to improve the performance

Theoretically it is pretty simple to improve performance on sites like this. I say theoretically because some of the proposed changes require some work and changes on the web server or web deployment. Here is a list of proposed changes and an estimated performance gain:

  • Use HTTP/1.1 or at least Connection: Keep-Alive: The web server runs on HTTP/1.0 and forces the browser to close the physical connection after each request. Use Connection: Keep-Alive to avoid unnecessary reconnect overhead. 
    • Estimated Gain: 100-200ms (check the Connect Column in the Network View)
  • Far Future Expires Header: for those elements that change very infrequently use an Expires Header in the Far Future
    • Estimated Gain for returning users: 4-6s (depending on how many objects can really be cached long time)
  • Merge CSS: Merging all 22 CSS files into a single CSS file would eliminate Wait Time and reduce Server and Transfer Time due to reduced HTTP Roundtrips
    • Estimated Gain: 1.3s in Wait Time, 1-2s in Server-Time and Transfer Time (assuming we can merge them)
  • Merging JavaScript: 21 JavaScript files come from the main domain. Merging these eliminates Wait Time and reduces Server and Transfer Time due to fewer HTTP round trips
    • Estimated Gain: 300-500ms
  • Domain Sharding: Spreading the 75 images served from the main domain across 2 additional image sub-domains allows the browser to download 4 images in parallel. It also allows other content from the main page, e.g. AJAX requests, … to be downloaded without waiting for image downloads
    • Estimated Gain: 2-3s
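The first two fixes above boil down to emitting the right response headers for static content. A minimal sketch of such a cache policy – the one-year lifetime and the header combination are my assumptions, not the airport's actual configuration:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

ONE_YEAR = timedelta(days=365)

def static_cache_headers(now=None):
    """Headers for rarely-changing static content: a far-future
    Expires plus Cache-Control max-age, and Keep-Alive so the browser
    can reuse the TCP connection for the next resource."""
    now = now or datetime.now(timezone.utc)
    return {
        "Expires": format_datetime(now + ONE_YEAR, usegmt=True),
        "Cache-Control": "public, max-age=%d" % int(ONE_YEAR.total_seconds()),
        "Connection": "Keep-Alive",
    }
```

One caveat with far-future expiry: once a resource is cached for a year, the only way to push an update is to change its URL (e.g. a version suffix in the file name).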


Small things that are often missed – like a wrong Expires header – make a huge difference in website performance. If the website of Frankfurt's Airport had followed some of the best practices from Google or Yahoo, or those that we give here on our Dynatrace Blog, I am pretty sure many travellers would have been able to reach the website on Sunday (even though we would still have been stranded).

As always – here is a nice list of additional blogs and material that I encourage everybody to read: Steve Souders Blog, How to Speed Up sites like by more than 50% in 5 minutes, How to analyze and speed up content rich web sites like in minutes and Webinar with on Best Practices to prevent AJAX/JavaScript performance problems

Andreas Grabner has 20+ years of experience as a software developer, tester and architect and is an advocate for high-performing cloud scale applications. He is a regular contributor to the DevOps community, a frequent speaker at technology conferences and regularly publishes articles. You can follow him on Twitter: @grabnerandi