101 on HTTPS Web Site Performance Impact

I recently analyzed a secure web page that took 20 seconds until the onLoad event triggered. Once on the site, interacting with it was slow as well, especially when dealing with dynamic content such as the auto-completion for the search field or dynamic popup menus.

The site had no fancy images, no animations and no heavy JavaScript during startup. It turned out that 70% of the time was spent in SSL handshaking. With a simple change on the web server those 14 seconds could be trimmed down to 2. Before explaining this exact problem, let's start with some background and then look at what can go wrong with an HTTPS web site.


Secure connections in web applications are a necessity when dealing with sensitive data such as your financial information in your online banking system or your customer data in your online CRM. To secure the content transferred between your web server and the browser you use HTTPS. There is obviously overhead involved in establishing a secure connection versus a plain HTTP connection. In order to keep this overhead as low as possible for your web site, you have to understand some of the underlying basics.

What does it take for the browser to download secure content?

Every browser uses multiple physical connections per domain to download the initial HTML page and then all the referenced embedded objects. The number of physical connections depends on the browser version. Each physical connection needs to be established to the web server. If it is a secure connection, this requires an SSL handshake, as explained in the following blog post: The SSL Handshake.

As long as the connection is kept alive, secure communication is possible without additional handshaking. Once a connection is closed (e.g., forced by the server), the browser needs to reconnect for the next request, including the costly SSL handshake.

The browser also uses separate physical connections for different domains, meaning that static content (images, scripts, CSS, …) delivered from a different domain must establish its own connections. If these objects are also served over HTTPS (intentionally or not), you have to pay the extra handshake cost.

Let's have a look at a sample page and all its dependent requests and see how many SSL handshakes are necessary:

https://www.mydomain.com -> 1st phys. connection on www.mydomain.com -> 1st handshake
https://www.mydomain.com/logo.png -> reuse 1st phys. connection -> no handshake
https://www.mydomain.com/background.png -> 2nd phys. connection to www.mydomain.com -> 2nd handshake
https://image1.mydomain.com/image1.gif -> 1st phys. connection on image1.mydomain.com -> 3rd handshake
https://image1.mydomain.com/image2.gif -> 2nd phys. connection on image1.mydomain.com -> 4th handshake
https://image1.mydomain.com/image3.gif -> reuse 1st phys. connection -> no handshake
https://ssl.google-analytics.com/siteopt.js=… -> 1st phys. connection on ssl.google-analytics.com -> 5th handshake
https://scripts.mycdn.com/main.js -> 1st phys. connection on scripts.mycdn.com -> 6th handshake

I guess you see that there is a lot of handshaking going on that you may not be aware of. The example above assumes a browser with 2 physical connections per domain. If you are dealing with a browser that uses more than 2 connections, image3.gif would also need its own SSL handshake.
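The bookkeeping above can be sketched as a small simulation. This is a simplified model of my own (a fixed per-domain connection limit, a new connection opened whenever the domain is below its limit, and no connections closed), so a real browser may assign reuse differently, but the handshake totals match the example:

```python
from collections import defaultdict
from urllib.parse import urlparse

def count_handshakes(urls, max_conns_per_domain=2):
    """Count SSL handshakes for a sequence of HTTPS requests.

    A new physical connection (and therefore a handshake) is needed
    until a domain has reached its connection limit; after that,
    already-open connections are reused with no further handshake.
    """
    conns = defaultdict(int)  # open connections per domain
    handshakes = 0
    for url in urls:
        domain = urlparse(url).netloc
        if conns[domain] < max_conns_per_domain:
            conns[domain] += 1  # new connection -> SSL handshake
            handshakes += 1
        # else: reuse an existing connection -> no handshake
    return handshakes

urls = [
    "https://www.mydomain.com/",
    "https://www.mydomain.com/logo.png",
    "https://www.mydomain.com/background.png",
    "https://image1.mydomain.com/image1.gif",
    "https://image1.mydomain.com/image2.gif",
    "https://image1.mydomain.com/image3.gif",
    "https://ssl.google-analytics.com/siteopt.js",
    "https://scripts.mycdn.com/main.js",
]
print(count_handshakes(urls))  # -> 6
```

Raising the per-domain limit to 3 yields 8 handshakes for the same page, which matches the observation that a browser with more connections would also pay a handshake for image3.gif.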

Considerations when using HTTPS

There are different things to consider in order to avoid HTTPS-related performance problems. Some of these considerations also apply to non-HTTPS applications, but here they result in a higher performance hit due to the handshaking overhead. The problem I described in the intro section boiled down to the web server forcing the browser to close the connection after every request, resulting in an SSL handshake for every request that was still to come. There are other, more generic things you should consider that can speed up your web site's performance. Let's take a closer look, starting with the problem I had.

Use Connection Keep-Alive

Using the Dynatrace AJAX Edition I analyzed the URL in question and was surprised to see the following Summary View:

Dynatrace AJAX Edition Summary View

The Page Load Time column tells me the time until the onLoad event was triggered for this URL. The Network pie chart shows me all the network activity on that page (including activity that happened after the onLoad event was triggered), divided into connection time, server busy time and actual transfer time. And this is where it gets interesting: on the total page, 24 seconds were spent establishing physical network connections. Why is that? How can an SSL handshake take that long?

Drilling from here into the Network View showed me each individual Network Request and how much time was spent on the individual tasks (Connect, Transfer, Server, Wait):

Dynatrace AJAX Edition Network View

This view shows me that almost every request (except those served from the local browser cache) took a huge hit in network connection time. The connection time for each individual network request ranges from 0.5 to 1.1 seconds. Summing that up explains the very slow page load time. But why is that?

Looking at the HTTP request details, you can see that the web server responded with Connection: close instead of keep-alive. This forces the browser to close the physical connection after each request and requires the next request to re-establish the connection (including the costly SSL handshake). This is true for all the requests that load the initial page as well as for all requests triggered by JavaScript (AJAX) when modifying the page (loading dynamic menus or requesting auto-completion while typing search keywords).

The solution to this problem is to enable connection keep-alive on the web server so that the browser can send subsequent requests on the same established SSL connection.
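As a sketch of what that change can look like, assuming an Apache httpd server (directive names and defaults vary by web server and version, so check your server's documentation), the relevant settings are:

```apache
# Allow the browser to reuse an established (SSL) connection
KeepAlive On

# How many requests may share one connection before it is closed
MaxKeepAliveRequests 100

# How long (in seconds) an idle connection is kept open
KeepAliveTimeout 5
```

With these in place the server stops sending Connection: close, and subsequent requests ride on the already-handshaked connection.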

General Optimization Rules

There are many general optimization rules that obviously help here as well, such as limiting the number of web requests, using CDNs (Content Delivery Networks), using client-side caching and avoiding redirects.

For a complete list, check out the Best Practices for Speeding Up Your Web Site.

Use Domain Sharding for HTTPS scenarios?

One concept to overcome the physical connection limit of browsers is domain sharding. If you have many embedded objects such as images, style sheets or JavaScript files, it is common practice to put these on a separate domain. The browser will then open another set of physical connections to download the content from those domains. The problem with secure content is that every physical connection that is opened needs to do the expensive SSL handshake, therefore resulting in higher page load time. So you want to be careful with that.
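The trade-off can be quantified with a back-of-the-envelope sketch. The 0.8-second handshake figure below is illustrative only (the page analyzed above measured 0.5 to 1.1 seconds per connect); the function name and the worst-case assumption that every domain's connection pool gets fully opened are mine:

```python
def sharding_handshake_cost(num_domains, conns_per_domain=2,
                            handshake_seconds=0.8):
    """Worst-case SSL handshake count and time when each shard
    domain's connection pool is fully opened by the browser."""
    handshakes = num_domains * conns_per_domain
    return handshakes, handshakes * handshake_seconds

# One main domain vs. the same site sharded across three domains:
print(sharding_handshake_cost(1))  # -> (2, 1.6)
print(sharding_handshake_cost(3))  # three domains -> six handshakes
```

So with two connections per domain, every shard domain you add can cost up to two extra SSL handshakes before it starts paying off in parallel downloads.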


What are your thoughts on this topic? What about using mixed content, meaning HTTPS for secure content and HTTP for insecure content (like images, scripts, …) on the same page? You would probably end up with many warning messages in the browser, but would this be an acceptable trade-off to gain faster page speed? Let me know your thoughts.

Andreas Grabner has 20+ years of experience as a software developer, tester and architect and is an advocate for high-performing cloud scale applications. He is a regular contributor to the DevOps community, a frequent speaker at technology conferences and regularly publishes articles on blog.dynatrace.com. You can follow him on Twitter: @grabnerandi