When we visit a website, most of us don’t think too much about what’s going on behind the scenes, about the various requests and responses that have to be transmitted and received in order to turn a click on a link into a finished page.

However, for a while at least, it’s something we’ll have to pay a bit more attention to, because it’s changing. 

The standard that most of us use for communicating over the web is known as HTTP/1.1. It’s been around for some time, since 1999 in fact. It also has a number of features that put limits on how fast websites can load. These limits in turn spawned some clever workarounds by web developers.

Now, the new standard HTTP/2 is here. It delivers quite a range of improvements. Many people will see websites loading faster. However, to get the benefits of HTTP/2, site owners will have to undo some of the fixes introduced for HTTP/1.1.

Before we move on, it’s worth mentioning security.

You might have heard that HTTP/2 will only work over a secure connection. This isn’t quite true in that (following much debate) the specification doesn’t require it. However, it looks as though browsers will only support it over TLS, so from a practical point of view, HTTP/2 websites will be secure by default.

This will in itself have implications for performance, but these are beyond the scope of this article.
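For the curious, the way a client signals HTTP/2 support happens during the TLS handshake, via an extension called ALPN (Application-Layer Protocol Negotiation). This can be sketched with Python’s standard ssl module (a minimal illustration only; in practice browsers and servers handle this for you):

```python
import ssl

def make_h2_context() -> ssl.SSLContext:
    """Build a TLS context that advertises HTTP/2 support via ALPN."""
    ctx = ssl.create_default_context()
    # Protocols are listed in preference order: the server picks "h2"
    # if it speaks HTTP/2, otherwise falls back to HTTP/1.1.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    return ctx
```

Because the protocol choice is made inside the TLS handshake itself, there is no extra round trip to discover whether a server supports HTTP/2 — one reason secure-by-default is practical.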

Why do we need HTTP/2?

Does it ever feel as though, despite ever-improving broadband connections, the web isn’t actually getting any faster?

One reason is that web pages are on average getting bigger. There is more imagery, more video, more interactivity.

Increasingly, we don’t just go to a website to read something. We want to look at cat pictures, watch movie trailers or book a holiday. There is also a virtual epidemic of third-party content on websites, especially retail sites. Advertising, tracking, multivariate testing – it all adds up.

According to the HTTP Archive, the average page size in July 2015 was 2.11MB, up from 1.77MB in July 2014 and just 807kB in 2011.

We’re also more likely to be accessing the web over mobile networks. And while 4G has improved the experience for many of us, it can’t get over the issue of higher latency on mobile. In other words, data generally takes longer to travel between a server and a device on a mobile network.

The bottom line is that HTTP/1.1 isn’t quite up to the job of supporting this brave new worldwide web. 

The good news is that its successor, HTTP/2, fixes a lot of the problems inherent in HTTP/1.1.

However, it also means that many websites are going to have to change.

Widely used practices designed to improve a website’s performance will no longer apply. Some could actually have the opposite effect. And while this might seem like an obscure technical issue, it’s going to affect your website in the very near future.

So now let’s look at some of the new features of HTTP/2 and how they’ll make some of the practices that work in HTTP/1.1 redundant.

Multiplexing streams

One of the most important features of HTTP/2 is multiplexing. This means that many (in theory, an unlimited number of) requests and responses can load in parallel over the same connection. 

So what will this mean for performance optimisation? Which current best practices will be affected? 

Domain sharding

Many websites make use of something called domain sharding. Currently, browsers open a limited number of connections (typically around six) per hostname. This allows the browser to download a similarly limited number of resources in parallel.

To increase this limit, some websites load resources from multiple domains (or shards). For example, if your web page includes lots of images, you might split them between image1.mysite.com, image2.mysite.com and image3.mysite.com.
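A minimal sketch of how a build step might shard image URLs. The hostnames are the article’s own examples; the hashing scheme is an assumption — any consistent mapping works, as long as the same file always maps to the same shard (otherwise browser caching is defeated by the URL changing between page loads):

```python
import zlib

# The article's example shard hostnames.
SHARDS = ["image1.mysite.com", "image2.mysite.com", "image3.mysite.com"]

def shard_url(path: str) -> str:
    """Map a resource path to a stable shard hostname."""
    # CRC32 of the path gives a deterministic, evenly spread index,
    # so "/img/cat.png" lands on the same shard on every page load.
    index = zlib.crc32(path.encode("utf-8")) % len(SHARDS)
    return f"https://{SHARDS[index]}{path}"
```

The key property is determinism: calling `shard_url` twice with the same path always yields the same URL.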

Multiplexing in HTTP/2 should make domain sharding redundant. In fact, sites that use multiple shards could actually be worse off. This is because sharding comes at a price: whenever a browser visits a new domain, it has to look up the address of that domain.

This is called a DNS lookup and it takes time, delaying the point at which the browser can start loading any content from that domain.

In HTTP/1.1, browsers open multiple connections so that multiple requests and responses can travel in parallel.

In HTTP/2, multiple requests and responses can be sent in parallel over a single connection.

Merging files

In HTTP/1.1, one way to make a web page load faster is to cut the number of objects (files) used on that page, and one way to achieve this (without losing any content) is to merge those files.

You can combine text files of the same type (for example, style sheets) or, in some cases, image files (using a technique called spriting). You can also combine some files of different types. For example, it’s also possible to embed style sheets in HTML files.
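The two text-based techniques above can be sketched very simply — concatenating style sheets into one file, and embedding (inlining) CSS directly in the HTML. The file contents here are hypothetical:

```python
def concatenate(stylesheets: list[str]) -> str:
    """Combine several CSS sources into a single file's worth of text."""
    return "\n".join(stylesheets)

def inline_css(html: str, css: str) -> str:
    """Embed a style sheet in the HTML head, removing one HTTP request."""
    return html.replace("</head>", f"<style>{css}</style></head>", 1)
```

Both reduce the number of requests a browser has to make, which is exactly what HTTP/1.1 optimisation is about — and, as we’ll see, less relevant under HTTP/2.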

There are essentially two reasons why this can be a good idea in HTTP/1.1 and why it’s something you either don’t need to do or shouldn’t be doing in HTTP/2.

We’ve already seen that in HTTP/1.1 there’s a limit to how many files a browser can download in parallel. Merging files means you can deliver more content within that limit.

However, in HTTP/2 we have multiplexing, which effectively does away with the limit and eliminates this particular benefit.

The second reason for merging files is that in HTTP/1.1, a server has to wait until a browser asks for a file before it can send it. 

For example, imagine you have a very simple web page. It’s just an HTML file that refers to one external style sheet. When someone visits the page, here’s what happens in the normal course of events:  

  1. The browser requests the HTML file.
  2. The server receives the request and sends the HTML file.
  3. The browser reads the file and discovers that it needs a style sheet. So it requests the style sheet.
  4. The server receives the request and sends the style sheet.

It’s possible to shorten this process, and get the page to display faster, by embedding the CSS in the HTML.
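The cost of those steps is round trips: each request/response pair has to cross the network and back before the next can begin. A toy model makes the saving concrete (the 50ms round-trip time is an arbitrary assumption, not a measurement):

```python
RTT_MS = 50  # assumed round-trip time between browser and server

def load_time(round_trips: int, rtt_ms: int = RTT_MS) -> int:
    """Each request/response pair costs one full round trip."""
    return round_trips * rtt_ms

# Fetch the HTML, then make a second round trip for the style sheet:
sequential = load_time(2)  # 100ms before the page can render
# With the CSS embedded in the HTML, a single round trip suffices:
inlined = load_time(1)     # 50ms
```

On a high-latency mobile connection the per-trip cost is larger, so the saving from cutting a round trip grows accordingly.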

However, there’s a problem with this approach.

When someone visits your web page for the first time, their browser will normally store different components of that page in its cache for various lengths of time. When that person goes back to the site, their browser will be able to serve some files from the cache, rather than retrieve them from the server. This makes the page load much more quickly.

As the site owner, you can set each file’s maximum 'cache lifetime'. An HTML file tends to change very frequently, so you’ll want to give it a very short cache lifetime. However, CSS files generally change only rarely, so you’ll probably want them to be cached for a long time.
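In HTTP terms, this policy is expressed with the `Cache-Control` response header. A sketch of what the split might look like — the lifetimes here are illustrative assumptions, not recommendations:

```python
ONE_MINUTE = 60
ONE_YEAR = 60 * 60 * 24 * 365  # seconds

def cache_header(max_age_seconds: int) -> str:
    """Build a Cache-Control header with the given lifetime."""
    return f"Cache-Control: max-age={max_age_seconds}"

html_header = cache_header(ONE_MINUTE)  # HTML changes often
css_header = cache_header(ONE_YEAR)     # CSS rarely changes
```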

Except that you can’t do this if the CSS is embedded in the HTML. 

This is one major drawback of merging files to reduce the number of objects on a web page. Visitors might get a better experience on the first visit to a page, but they could actually be worse off on subsequent visits.

HTTP/2 lets you do things a bit differently. It introduces something called server push. This means that it will be possible for files to be delivered to the browser without the browser having to ask for them:

  1. The browser requests the HTML file.
  2. The server receives the request and sends the HTML file.

It also 'knows' that the browser is going to need the style sheet. So it sends that too, without waiting for the browser to request it. The browser also has the option to reject the style sheet if it already has it cached.

In reality things are a little more complicated, but this in essence is server push, and it could make your web pages much faster. 
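One common way site owners signal what to push: many HTTP/2 servers and CDNs have used the `Link: rel=preload` response header as the trigger to push a resource alongside the page. A sketch of building that header (the file name is hypothetical):

```python
def push_header(path: str, as_type: str) -> str:
    """Build a Link header that many HTTP/2 servers treat as a push hint."""
    return f"Link: <{path}>; rel=preload; as={as_type}"

header = push_header("/styles.css", "style")
# -> "Link: </styles.css>; rel=preload; as=style"
```

How a given server or CDN interprets this header varies, so it’s worth checking your own platform’s documentation before relying on it.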

When will sites have to change?

HTTP/2 is actually here already, and one or two organisations such as Twitter are using it.

Browser support is growing. It is fully supported in Chrome, Firefox, Opera and Edge (the replacement for Internet Explorer), and there are a number of server implementations.

If you want to use HTTP/2 now, you can. However, for a while at least, you will have to cater for both HTTP/1.1 and HTTP/2. Serving your site via HTTP/2 and accommodating visitors who are still using HTTP/1.1 is going to mean treading a fine line for a while yet.

For now, if you’re building a new site or updating an existing one, it’s a good idea at least to bear in mind that it will need to be optimised for HTTP/2 in the very near future.


Published 24 September, 2015 by Alex Painter

Alex Painter is web performance director of NCC Group and a contributor to Econsultancy.
