This is a guest post by Andy Davies, an exceedingly affable independent web performance consultant. -Editor
In early 2015, HTTP/2 (a.k.a. h2) was finally approved as a standard, and it’s now widely supported by modern browsers, servers, load-balancers, and CDNs.
Usage has been growing steadily since, and today 26% of traffic on Akamai’s CDN is HTTP/2.
But unfortunately, as is very common with the adoption of new standards, there are gaps in some HTTP/2 implementations.
Of particular note, the order in which some implementations deliver resources can be a problem. As the filmstrip-style visual below illustrates, prioritizing resources correctly can make a massive difference in page load time (and, of course, visitor experience).
The first row shows a test page loaded from a site using the Akamai CDN, and the second row shows the same page loaded via another service. Each “frame” represents a half-second of load time, starting at 0.0.
As you can see, the Akamai page loads much faster (1 second for Akamai vs. 4 seconds for the other service). This significant performance difference is due to the way that Akamai effectively prioritizes the download of resources.
To understand why effective prioritization is so important, we need to take a look at how browsers load pages; more specifically, we need to look at how they request resources from a server.
How browsers prioritize downloads
Modern web pages are made up of many resources―stylesheets, scripts, fonts, images, etc.―and some of these are critical to rendering a page.
Originally, browsers downloaded the resources in the order they appeared in the HTML document. As part of the never-ending search for speed, browser engineers soon discovered that prioritizing some resources over others made pages load faster. Today, all modern browsers use heuristic prioritization rules to determine the order in which requests should be made.
These prioritization rules vary between browsers, but they generally aim to prioritize the download of anything that will, if absent, prevent the page from rendering. So, for example, they’ll download stylesheets and scripts before images; resources referenced in the <head> of a page before those in the <body>; etc.
NOTE: These rules change over time, and they’re quite nuanced. If you’d like to learn more about the rules and their nuances, Ben Schwarz wrote an excellent article on resource prioritization called “The Critical Request”.
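To make the idea concrete, here’s a toy sketch of this kind of heuristic. The resource names, priority levels, and rules are illustrative assumptions, not any particular browser’s actual logic:

```python
# Toy sketch of browser-style priority heuristics (illustrative only;
# real browsers use far more nuanced, engine-specific rules).

PRIORITY = {"highest": 0, "high": 1, "medium": 2, "low": 3}

def assign_priority(resource_type, in_head):
    """Assign a download priority to a discovered resource."""
    if resource_type == "stylesheet":
        return "highest"  # blocks rendering
    if resource_type == "script":
        # Scripts in <head> can block parsing, so they rank higher.
        return "high" if in_head else "medium"
    if resource_type == "font":
        return "high"  # blocks text rendering
    return "low"  # images and other non-blocking resources

# Hypothetical resources in document order: (name, type, found in <head>?)
resources = [
    ("main.css", "stylesheet", True),
    ("app.js", "script", True),
    ("hero.jpg", "image", False),
    ("analytics.js", "script", False),
]

# Request order: sort by priority; sorted() is stable, so ties keep
# document order.
ordered = [name for name, rtype, in_head in
           sorted(resources,
                  key=lambda r: PRIORITY[assign_priority(r[1], r[2])])]
print(ordered)  # → ['main.css', 'app.js', 'analytics.js', 'hero.jpg']
```

Note how the image drops to the back of the queue even though it appears earlier in the document than the late `<body>` script.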
Once the browser starts to discover the resources it needs to complete the page, and once it has assigned them priorities, it needs to fetch them, and the mechanism for this varies between HTTP/1.x and HTTP/2.
Over HTTP/1.1, the browser controls the priority of the requests and ‘drip-feeds’ them to the server. It creates a pool of TCP connections to an origin and then issues the requests in priority order. When a resource finishes downloading and a TCP connection becomes free, the connection is then re-used for the next request in the priority queue.
As the browser continues to parse the HTML page and discover more resources, the requests for them can just be inserted at the appropriate position in the priority queue so they’ll be downloaded in turn.
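A minimal sketch of this drip-feed model, assuming a deliberately tiny connection pool and hypothetical resources (real browsers typically open around six connections per origin):

```python
import heapq

# Sketch of HTTP/1.1-style scheduling: the browser keeps a priority queue
# of pending requests and "drip-feeds" them onto a small pool of TCP
# connections as each one becomes free.

MAX_CONNECTIONS = 2  # tiny pool, purely for illustration

# (priority, url) pairs in discovery order; lower number = more urgent.
discovered = [(3, "img1.jpg"), (3, "img2.jpg"), (1, "app.js"), (0, "main.css")]

pending = []  # heap ordered by (priority, discovery order)
for order, (prio, url) in enumerate(discovered):
    heapq.heappush(pending, (prio, order, url))

in_flight, completed = [], []
while pending or in_flight:
    # Fill any free connections with the highest-priority pending requests.
    while pending and len(in_flight) < MAX_CONNECTIONS:
        in_flight.append(heapq.heappop(pending))
    # Pretend the oldest in-flight response finishes, freeing its connection.
    completed.append(in_flight.pop(0)[2])

print(completed)  # → ['main.css', 'app.js', 'img1.jpg', 'img2.jpg']
```

Even though the images were discovered first, the browser never sent those requests until the higher-priority ones were on the wire, which is why late discoveries are easy to slot in under HTTP/1.x.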
HTTP/2, on the other hand, only uses a single TCP connection between the browser and the origin. This approach is quite different from the TCP-pool method of HTTP/1.1, as each request/response forms a stream that’s divided into frames and multiplexed over the connection.
With HTTP/2, the browser is no longer constrained by the number of TCP connections available; instead it makes requests as it discovers resources, specifies a priority as part of the request, and relies on the server to return the response data in the appropriate order (the browser can update the priority of a request once it’s in flight, and the server can also override the priority too).
So the browser becomes dependent on the server for prioritization.
NOTE: If you’d like to learn more about the difference between HTTP/1.x and HTTP/2, Stefan Baumgartner beautifully illustrates it in the introduction of his article, “The Best Request Is No Request, Revisited”.
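The server’s side of this can be sketched as a scheduler that always sends the next frame from the most urgent stream. The stream IDs, priorities, and frame labels below are illustrative assumptions, not the actual HTTP/2 framing:

```python
from collections import deque

# Sketch of HTTP/2-style delivery: every response is chopped into frames
# and multiplexed over one connection, and the *server* decides which
# stream's frame goes out next, based on the priorities the browser sent.

# stream id -> priority and remaining frames; lower priority = more urgent.
streams = {
    1: {"priority": 0, "frames": deque(["css-1", "css-2"])},
    3: {"priority": 2, "frames": deque(["img-1", "img-2", "img-3"])},
}

sent = []
while any(s["frames"] for s in streams.values()):
    # Pick the most urgent stream that still has data to send.
    sid = min((s for s in streams if streams[s]["frames"]),
              key=lambda s: streams[s]["priority"])
    sent.append(streams[sid]["frames"].popleft())

print(sent)  # → ['css-1', 'css-2', 'img-1', 'img-2', 'img-3']
```

Note that the browser issued both requests up front; the ordering on the wire is entirely the server’s doing, which is exactly the dependency described above.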
Handling late-discovered resources
Not all resources are discovered by the HTML parser, though; fonts, for example, are referenced from CSS, so the browser only discovers them once the relevant stylesheet has been downloaded and parsed. These late-discovered resources may also be a higher priority than those discovered earlier, as browsers will generally block text rendering until the specified font is available.
Under HTTP/1.x, these newly discovered high-priority resources can be inserted into the appropriate position in the browser’s download queue, and they’ll be fetched when a connection becomes available.
But with HTTP/2, the browser may have already made multiple lower priority requests, and it’s now relying on the server to switch from the lower priority requests to the newly discovered higher priority ones.
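A toy sketch of the difference, contrasting a server that re-prioritizes its queue mid-flight with one that serves responses strictly in arrival order (the request names and priority values are made up for illustration):

```python
from collections import deque

# Contrast a server that re-prioritizes with one that serves strictly
# first-come-first-served. Each entry is (priority, name); lower
# priority value = more urgent.

def serve(requests, reprioritize):
    queue = deque(requests)
    sent = []
    while queue:
        if reprioritize:
            # Well behaved: always serve the most urgent queued response.
            nxt = min(queue, key=lambda item: item[0])
            queue.remove(nxt)
        else:
            # Poorly behaved: ignore priorities, serve in arrival order.
            nxt = queue.popleft()
        sent.append(nxt[1])
    return sent

# Low-priority images were requested first; the font arrived later.
requests = [(3, "img1"), (3, "img2"), (0, "font")]
print(serve(requests, reprioritize=True))   # → ['font', 'img1', 'img2']
print(serve(requests, reprioritize=False))  # → ['img1', 'img2', 'font']
```

In the second case the font, which blocks text rendering, waits behind every image that happened to be requested first.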
And that’s where things start to get interesting…
Verifying server behaviour
The industry has long known that there were differences among HTTP/2 implementations and that some servers supported prioritization more effectively than others.
Until recently, though, telling the ‘well behaved’ servers (i.e., servers that effectively support prioritization) apart from ‘poorly behaved’ ones required low-level tools such as Wireshark. Then, in the summer of 2018, WebPageTest added the ability to view when frames were actually being transferred across each stream, making it far easier to identify issues and determine which servers are ‘well behaved’.
In addition, Pat Meenan (the creator of WebPageTest) built a test case that allows developers to check how servers behaved when higher priority requests arrived after lower priority ones.
Pat’s test page contains multiple low priority images that the browser will discover quickly, and then some other higher priority resources―font, background image, and script―that are discovered later.
Ideally, the server will switch from sending the lower priority resources to sending the higher ones when it receives the request for them.
Unfortunately, as the filmstrip at the start of the post illustrates, not all servers are effective at re-prioritizing the requests. For example, in the graph below you can see that on this server the high priority requests get delayed behind lower priority ones, leading to a poorer experience for visitors.
Requests 33 to 37 in the waterfall below are high-priority resources that were requested after the lower priority ones. The server fails to adapt to this and continues delivering the lower priority resources, delaying the high priority ones.
Akamai servers, on the other hand, behave as expected and correctly adjust their responses, delivering the high-priority requests ahead of the low-priority ones.
Akamai switches to serving the higher priority resources quickly, but it’s not just a case of the server making the switch from lower to higher priorities.
There are buffers and queues in many parts of every infrastructure, and these are all places where higher priority responses might get blocked behind lower priority ones. In short, we must engineer the whole solution to support prioritization, because issues can arise in many different places.
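A toy sketch of this kind of priority inversion, assuming a hypothetical downstream send buffer that has already accepted low-priority frames:

```python
from collections import deque

# Toy sketch: a downstream buffer (TCP send buffer, TLS layer, proxy,
# etc.) that has already accepted low-priority frames. Even if the
# server then schedules a high-priority frame first, that frame can
# only join the back of whatever is already buffered.

BUFFERED_FRAMES = 4  # hypothetical low-priority data already queued

send_buffer = deque()
for i in range(BUFFERED_FRAMES):
    send_buffer.append(f"img-frame-{i}")  # buffered before the font arrived

# The server correctly schedules the high-priority font frame next,
# but it still reaches the wire behind everything already buffered.
send_buffer.append("font-frame")

wire_order = list(send_buffer)
print(wire_order)  # the font frame goes out last
```

This is why keeping those buffers small (or priority-aware) matters as much as the scheduling decision itself.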
The charts above are just two examples of how prioritization can differ. There are many CDNs, hosting providers, and servers available; and, as you might expect from such a wide range of options, there’s a full spectrum of results from very good to really bad!
I maintain a GitHub repository that tracks the current status of prioritization issues for many services. If your provider or server isn’t listed and you’d like to test it for yourself, Pat has made his test case publicly available here. We’d be really grateful if you’d share your results by creating an issue or a pull request on GitHub.
Our ultimate aim is to raise awareness of the issue and encourage providers to fix any issues they have.
It’s great to see that Akamai and others have implemented effective HTTP/2 prioritization. It can make pages visible and usable faster, and can make a huge difference to visitors’ experiences.
Yet, as the first chart above (and some of the other results in the GitHub repository) shows, not everyone gets it right. So your choice of CDN or hosting provider really does matter.
If you’d like to learn more about HTTP/2, Akamai’s Stephen Ludin and Javier Garza wrote “Learning HTTP/2: A Practical Guide for Beginners”. In addition to being a great overall guide to the protocol, it addresses the exact types of implementation problems we see here.
Andy Davies is a UK-based independent web performance consultant who is fascinated by the technical aspects of performance and the effects of said performance on user behaviour and site success. He has helped some of the UK’s leading retailers, newspapers, and financial services companies make their websites faster. Andy wrote A Pocket Guide to Web Performance, is a co-author of Using WebPageTest, and occasionally blogs about web performance at andydavies.me.