Ask Me Anything with Google, Akamai, and CloudFlare: HTTP/2
Your HTTP/2 questions are finally answered in this Ask Me Anything session with a panel of experts from Google, Akamai, and CloudFlare.
HTTP/2 is on everyone’s minds lately. People are wondering how it will impact their site’s performance, when and if they should migrate completely over from HTTP/1.1, and so on. And the truth is, these questions are natural: we’ve been using the same protocol to deliver content over the web for more than 15 years, so some skepticism and concern is expected when something completely new comes into play.
For this reason, we decided to kick off our first live Ask Me Anything (AMA) presentation, featuring a panel of industry experts to answer your biggest HTTP/2 questions. The panel included Ilya Grigorik, web performance engineer at Google; Tim Kadlec, web technology advocate at Akamai; and Suzanne Aldrich, solutions engineer at CloudFlare.
The panelists tackled an hour’s worth of Q&A and offered their expertise to clear the air and inspire out-of-the-box thinking.
Below is an excerpt from the event. The full transcript will be available soon; you can watch the recording of the AMA here.
The first question is for Ilya. Does optimizing for HTTP/2 automatically imply a poor experience on HTTP/1.1?
Ilya: I think that’s a pretty simple one. The short answer is no. The reason we added HTTP/2 is that we found a collection of flaws, if you will, workarounds that we have to do when using HTTP/1. HTTP/2 is effectively the same HTTP that you would use on the web. All the verbs, all the capabilities are still there. Any application that has been delivered over HTTP/1 still works on HTTP/2 as is. If you happen to be running on, say, CloudFlare or Akamai, both of which support HTTP/2, they can just enable that feature and your site continues to run. Nothing has changed.
From there it just becomes a question of, are there things that I can take advantage of in HTTP/2 to make my site even faster? Say you already have an HTTPS site, then you enable HTTP/2; you’re no worse off. Chances are you may be a little bit better off, but it really depends on your application. And then it becomes a question of, what can I do to optimize? Depending on how aggressive you want to be, you may want to change some of your best practices. You stop doing some of those things, and I think we’ll get into the details of some of those later.
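To make that point concrete, here is a minimal sketch (mine, not something shown in the AMA) of what “nothing has changed” looks like in practice, using Node’s built-in http2 module. The certificate paths are placeholders, and the allowHTTP1 option lets clients that don’t negotiate HTTP/2 fall back to HTTP/1.1 against the exact same handler.

```ts
// Hedged sketch: the same request handler serves both HTTP/1.1 and HTTP/2 clients.
// ./key.pem and ./cert.pem are placeholder paths, not files from the article.
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

const server = createSecureServer(
  {
    key: readFileSync("./key.pem"),
    cert: readFileSync("./cert.pem"),
    allowHTTP1: true, // clients that don't negotiate h2 fall back to HTTP/1.1
  },
  (req, res) => {
    // Same verbs, headers, and semantics as before; only the transport framing changed.
    res.setHeader("content-type", "text/plain");
    res.end(`served over HTTP/${req.httpVersion}\n`);
  }
);

server.listen(8443);
```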
Is HTTP/2 serving all assets like your CSS, images, and JavaScript independently a better option than creating bundles or grouping? Which performance hacks are no longer needed?
Tim: Yeah, I think it’s similar to Ilya’s question. There are going to be times when it doesn’t work out that way. The most famous example we’ve seen is Khan Academy’s blog post about bundling JavaScript. They went from something like 25 different JavaScript packages to 300 or so, and saw a degradation in performance and a degradation in compression. The takeaway there isn’t necessarily that you shouldn’t break these things up into individual files; the takeaway is that there is some point where packaging still makes sense to an extent.
There’s a line in there where this ceases to be beneficial. We’ve got a lot of these best practices that we’ve had established for a long time: the sharding, the inlining of resources, concatenating files. We also recognize that inside of H2, some of these things on paper make a lot less sense. What we need now is real-world experimentation and data to help back that up and determine exactly when it makes sense to do these things and when it doesn’t. There’s a challenge there too; H2 is young, the implementations in the browsers are young, the servers are young.
A lot of the challenge is untangling whether an issue is a problem with the protocol, the browser implementation, or the maturity of the server implementation. There’s just so much variability. This is definitely day zero for H2 in terms of establishing what these practices are. It’s going to take a lot of experimentation before we can firmly cement what makes sense to do in this world.
Ilya: Just to add to what Tim was saying, the Khan Academy example is really interesting because, as Tim said, they went from 25 files to 300. They actually say that 25 files is already a performance problem with HTTP/1. Chances are, if you’re at 25 files in HTTP/1, you’re already thinking about how to collapse them to 5. With HTTP/2, that’s not a problem; you can easily ship those 25. Then the question becomes, should I unpack the 25 into 300? Then you get into more nuanced conversations: what’s the overhead of this request, and all the rest. This is the space where you really have to experiment. As Tim said, measure it. Measure it in your own application.
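One way to run part of that experiment yourself, sketched below under the assumption of a Node build environment and hypothetical file names, is to compare the gzipped size of a single concatenated bundle against the sum of the same files compressed individually; this is the compression effect the Khan Academy post observed.

```ts
// Hedged sketch: compare compression of one bundle vs. many independent files.
// a.js, b.js, c.js are hypothetical; point this at your own build output.
import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";

const files = ["a.js", "b.js", "c.js"].map((f) => readFileSync(f));

// One bundle: the compressor can exploit redundancy across all modules at once.
const bundledBytes = gzipSync(Buffer.concat(files)).length;

// Separate files: each is compressed on its own, so cross-file redundancy is lost.
const splitBytes = files.reduce((sum, buf) => sum + gzipSync(buf).length, 0);

console.log({ bundledBytes, splitBytes });
```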
Do you feel like there’s a risk of creating two Internets, a slow one and a fast one, due to the possible confusion of having different protocols out there at the same time?
Suzanne: I think there already is a little bit of a problem with the slow web and the fast web with regard to delivery over TLS. With the introduction of HTTP/2, since the browsers implement it over TLS only, it is necessary to deploy it over TLS. One of the advantages of using TLS is that you can utilize HTTP/2. So, in fact, it’s going to even out the playing field a bit in that way and reduce the TCP handshake overhead that we see; that overhead is one of the reasons people are sometimes a little averse to utilizing TLS.
However, and this harkens back a little to the point Ilya made, these can certainly coexist. You can still utilize techniques such as domain sharding when you’re using HTTP/2. If the hosts share the same IP address, then it is possible for the same TCP connection to be used for that particular set of requests. You can still use all the multiplexing and take advantage of that pipeline without degrading the performance for HTTP/1 clients, as the sketch below illustrates.
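As an illustration of that coalescing behavior, here is a rough sketch (mine, with hypothetical hostnames) that checks the prerequisites browsers typically look for before reusing one HTTP/2 connection across sharded hosts: the names resolve to the same IP and the certificate covers both.

```ts
// Hedged sketch: check whether two sharded hostnames could share one HTTP/2 connection.
// static1.example.com and static2.example.com are hypothetical hostnames.
import { promises as dns } from "node:dns";
import { connect } from "node:tls";

async function inspect(host: string) {
  const [ip] = await dns.resolve4(host);
  const { alpn, san } = await new Promise<{ alpn: string; san: string }>((resolve, reject) => {
    const socket = connect(
      { host, port: 443, servername: host, ALPNProtocols: ["h2", "http/1.1"] },
      () => {
        resolve({
          alpn: socket.alpnProtocol || "none",             // "h2" if HTTP/2 was negotiated
          san: socket.getPeerCertificate().subjectaltname, // names the certificate covers
        });
        socket.end();
      }
    );
    socket.on("error", reject);
  });
  return { host, ip, alpn, san };
}

// If both shards report the same IP, negotiate h2, and appear in the same certificate's
// subjectAltName list, a browser may coalesce them onto a single connection.
Promise.all([inspect("static1.example.com"), inspect("static2.example.com")]).then(console.log);
```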
In addition to that, because there’s this transition point between SPDY and HTTP/2, there’s an additional concern. People don’t necessarily want to jump into the deep water yet, so how do you enable developers to go ahead and produce applications without the fear that they’re cutting out a large share of the browser market? I thought an interesting technique that we utilized at CloudFlare was to essentially fork our implementation of NGINX to allow serving SPDY if that’s what the browser supports; if it supports HTTP/2, then we’ll go ahead and make the connection for them in that mode. That way I think we’ve really addressed that concern in particular.
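The mechanism underneath that kind of fallback is ALPN negotiation during the TLS handshake. The sketch below is a simplified stand-in of my own, not CloudFlare’s actual NGINX fork: a TLS server advertises h2, spdy/3.1, and http/1.1, and each client ends up on the newest protocol it supports.

```ts
// Hedged sketch: ALPN-based protocol selection during the TLS handshake.
// ./key.pem and ./cert.pem are placeholder paths.
import { createServer } from "node:tls";
import { readFileSync } from "node:fs";

const server = createServer(
  {
    key: readFileSync("./key.pem"),
    cert: readFileSync("./cert.pem"),
    // Listed in order of server preference; clients that only speak SPDY or
    // HTTP/1.1 still get a working connection.
    ALPNProtocols: ["h2", "spdy/3.1", "http/1.1"],
  },
  (socket) => {
    // socket.alpnProtocol tells us which protocol handler should take over.
    console.log("negotiated:", socket.alpnProtocol);
    socket.end();
  }
);

server.listen(8443);
```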