Jakob Østergaard Hegelund

Tech stuff of all kinds
Archive for November 2016

HTTP/2: We're doing it wrong

2016-11-18

HTTP/2 is a symptom of a disease; it is a terribly ill-conceived deterioration of the otherwise pretty good HTTP/1.1 protocol.

Most really terrible ideas (like a computer in your toaster or a 3D television) linger for a while, then die and leave us alone. I was of course expecting HTTP/2 to go the same route, but yesterday was a shocker:

Looking at integrating with Apple's APNS (Apple Push Notification Service) API, it became apparent that they only support HTTP/2 access. Oh dear, I don't yet have an HTTP/2 client library (and frankly, I was expecting never to need one).

Let's go over the main features of HTTP/2 one by one. I can start with the only one that is slightly justifiable:

Header compression

Header compression is like Donald Trump. Both are probably very good solutions - to problems that you should not have had in the first place.

Staying on topic here, header compression allows legacy applications that have deteriorated into unmanageable piles of crap and therefore use excessively large headers on all their API requests, to gain some performance advantages by supporting compact and efficient transmission of the bloated crap that is their headers.

I have worked with software development for what is more than a lifetime for some of you who read this - I absolutely understand that you can be in a situation where this is useful. Therefore, I am not against the concept of header compression as such, but I will maintain that it is a solution to a problem that we should strive very hard not to have. There are no good excuses for having bloated headers.

Google with their SPDY extension attempted to add header compression to HTTP/1.1 - frankly I think this would have been a reasonable solution. It would be a hack to support legacy garbage that should really be rewritten - but I absolutely understand the business reasons that could justify improving support for (prolonging the useful life of) legacy garbage applications.

Now, as you can see from the Apple APNS documentation, you can't just use header compression. Apparently there are problems if you use it 'too much', so they warn you to send your headers in very special ways to avoid overflowing header compression tables at their server:

APNs requires the use of HPACK (header compression for HTTP/2), which prevents repeated header keys and values. APNs maintains a small dynamic table for HPACK. To help avoid filling up the APNs HPACK table and necessitating the discarding of table data, encode headers in the following way—especially when sending a large number of streams: ...

This really speaks for itself. So we now have a compression scheme (for no good reason) that is so fragile that we need to pussy-foot around it not to overflow the tables? Really?
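
To make the fragility concrete, here is a toy sketch (in Python, and emphatically not the real HPACK algorithm from RFC 7541) of an eviction-based dynamic table like the one Apple's warning is about. The `apns-id` header name and the size budget are made-up illustration values. Entries are evicted oldest-first when the size budget is exceeded, so a stream of requests with ever-changing header values churns the table and degrades compression:

```python
# Toy sketch of an HPACK-style dynamic table (NOT the real RFC 7541
# algorithm; header names and sizes here are illustrative only).

class ToyDynamicTable:
    ENTRY_OVERHEAD = 32  # RFC 7541 charges 32 bytes of overhead per entry

    def __init__(self, max_size=256):
        self.max_size = max_size
        self.entries = []  # newest first

    def insert(self, name, value):
        self.entries.insert(0, (name, value))
        # Evict oldest entries until the table fits its budget again.
        while self.size() > self.max_size:
            self.entries.pop()

    def size(self):
        return sum(len(n) + len(v) + self.ENTRY_OVERHEAD
                   for n, v in self.entries)

table = ToyDynamicTable(max_size=256)
# Unique per-request header values (request ids, timestamps...) keep
# evicting older entries, which can then no longer be referenced.
for i in range(100):
    table.insert("apns-id", "request-%04d" % i)
print(len(table.entries))  # only the few newest entries survive
```

This is the mechanism behind "filling up the APNs HPACK table": the table is tiny, and careless header encoding thrashes it.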

Keep-alive

In traditional "old style" networking (you know, the kind we use today because it actually works), we would use TCP to provide us with "connections". On top of the connection layer (possibly with layers in between), we would put our application layer - like HTTP, for example. HTTP is an application protocol that allows us to perform more abstract transactions on top of the reliable connections provided by the TCP layer.

These days, it's always the OS kernels that actually provide the TCP protocol for us. It was not always like that - applications did once implement their own TCP stacks (or they used libraries that did it for them - but still executed TCP in the application). Why would that be? Why does this code live better in the OS?

The reasons are many. As time went by and networks evolved, a lot of smart people learned a lot of lessons and refined TCP to the point where it is today. Conceptually there's nothing difficult about retransmissions, window sizes, or backing off transmissions in case of congestion. But pulling this off in the real world is difficult. Really, really difficult. So difficult, in fact, that Linux was the first kernel since BSD (that I know of anyway) to attempt to develop a TCP stack from scratch - everyone else, from Solaris and AIX to Windows and HPUX, has refined theirs from the BSD stack.

Despite TCP's best efforts to provide reliable connections, someone has now decided not only that it's not good enough and not worth fixing at the connection layer; they have even decided that "we" can do better at the application layer - "we" being any application client library and random teenager running cloud services out of his mother's basement. The HTTP/2 PING frame is introduced as a means of checking whether the underlying TCP connection is still working; and even Apple encourages its use for this very purpose.

Really guys? TCP already has keep-alive functionality for this very purpose. And yes, TCP keep-alive is not trouble-free, but that is not because of some flaw in TCP; it is because the problem is fundamentally hard in the real world. Moving the problem away from the TCP stack and re-implementing it at the application layer will indisputably make it more expensive traffic-wise (TCP-layer keep-alive is quite efficiently implemented, something that HTTP/2 fundamentally cannot match because it rides on top of TCP itself). It will also, almost certainly, cause all kinds of traffic problems now that we move the responsibility for keep-alive away from the operating system TCP stack (which has decades of refinement in it) into the application layer, where history has already shown that such functionality doesn't belong (we moved our TCP stacks from the applications into the kernel - remember?).
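
For reference, enabling the kernel's own keep-alive on a socket is a handful of lines. A minimal sketch in Python - note that `TCP_KEEPIDLE`, `TCP_KEEPINTVL` and `TCP_KEEPCNT` are Linux-specific names (other platforms spell these options differently), and the timer values below are arbitrary examples:

```python
# Kernel-level TCP keep-alive: the mechanism that HTTP/2 PING
# re-implements in the application layer.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# The fine-tuning knobs are Linux-specific, hence the hasattr guard.
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up
```

Once set, the probing happens entirely in the kernel: no application traffic, no extra frames on the wire beyond the minimal TCP segments.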

HTTP/2 PING is a terrible idea. The best we can hope for is that nobody uses it. That boat has sailed already though, as Apple encourages use of the PING frame for checking the state of TCP connections to their APNS service.
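
And all that application-layer machinery buys you this: a 17-byte frame that must be generated, parsed and answered by application code at both ends. A sketch of the PING frame wire format as specified in RFC 7540 (9-byte frame header, type 0x6, followed by 8 opaque payload bytes):

```python
# HTTP/2 PING frame wire format (RFC 7540, section 6.7):
# 24-bit length | 8-bit type | 8-bit flags | 31-bit stream id | 8 opaque bytes
import struct

def ping_frame(opaque: bytes, ack: bool = False) -> bytes:
    assert len(opaque) == 8               # PING payload is always 8 bytes
    frame_type = 0x6                      # PING
    flags = 0x1 if ack else 0x0           # ACK flag set on replies
    stream_id = 0                         # PING lives on stream 0
    header = struct.pack(">I", len(opaque))[1:]   # low 3 bytes = length
    header += struct.pack(">BBI", frame_type, flags, stream_id)
    return header + opaque

frame = ping_frame(b"\x00" * 8)
print(len(frame))  # 9-byte header + 8-byte payload = 17 bytes
```

Compare that with a bare TCP keep-alive probe, which the kernel sends for free and which carries no payload at all.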

Multiplexing

A wonderful and under-used feature of HTTP/1.1 is pipelining. You can "stream" any number of requests through your HTTP/1.1 connection without waiting for one request to complete before sending the next. This solves latency problems, allows servers to process requests concurrently even if you use only a single connection, and it's standard in HTTP/1.1.
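
To show how simple the idea is, here is a sketch of pipelining in Python. The host name and paths are placeholders; the point is that several requests are serialized back-to-back into one buffer, written once, with responses then read back in order on the same connection:

```python
# HTTP/1.1 pipelining: several requests written back-to-back on one
# connection, without waiting for the responses in between.

def pipelined_gets(host: str, paths) -> bytes:
    out = b""
    for path in paths:
        out += ("GET %s HTTP/1.1\r\n"
                "Host: %s\r\n"
                "\r\n" % (path, host)).encode("ascii")
    return out

buf = pipelined_gets("example.com", ["/a", "/b", "/c"])
# A single sock.sendall(buf) would pipeline all three requests; the
# server must answer them in order on the same connection.
print(buf.count(b"GET "))  # three request lines in one buffer
```

That's the whole trick: one connection, one write, one round-trip's worth of latency amortized over all the requests.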

People don't get this. And I don't get why. It's super simple. Even big-name applications sometimes use multi-part MIME documents to wrap multiple requests together so they can send them in a single POST - "to solve latency problems" (I kid you not).

Anyway... Some servers apparently do not support pipelining correctly. This is hard for me to believe (having implemented an HTTP/1.1 server that supports it just fine - and that was not hard), but at least that's the word on the street.

For this reason, and apparently for this reason alone, all popular browsers disable HTTP/1.1 pipelining. Yes I know, this is hard to believe but I promise you I'm not making this up.

The solution to this, you ask? If HTTP/1.1 pipelining is mis-implemented in some servers, you might think someone would push to get those servers fixed. But no... it gets better, and as you'll see, I couldn't make this stuff up if I tried:

HTTP/2 … allows interleaving of request and response messages on the same connection … It also allows prioritization of requests

So... in other words: basic HTTP/1.1 pipelining is too difficult to implement correctly; therefore we will now implement a considerably more complex scheme instead. Really?

In conclusion...

I had honestly thought that HTTP/2 was so obviously ridiculous, and so difficult to implement reasonably well, that no one would ever actually employ it in production applications.

It appears that we may not be this lucky. At least Apple is requiring HTTP/2 for their APNS API even though their documentation needs to warn developers about how to work around obvious deficiencies in their implementation (the HPACK restrictions).

Since sanity or even just good taste is obviously not going to save us here, I am left hoping that plain simple laziness will keep a mass migration to the dangerous and broken HTTP/2 from ever happening.

Should we start to see HTTP/2 adoption, I can predict a couple of things:

Actually, these are not even predictions. They are obvious facts that anyone can see. Still, I'm going to pass them as predictions and claim clairvoyance.