
emma wrote

ok so like, i've made things for the web for a very long time, including at a time before http/2 and spdy, and http/1.1 has a bunch of very annoying limitations that http/2 solved. i've also written my own http/1.1 and fastcgi servers, to give you an idea of my level of experience. while we can all agree that http/2 is a shitty protocol, and should not have been Like That, it did solve some real problems.

The big one is the lack of multiplexing. your html, stylesheets, scripts, images, fonts, and other assets get loaded one after the other with http/1.1, and the burden was placed on the developer to figure out the bottlenecks and speed up page loading by spreading assets across separate hosts. We had entire services you'd push your site through just to spot these bottlenecks, and we spent a lot of effort trying to fix them. Abominable ideas like shared CDNs for javascript libraries and server-side "compilation" of css and javascript largely stem from trying to work around the lack of multiplexing. Now we can simply serve all of our assets from the same host without thinking too much about it.

FastCGI (a pseudo-http protocol for application backends) is a binary protocol and has had multiplexing since it was introduced in the 90s. It is reasonably simple to implement (and I much prefer working with binary <data size> <data> protocols to http/1.x's plain-text framing), and http/2 really ought to have just been a version of it.
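
to make the <data size> <data> point concrete, here's a rough sketch of reading one FastCGI record off a socket in python. the 8-byte header layout (version, type, request id, content length, padding length, reserved) is from the FastCGI spec; the function names and error handling are just mine.

```python
import struct

FCGI_HEADER_LEN = 8  # version, type, request id, content length, padding, reserved

def read_exact(sock, n):
    """Read exactly n bytes from the socket or fail loudly."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-record")
        buf += chunk
    return buf

def read_record(sock):
    """Return (record type, request id, content) for the next FastCGI record."""
    header = read_exact(sock, FCGI_HEADER_LEN)
    version, rtype, request_id, content_len, padding_len = struct.unpack("!BBHHBx", header)
    content = read_exact(sock, content_len)
    read_exact(sock, padding_len)  # padding is discarded
    return rtype, request_id, content
```

the request id in every record is the multiplexing: records belonging to different requests can be interleaved on the same connection, which is exactly what http/1.1 never got.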

While it's true that pipelining can improve performance without the need for http/2, it was always, fundamentally, the wrong solution. It doesn't help that http/1.1's rules for when requests may be pipelined are surprisingly complex, so we ended up with a bunch of buggy implementations, which is why pipelining gets disabled in anything new.
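
for anyone who hasn't seen it on the wire, pipelining is literally just writing the next request before you've read the previous response. a rough sketch (example.com is a stand-in host):

```python
import socket

# two idempotent GETs written back to back before reading anything.
# responses must come back in request order, so a slow /style.css stalls
# /app.js behind it: the head-of-line blocking that h2 streams avoid.
requests = (
    b"GET /style.css HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"\r\n"
    b"GET /app.js HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(requests)
    # a real client has to parse content-length / chunked framing to find the
    # boundary between the two responses; this just dumps bytes until the
    # server closes the connection (it will, because of Connection: close).
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        print(chunk.decode("latin-1"), end="")
```

and the complexity i'm alluding to: it's only safe after idempotent requests, the server may close the connection after any response, and intermediaries may not support it at all.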

Mandatory keepalive when you don't send a Connection header?

As you've already discovered, keepalive is actually useful, so I won't go too deep into that. The opt-out mechanisms are very simple (make an HTTP/1.0 request or send Connection: close), and the server isn't required to keep the connection alive anyway, so I don't think this is a big deal.
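
in wire terms the 1.1 opt-out is a single header (example.com is a placeholder again):

```
GET / HTTP/1.1
Host: example.com
Connection: close

```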

Virtual hosts? If the spec writers had known how their little hack would ultimately spell doom for IPv6 quickly replacing IPv4 for everyone, they would've had second thoughts about it.

I don't think virtual hosts are the reason for IPv6's slow adoption. We have, like, one-year-old companies pretending they have technical debt from before IPv6's introduction. If virtual hosts didn't exist, I reckon we'd just see as much stuff shoved onto the same host as possible, and more extensive use of the Path attribute on cookies to achieve the same separation we use virtual hosts for in this reality.
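
for context, the "little hack" in question is just keying everything off the Host header. roughly (hostnames invented):

```python
# name-based virtual hosting in one lookup: same socket, same IP address,
# the Host header decides which site you get. http/1.1 made Host mandatory
# precisely so this can work.
SITES = {
    "blog.example.com": "/srv/blog",
    "forum.example.com": "/srv/forum",
}

def docroot_for(headers: dict) -> str:
    host = headers.get("host", "").split(":")[0].lower()
    return SITES.get(host, "/srv/default")
```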

Chunked transfer encoding? Ummmmmm, FTP? (Tbh I haven't really familiarized myself with this part lol)

This exists because some http responses start being sent before their total length is known, so a content-length header can't be included up front. It wouldn't be necessary if one connection handled a single request, though, since the server could then just close the connection to mark the end of the body (which is what http/1.0 did).
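
the framing itself is tiny. a sketch of the sending side (the function is mine, the wire format isn't):

```python
def chunked(body_parts):
    """Frame an iterable of byte strings as an http/1.1 chunked body."""
    for part in body_parts:
        if not part:
            continue  # an empty chunk would signal end-of-body too early
        size = f"{len(part):X}".encode("ascii")  # chunk size in hex
        yield size + b"\r\n" + part + b"\r\n"
    yield b"0\r\n\r\n"  # zero-length chunk terminates the body (no trailers)

# b"".join(chunked([b"hello ", b"world"])) == b"6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n"
```

the zero-length chunk, not a connection close, marks the end of the body, which is what lets keepalive and unknown-length responses coexist.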


emma wrote

i will put down the pitchfork, for now

to reiterate what i said on the worst chat app in existence, you gotta do the TRUSTED_PROXIES=172.16.0.0/12 thing. this will make postmill accept the x-forwarded-proto: https header that i'm pretty sure caddy sends to the backend. there is no need to edit the templates, as someone else suggested.
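
in case it saves someone a search, that's just one line of environment config (a minimal sketch, assuming an env-file setup; the /12 presumably covers whatever private range docker hands out to its networks):

```
# wherever postmill's environment gets set (.env, compose file, etc.)
TRUSTED_PROXIES=172.16.0.0/12
```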
