emma
emma wrote
Reply to Your least favorite desserts GO! by Moonside
turkish "delight"
emma wrote
Reply to comment by nitori in Messing with `network.http.referer.spoofSource` in your browser's about:config will impact (un)pinning of posts in Postmill by nitori
i don't think that's desirable. this is one esoteric browser setting i'm not willing to make concessions for.
emma wrote
just like me
emma wrote
Reply to Crouton Art by astaguru
have you considered a contemporary crouton?
emma wrote (edited )
Reply to comment by nitori in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
but rereading the spec, you're supposed to send a 505 in the representation used by the major version requested by the client. GET / HTTP/2.0 is parsed with http/1 semantics, so i think it makes sense to give any >= 2.x version the HTTP/1.1 505 treatment.
An HTTP/2 request can be sent without negotiation; this is how h2c (HTTP/2 over cleartext) reliably works for me (for some reason I couldn't get Upgrade from http/1.1 to h2c working). It's called "prior knowledge", and curl supports this.
yeah, but as the name implies, you somehow know in advance that the server's gonna accept HTTP/2 if you send those. i suppose 505 here would make sense, if the HTTP/2 support was ever removed. as i understand it, this mode is never going to happen under normal browsing, though.
No, 505 wouldn't be useful because an HTTP/2 request to an HTTP/1-only server would only result in the client just closing the connection itself. You can see this by using nghttp or curl --http2-prior-knowledge against a server that only supports HTTP/1.
An HTTP/2-only client (which those two commands earlier are) would not bother to process an HTTP/1 response (if it even gets one) whether that'd be a 505 or 200.
i meant a hypothetical http/2 that's more http/1-like, not the actual http/2 that came into existence which made it very hard to accidentally use the wrong protocol.
anyway, the solution to your woes is apparently to send an error packet or whatever:
HTTP_1_1_REQUIRED (0x0d):
The endpoint requires that HTTP/1.1 be used instead of HTTP/2.
it sounds like it does what you want, but i have no idea if this applies on the stream or the connection level or what.
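for illustration, here's roughly what sending that error would look like on the wire: a RST_STREAM frame carrying the code (that'd be the stream-level variant; the same code can also ride in a GOAWAY). a minimal python sketch, with the frame layout taken from RFC 7540:

```python
import struct

HTTP_1_1_REQUIRED = 0x0D  # error code from RFC 7540, section 7

def rst_stream_frame(stream_id: int, error_code: int) -> bytes:
    """Build an HTTP/2 RST_STREAM frame (RFC 7540, section 6.4):
    a 9-byte header (24-bit payload length, type 0x3, flags 0x0,
    31-bit stream id) followed by a 32-bit error code payload."""
    payload = struct.pack("!I", error_code)
    length24 = struct.pack("!I", len(payload))[1:]  # keep low 3 bytes
    header = (length24
              + bytes([0x03, 0x00])  # type = RST_STREAM, no flags
              + struct.pack("!I", stream_id & 0x7FFFFFFF))
    return header + payload

frame = rst_stream_frame(1, HTTP_1_1_REQUIRED)
```

13 bytes total, and the client is expected to retry that request over HTTP/1.1.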
emma wrote
Reply to comment by nitori in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
I mean the same applies if an HTTP/1 response is a 505, right..?
no, since http/1 requests are sent preemptively without knowing if the server accepts them. http/2+ requests are sent after negotiation, at which point it's established they are accepted. this obsoletes the need for a 505.
505 would have been useful for a future where http/2 requests might be preemptively sent to http/1-only servers. if i send GET / HTTP/2.0 (or any non-1.x version) to nginx, it indeed responds with that status code. but as things turned out, the negotiation mechanism in http/2+ just sidesteps this problem altogether, so 505 ends up being little more than a relic from a time when people didn't know what the future of http held.
since you very much have to opt in to http/2+, incompatibilities with it can be resolved by just not enabling it, and the use cases where one would want partial http/2 support on any given host are extremely contrived. i would argue it's a good thing that support for it is declared at the connection level: it's one less special case for clients to deal with.
emma wrote
hey, that's rude. it's not marc andreessen's fault he looks like that.
emma wrote
Reply to Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
if a server is able to serve an http response as http/2 and/or http/3, then by definition it supports http/2 and/or http/3, despite the status code's claim of the contrary.
emma wrote
Reply to comment by twovests in Why is the www.jstpst.net only a 302 to jstpst.net by nitori
but sadly, there are no funny ones.
i beg to differ. 320 blaze it.
emma wrote
it's engagement bait so people post about it.
emma wrote
Reply to comment by hollyhoppet in 絶対4℃、C74のCD「COLORS」 by nitori
that's when the good videos were uploaded
Submitted by emma in killallgames
emma wrote
Reply to Apparently the project lead of Matrix intentionally shipped side-channel vulnerabilities in their crypto library by nitori
'joke's on you, i knew about the vulnerability all along'
nice one, matt
emma wrote
Reply to comment by 500poundsofnothing in what new events should they add to the summer olympics by hollyhoppet
sounds like a disgusting video. please share the link so i can learn to avoid it.
emma wrote
Reply to "If you look closely, those aren't angle brackets, they're characters from the Canadian Aboriginal Syllabics block" by nitori
i would simply have written my program in a language that had all the features i wanted.
emma wrote (edited )
Reply to comment by nitori in We should use "Cache-Control: immutable" for every file served that we're sure will never change by nitori
It certainly could be added to the nginx that Postmill includes.
Edit to add: Cloudflare doesn't add that stuff, the caching's been tweaked by me over years. I plan to document it in the Postmill wiki sometime in the future.
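a hypothetical nginx snippet of the sort of thing i mean (the location pattern and max-age here are made up for illustration, not Postmill's actual config):

```nginx
# cache fingerprinted assets forever; the hash in the filename
# changes whenever the content does, so revalidation is pointless
location ~* \.[0-9a-f]{8,}\.(css|js|png|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```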
emma wrote
i think you should go. it's not every day you have the opportunity to hang out with fans of the encyclopedia.
emma wrote
Reply to comment by twovests in donkey kong 64 by twovests
i've changed my mind, now this is the correct one
emma wrote
Reply to Update Day - Nirvana The Band The Show by neku
ninja gayden
emma wrote
Reply to comment by nitori in You know you've gone deep into being a reactionary when you find yourself asking why they introduced keepalive to HTTP by nitori
I'm not much of a fan of ditching plain text for binary, since it makes debugging more complex
I don't think this always holds true. There was one time at work where an outgoing http request was failing in a strange way, and it took us hours to discover that the environment variable holding the URL in production contained a trailing newline, which the client library didn't strip. So this resulted in the following request:
POST /some/shit
HTTP/1.1
X-Some-Header: etc
some payload
If the length of the URL was known ahead of time, as would be typical with a binary protocol, the server would have known the newline was part of it, and handled it accordingly. It wouldn't be friendly as a plain text protocol, but it would make parsing the request very unambiguous and robust.
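a tiny sketch of the kind of framing I mean (a made-up encoding, just to illustrate; not any real protocol):

```python
import struct

def encode_target(path: str) -> bytes:
    """Length-prefix the request target the way a binary protocol
    might: a 16-bit big-endian byte count, then the bytes themselves."""
    data = path.encode()
    return struct.pack("!H", len(data)) + data

def decode_target(buf: bytes) -> str:
    """Read back exactly the prefixed number of bytes -- no
    line-terminator guessing involved."""
    (n,) = struct.unpack("!H", buf[:2])
    return buf[2:2 + n].decode()
```

With this, the stray newline stays inside the target and the bug shows up immediately as a weird-looking URL, instead of silently splitting the request line.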
On the other hand, we see things like http/2 support in curl on Debian 12 being just broken, and the maintainer being too scared to merge the fixes from upstream due to http/2's complexity. So this cuts both ways, I suppose.
Oh, you can write a server that doesn't implement keepalive (while doing everything else 1.1) and still be 1.1-compliant? Well that's neat I suppose!
Yeah, you can just ignore the client's wish for keep-alive and send Connection: close, according to RFC 7230. I imagine this has to be terrible if the client attempts pipelining.
This might be a cursed opinion but I do actually want all websites to be root/path-agnostic. So if you wanna host Postmill for example but you already have a separate service running in port 80/443, and can't do it in a separate domain (which would require another host in this reality) or port which would have its own root, then I should be able to put it in like /postmill instead.
I believe Postmill supports this, but I haven't tested it. I think a lot of devs just ignore the possibility that you'd want to host something at a subpath, unfortunately.
emma wrote
Reply to You know you've gone deep into being a reactionary when you find yourself asking why they introduced keepalive to HTTP by nitori
ok so like, i've made things for the web for a very long time, including at a time before http/2 and spdy, and http/1.1 has a bunch of very annoying limitations that http/2 solved. i've also written my own http/1.1 and fastcgi servers, to give you an idea of my level of experience. while we can all agree that http/2 is a shitty protocol, and should not have been Like That, it did solve some real problems.
The big one is the lack of multiplexing. your html and stylesheets and scripts and images and fonts and other assets get loaded one after the other with http/1.1, and the burden was placed on the developer to figure out the bottlenecks and speed up page loading by placing the assets on separate hosts. We had entire services dedicated to pushing your site through them to try and spot these bottlenecks, and spent a lot of effort trying to fix them. Abominable ideas like shared CDNs for javascript libraries and server-side "compilation" of css and javascript largely stem from trying to work around the lack of multiplexing. Now we can simply serve any and all assets from the same host without thinking too much about it.
FastCGI (a pseudo-http protocol for application backends) is a binary protocol and has had multiplexing since it was introduced in the 90s. It is reasonably simple to implement (and I much prefer working with binary <data size> <data> protocols to http/1.x's plain text protocol), and http/2 really ought to have just been a version of it.
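To illustrate the shape: every FastCGI record starts with a fixed 8-byte header, and the 16-bit request id in it is what gives the protocol its multiplexing. A sketch of parsing it (layout per the FastCGI spec):

```python
import struct

def parse_fcgi_header(buf: bytes):
    """Parse the fixed 8-byte FastCGI record header: version, record
    type, 16-bit request id (this is the multiplexing key), 16-bit
    content length, and padding length. Reserved byte is dropped."""
    version, rtype, request_id, content_len, padding, _reserved = \
        struct.unpack("!BBHHBB", buf[:8])
    return version, rtype, request_id, content_len, padding

# an FCGI_BEGIN_REQUEST (type 1) record for request id 5
# carrying an 8-byte body
header = parse_fcgi_header(bytes([1, 1, 0, 5, 0, 8, 0, 0]))
```

Once you've read the header, you know exactly how many bytes to consume for the body; no scanning for delimiters, ever.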
While it's true that pipelining can improve performance without the need for http/2, it was always, fundamentally, the wrong solution. On top of that, it doesn't help that http/1.1's rules for when pipelining requests is allowed are surprisingly complex, and we ended up with a bunch of buggy implementations that led to pipelining being disabled in new stuff.
Mandatory keepalive when you don't send a Connection header?
As you've already discovered, keepalive is actually useful, so I won't go too deep into that. The opt-out mechanisms are very simple (request HTTP/1.0 or send Connection: close), and the server isn't required to support these, so I don't think this is a big deal.
Virtual hosts? If the spec writers just knew how their little hack would ultimately spell doom for IPv6 quickly replacing IPv4 for everyone they would've gotten second thoughts on it.
I don't think virtual hosts are the reason for IPv6's slow adoption. We have like 1 year old companies pretending they have technical debt from before IPv6's introduction. If virtual hosts didn't exist, I reckon we'd just see as much stuff shoved onto the same host as possible, and more extensive use of the path parameter in cookies to achieve the same stuff we have separate virtual hosts for in this reality.
Chunked transfer encoding? Ummmmmm, FTP? (Tbh I haven't really familiarized myself in this part lol)
This exists because some http responses are produced before there's a known content length, thus the content-length header cannot be sent. It wouldn't be necessary if one connection handled a single request, though.
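For reference, the coding itself is trivial: each piece gets a hex length prefix as it's produced, and a zero-length chunk ends the body. A minimal encoder sketch (per RFC 7230, section 4.1):

```python
def chunked(parts):
    """Encode an iterable of byte strings as HTTP/1.1 chunked
    transfer coding: hex length, CRLF, data, CRLF per chunk,
    terminated by a zero-length chunk."""
    out = b""
    for part in parts:
        out += f"{len(part):x}".encode() + b"\r\n" + part + b"\r\n"
    return out + b"0\r\n\r\n"

body = chunked([b"hello, ", b"world"])
```

The server can emit each chunk the moment it's generated, which is exactly the "length unknown up front" case.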
emma wrote
Reply to comment by Dogmantra in does jstpst have any meme numbers? by twovests
wow, that's like two funny numbers in one