Recent comments in /f/technology

nitori OP wrote (edited )

Excellent write-up as always emma :D

I'm not much of a fan of ditching plain text for binary, since it makes debugging more complex (compared to 1.1 where you can just telnet lol), though I do realize it's necessary if multiplexing is going to be a thing. Idk, is all of this added complexity really worth it just to shave off roughly the same latency that pipelining would? In an ideal world where pipelining implementations in servers, clients, and proxies were perfect, and it alone could help the websites that really need it even after every other optimization has been applied, I don't think so. But we don't live in that world, and frustratingly I suppose multiplexing is the way to go...
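(For anyone curious, this is roughly what that telnet-style poking looks like from a Python socket; a minimal sketch, with example.com standing in for whatever host you're debugging:)

# Minimal sketch: the same plain-text request you'd type into telnet, sent over
# a raw Python socket. example.com is just a stand-in host.
import socket

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.1\r\n"
              b"Host: example.com\r\n"
              b"Connection: close\r\n"
              b"\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n\r\n", 1)[0].decode("latin-1"))  # just the response headers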

Idk I just wish that for every performance improvement we make, I can just be excited and not think about how webdevs are just going to ruin everything and add so much shit on top of the shit that became a non-factor due to those improvements that the improvements become meaningless again. Instead of "hmm how do we make the web go back to square one >:)" we just go "wow this is amazing we've reached peak I think :D"

Anyway I do wholeheartedly agree that pipelining is fundamentally wrong (even though it does work if it works), it just looks like a silly hack lol.

the server isn't required to support these

Oh, you can write a server that doesn't implement keepalive (while doing everything else 1.1) and still be 1.1-compliant? Well that's neat I suppose!

If virtual hosts didn't exist, I reckon we'd just see as much stuff shoved onto the same host as possible, and more extensive use of the path parameter in cookies to achieve the same stuff we have separate virtual hosts for in this reality.

This might be a cursed opinion but I do actually want all websites to be root/path-agnostic. So if you wanna host Postmill for example but you already have a separate service running on port 80/443, and can't do it on a separate domain (which would require another host in this reality) or on another port which would have its own root, then I should be able to put it under something like /postmill instead.

Like think about it, CDNs like Cloudflare centralizing every damn website like we have right now just wouldn't be feasible without IPv6. Anycast would be out of the question and each website under the CDN would require its own IP. The only way for this to go wrong is if every ISP just sold all of their address space to the CDNs and NATed the hell out of IPv4 so hard that our own CG-NATs would sweat in fear of what we'd created. But that's so ridiculously pessimistic imo that I don't think it will just happen. Well, hopefully.. :P

This exists because some http responses are produced before there's a known content length, thus the content-length header cannot be sent. It wouldn't be necessary if one connection handled a single request, though.

Oh yeah this is actually good lol, silly me :P

Looking into it more, it seems like in HTTP/1.0, when there's no Content-Length, the client just assumes the transfer completed successfully once the connection is closed. Which isn't good, because we can't actually tell whether the transfer finished or just got interrupted. 1.1's chonk stuff seems to be for that :D (EDIT: Actually maybe not, but still neat regardless)
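(Rough idea of the framing, if anyone wants to poke at it: each chunk is "<hex size>\r\n<data>\r\n", and a final zero-size chunk explicitly marks the end, which a close-delimited 1.0 body can't do. Toy sketch only:)

# Toy parser for the chunked framing: "<hex size>\r\n<data>\r\n" repeated, ending
# with a zero-size chunk that says the body really is complete. A truncated
# stream fails to parse instead of silently looking "done".
def parse_chunked(raw: bytes) -> bytes:
    body = b""
    while True:
        size_line, raw = raw.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:
            return body              # terminating chunk reached: clean end of transfer
        body += raw[:size]
        raw = raw[size + 2:]         # skip the chunk data plus its trailing \r\n

wire = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
assert parse_chunked(wire) == b"Wikipedia"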

5

emma wrote

ok so like, i've made things for the web for a very long time, including at a time before http/2 and spdy, and http/1.1 has a bunch of very annoying limitations that http/2 solved. i've also written my own http/1.1 and fastcgi servers, to give you an idea of my level of experience. while we can all agree that http/2 is a shitty protocol, and should not have been Like That, it did solve some real problems.

The big one is lack of multiplexing. Your html and stylesheets and scripts and images and fonts and other assets get loaded one after the other with http/1.1, and the burden was placed on the developer to figure out the bottlenecks and speed up page loading by placing the assets on separate hosts. We had entire services you'd push your site through just to spot these bottlenecks, and we spent a lot of effort trying to fix them. Abominable ideas like shared CDNs for javascript libraries and server-side "compilation" of css and javascript largely stem from trying to work around the lack of multiplexing. Now we can simply serve all our assets from the same host without thinking too much about it.
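(To make the serialization concrete, here's a hedged little sketch with Python's http.client; example.com and the asset paths are just placeholders. On one http/1.1 connection, each response has to be drained before the next request can go out:)

# One http/1.1 connection, three assets: each response must be read in full
# before the next request can be sent, so everything loads strictly in sequence.
# example.com and the paths are placeholders.
import http.client

conn = http.client.HTTPSConnection("example.com")
for path in ("/style.css", "/app.js", "/logo.png"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                      # drain this response before the next request
    print(path, resp.status, resp.getheader("content-type"))
conn.close()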

FastCGI (a pseudo-http protocol for application backends) is a binary protocol and has had multiplexing since it was introduced in the 90s. It is reasonably simple to implement (and I much prefer working with binary <data size> <data> protocols to http/1.x's plain text protocol), and http/2 really ought to have just been a version of it.
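(A rough sketch of that framing, going from memory of the spec's 8-byte record header; the request id is what carries the multiplexing:)

# FastCGI-style record framing, sketched from the spec's 8-byte header:
# version, type, request id, content length, padding length (plus a reserved byte).
# The request id is what lets multiple requests share one connection.
import struct

FCGI_VERSION_1 = 1
FCGI_STDOUT = 6   # record type carrying response body data

def pack_record(rec_type: int, request_id: int, content: bytes) -> bytes:
    padding = b"\x00" * ((8 - len(content) % 8) % 8)   # pad content to an 8-byte boundary
    header = struct.pack("!BBHHBx", FCGI_VERSION_1, rec_type,
                         request_id, len(content), len(padding))
    return header + content + padding

def unpack_header(data: bytes):
    version, rec_type, request_id, content_len, padding_len = struct.unpack("!BBHHBx", data[:8])
    return rec_type, request_id, content_len, padding_len

rec = pack_record(FCGI_STDOUT, request_id=3, content=b"hello")
print(unpack_header(rec))   # -> (6, 3, 5, 3)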

While it's true that pipelining can improve performance without the need for http/2, it was always, fundamentally, the wrong solution. On top of that, it doesn't help that http/1.1's rules for when pipelining requests is allowed are surprisingly complex, and we ended up with a bunch of buggy implementations that led to pipelining being disabled in new stuff.
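(For reference, pipelining boils down to something like this sketch: shove several requests down one connection before reading anything back, then trust the server to answer them strictly in order. example.com is a placeholder:)

# Pipelining in its rawest form: send several requests up front, then read the
# responses back in the same order and carve them apart yourself.
# example.com is a placeholder; plenty of servers and proxies got this subtly wrong.
import socket

def req(path: str, close: bool = False) -> bytes:
    extra = "Connection: close\r\n" if close else ""
    return f"GET {path} HTTP/1.1\r\nHost: example.com\r\n{extra}\r\n".encode()

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(req("/a.css") + req("/b.js") + req("/c.png", close=True))  # all three at once
    data = b""
    while chunk := s.recv(4096):
        data += chunk

# Crude check: count status lines to see that multiple responses came back on
# the one connection, in request order (bodies could skew the count, it's a sketch).
print(data.decode("latin-1", "replace").count("HTTP/1.1 "), "status lines received")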

Mandatory keepalive when you don't send a Connection header?

As you've already discovered, keepalive is actually useful, so I won't go too deep into that. The opt-out mechanisms are very simple (request HTTP/1.0 or send Connection: close), and the server isn't required to support these, so I don't think this is a big deal.
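Concretely, the two opt-outs look like this on the wire (example.com is just a placeholder host). Speak 1.0, where connections close after the response by default:

GET / HTTP/1.0
Host: example.com

Or stay on 1.1 and ask for the close explicitly:

GET / HTTP/1.1
Host: example.com
Connection: close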

Virtual hosts? If the spec writers had known how their little hack would ultimately spell doom for IPv6 quickly replacing IPv4 for everyone, they would've had second thoughts about it.

I don't think virtual hosts are the reason for IPv6's slow adoption. We have like 1-year-old companies pretending they have technical debt from before IPv6's introduction. If virtual hosts didn't exist, I reckon we'd just see as much stuff shoved onto the same host as possible, and more extensive use of the path parameter in cookies to achieve the same stuff we have separate virtual hosts for in this reality.
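(Something like this is the path scoping I mean; the app names are hypothetical, it's just a sketch of how cookies would be kept apart per path on one shared host:)

# Hypothetical: two apps sharing one host, each keeping its session cookie
# scoped to its own path instead of its own virtual host.
from http.cookies import SimpleCookie

cookies = SimpleCookie()
cookies["forum_session"] = "abc123"
cookies["forum_session"]["path"] = "/forum"     # only sent back for /forum/... requests
cookies["shop_session"] = "xyz789"
cookies["shop_session"]["path"] = "/shop"       # only sent back for /shop/... requests

print(cookies.output())
# Set-Cookie: forum_session=abc123; Path=/forum
# Set-Cookie: shop_session=xyz789; Path=/shop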

Chunked transfer encoding? Ummmmmm, FTP? (Tbh I haven't really familiarized myself with this part lol)

This exists because some http responses are produced before there's a known content length, thus the content-length header cannot be sent. It wouldn't be necessary if one connection handled a single request, though.

6

nitori OP wrote

I think TCP Fast Open should be the way to go since it's more elegant imo than keeping a connection open, though unfortunately ossification means it will take a very long while to get all TCP-based services and clients to support it... There are also privacy issues with its cookies.
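(Very rough sketch of what the client side looks like on Linux, assuming the kernel has TFO enabled and Python exposes MSG_FASTOPEN; example.com is a placeholder. The data rides along with the SYN using the TFO cookie cached from an earlier connection, and that cookie being a reusable token is exactly the privacy worry:)

# Client-side TCP Fast Open sketch (Linux only, assumes net.ipv4.tcp_fastopen is
# enabled and socket.MSG_FASTOPEN exists). The payload goes out with the SYN,
# using the TFO cookie cached from a previous connection to this host.
import socket

payload = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
addr = (socket.gethostbyname("example.com"), 80)    # placeholder host

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.sendto(payload, socket.MSG_FASTOPEN, addr)        # connect + send in one step
print(s.recv(4096).decode("latin-1", "replace").splitlines()[0])
s.close()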

As for SSL, if we just had tcpcrypt or any other opportunistic encryption we wouldn't need Let's Encrypt or any free TLS lol (I feel like TLS has been abused too much, it should've been more about identity verification than encryption). I'm actually hopeful for Yggdrasil since it's an IPv6 mesh network where end-to-end encryption between IPs is the norm and each IP is a public key

4

flabberghaster wrote

IDK I think there is a use for keeping the same stream open if you're a big website serving a lot of clients tbh. Each TCP handshake takes three packets minimum (unless you use TCP fast open, which is its own whole thing), and then if you want SSL on top of that there's even more latency, especially on slow connections, plus the computation, which is small per request but adds up when you're a big site serving a lot of people. Even if you're not jamming your page full of ten trillion google ads, it adds up.

Using the same connection again if you expect the client to make another one pretty soon makes a lot of sense.
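(Back-of-the-envelope, something like this shows the difference; example.com is a stand-in and the actual numbers depend entirely on your latency to the server:)

# Compare n requests over fresh connections (TCP + TLS handshake every time)
# against n requests reusing one keepalive connection. example.com is a placeholder.
import http.client, time

def fresh_connections(n=5):
    t0 = time.perf_counter()
    for _ in range(n):
        c = http.client.HTTPSConnection("example.com")   # new handshake each iteration
        c.request("GET", "/")
        c.getresponse().read()
        c.close()
    return time.perf_counter() - t0

def reused_connection(n=5):
    t0 = time.perf_counter()
    c = http.client.HTTPSConnection("example.com")       # one handshake for all n requests
    for _ in range(n):
        c.request("GET", "/")
        c.getresponse().read()
    c.close()
    return time.perf_counter() - t0

print(f"fresh: {fresh_connections():.2f}s  reused: {reused_connection():.2f}s")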

I don't do web dev tho so what do I know.

7

nitori OP wrote (edited )

also why u no support HTTP/1.0 (which also means no HTTP/0.9) :(

When trying to use http/1.0 and http/0.9 ALPN:

$ openssl s_client -connect jstpst.net:443 -servername jstpst.net -alpn http/1.0
CONNECTED(00000003)
4027744A687F0000:error:0A000460:SSL routines:ssl3_read_bytes:reason(1120):../ssl/record/rec_layer_s3.c:1584:SSL alert number 120
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 327 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

When I fake ALPN to http/1.1:

$ openssl s_client -connect jstpst.net:443 -servername jstpst.net -alpn http/1.1
CONNECTED(00000003)
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = E6
verify return:1
depth=0 CN = jstpst.net
verify return:1
---
[ssl certs and blah blah blah...]
---
read R BLOCK
GET / HTTP/1.0

HTTP/1.0 200 OK
Alt-Svc: h3=":443"; ma=2592000
Server: Caddy
Date: Tue, 23 Jul 2024 07:38:39 GMT
Content-Length: 0

closed
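
(Same experiment scripted with Python's ssl module, in case anyone wants to reproduce it; openssl's "SSL alert number 120" is the TLS no_application_protocol alert:)

# Offer a single ALPN protocol and see whether the server accepts it or aborts
# the handshake with the no_application_protocol alert.
import socket, ssl

def try_alpn(proto: str, host: str = "jstpst.net") -> str:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols([proto])
    try:
        with socket.create_connection((host, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return f"{proto!r} -> negotiated {tls.selected_alpn_protocol()!r}"
    except ssl.SSLError as exc:
        return f"{proto!r} -> handshake rejected ({exc.reason})"

for p in ("http/1.0", "http/1.1"):
    print(try_alpn(p))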
2

twovests wrote

(CW: More explicit references to domestic abuse)

I think security folks tend to think of security against a genius hacker with endless resources, which is a good mindset to have when you're building software and cryptography. But this mindset also makes a lot of security folks obstinately oblivious to reality.

I can't imagine what level of collective delusion the people at Microsoft must be under that they would advertise Windows Recall as a good feature. They must be aware of the blood that will be on their hands, right?

It feels almost like that's the point? "Windows with CoPilot + will help you keep tabs on you and yours, every step of the way."

3

hollyhoppet OP wrote (edited )

the company is extremely bullish right now on automation as a cost-saving measure, so unless it's something directly unethical i don't think i have much room to call it out. also yeah, we're not hosting our own models, it would be through chatgpt lol

best case i can say "i don't know if this will work very well" and do whatever they ask. best best case is i'm only tangentially doing something to enable another team's integration.

3

twovests wrote

Oh man :\

I'm assuming this isn't a niche case where integrating an LLM makes sense, right?

Perhaps you could push for a high standard of business justification and value before adding an LLM. You could also note the reputational risk of appearing to chase gimmicks at the expense of user experience. Maybe your app's demographic is one that would be alienated by adding LLM garbage?

The company I work for has a natural-language-processing-powered tool, and we've still not integrated new LLMs into it AFAIK. (To note, the only information I have about this is what's public knowledge.)

Either way, good luck!! If you have to do the LLM integration, I hope you can at least host your own models and make it known how poorly interpretable and predictable LLMs are.

3