Recent comments in /f/technology
cowloom wrote
Services like invidio.us and the ActivityPub sphere, which let you break through JS- and tracking-laden "web applications" with simple HTML and CSS, are a demonstration of a possible future of a web without the useless cruft. Much to the chagrin of its gatekeepers.
Invidious is just fantastic, and I am saddened that Google is trying to strangle it. I browse the web with JavaScript completely disabled, so Invidious and yt-dlp are the only way I am able to watch YouTube videos at all.
nitori OP wrote
Reply to comment by twovests in wrestling the web from corporate control requires making it boring again by nitori
The rapid, calendar-based release schedule really ruined it. New versions used to feel like real milestones...
twovests wrote
i remember firefox 3 being a huge celebrated event. we are now on firefox 130.
thank u for sharing this post
twovests OP wrote
Reply to comment by emma in Duolingo is violating Apple's copyright by having the hit regions for its keys in its Music lessons resize dynamically as you play. I am angry at Duolingo for AI reasons, how do I report this patent violation to Apple? by twovests
thank u so much
will be doing so. i think the steve jobs email must be monitored so this will be funny
emma wrote
Reply to Duolingo is violating Apple's copyright by having the hit regions for its keys in its Music lessons resize dynamically as you play. I am angry at Duolingo for AI reasons, how do I report this patent violation to Apple? by twovests
i believe, based on my legal experience (eight ace attorney games, and i've followed the lawsuits by this guy who cheats in donkey kong), that this is a patent issue, not a copyright one.
so, first, you need to know the patent being violated. i believe it to be this one: https://patents.google.com/patent/US20210349631A1/en
then you need to email steve jobs and tell him that this scummy outfit is misappropriating his invention.
cowloom wrote
Reply to The Cursed Computer Iceberg Meme by nitori
I'm proud to say I actually know a few of those
flabberghaster wrote
I think not everything needs to be HTTPS; like, I don't care if the NSA knows I'm reading web comics, generally speaking. But the push for everything to be HTTPS is kind of more about the non-technical users, who don't understand what should and shouldn't be.
You want them to be mistrustful of a non-HTTPS site that asks them for payment or login information, because it's marginally harder to set up a phishing site with a valid cert (or it was...) than it is to just use straight HTTP so the browser doesn't say "yo dude, this site's cert is a little fishy".
That, and there were cases of people getting their login credentials stolen at coffee shops because bad webmasters were not securing the things they needed to, and now most browsers won't even let that happen. So I think it is marginally better.
nitori OP wrote (edited )
Reply to comment by emma in Did "HTTPS Everywhere" really make the internet safer, secure, and faster? [Aa] by nitori
Oof yeah, https on localhost fucking sucks lol. And funny you mention that, since yesterday I did a python exercise in university where I basically made a very simple TLS server and a TLS client connecting to it, exchanging raw data. It's supposedly an example of a "VPN" for my "Information Assurance and Security 2" course, but I didn't see any VPN or IPsec shit in the sample code lol (professor still approved tho when I showed the code working). But it did need a self-signed cert on the server, and the client specifically trusting that cert in its `cafile=` for `ssl.create_default_context`, which the lecture didn't hint at at all, nor did the given sample code try to disable the certificate verification (just learned right now I could've set `CERT_NONE` in the ssl context to disable cert verification, but eh :P)
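Here's roughly what the whole exercise boils down to, as a minimal sketch: the host, port, and cert filenames are made up, and it assumes a self-signed `cert.pem`/`key.pem` pair (with `localhost` in its SAN) was generated beforehand, e.g. with `openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem`.

```python
import socket
import ssl
import threading
import time

HOST, PORT = "127.0.0.1", 8443  # hypothetical local endpoint

def serve_once():
    # Server side: present the self-signed cert to whoever connects.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
    with socket.create_server((HOST, PORT)) as listener:
        with server_ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, _addr = tls_listener.accept()  # TLS handshake happens here
            with conn:
                conn.sendall(conn.recv(1024))  # echo the raw data back

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.5)  # crude wait until the server is listening

# Client side: trust exactly that self-signed cert via cafile=.
client_ctx = ssl.create_default_context(cafile="cert.pem")
# ...or disable verification entirely (the CERT_NONE route):
#   client_ctx.check_hostname = False
#   client_ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT)) as raw:
    # server_hostname must match the cert's CN/SAN for verification to pass
    with client_ctx.wrap_socket(raw, server_hostname="localhost") as tls:
        tls.sendall(b"hello over TLS")
        print(tls.recv(1024))
```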
emma wrote
i've seen projects be deployed to production with `NODE_TLS_REJECT_UNAUTHORIZED=0`, thus disabling certificate verification for any tls connection made by the application, because now we need https during local development, which is a huge pain in the butt to set up.
nitori OP wrote
Reply to comment by anethum in WebP: The WebPage compression format by nitori
dear god
anethum wrote
Reply to WebP: The WebPage compression format by nitori
i actually lean towards liking webp so i'm going sickos yes reading this
emma wrote
Reply to comment by nitori in Messing with `network.http.referer.spoofSource` in your browser's about:config will impact (un)pinning of posts in Postmill by nitori
i don't think that's desirable. this is one esoteric browser setting i'm not willing to make concessions for.
nitori OP wrote
Reply to Messing with `network.http.referer.spoofSource` in your browser's about:config will impact (un)pinning of posts in Postmill by nitori
u/emma I wonder if we actually need to use the browser's Referer for the (un)pin function. Wouldn't it be better if the user always gets redirected to the forum's page anyway, so they can clearly see the effect of the pin?
nitori OP wrote
Reply to comment by twovests in HTTP 1.2 Released with Improved Support for Hierarchies and Text-Menu Interfaces by nitori
There's a Medium article I read, probably AI-written, stating that "HTTP/1.2" was made in 2009 lol
twovests wrote
2011
awwh
nitori OP wrote
> Both Basic and Digest access authentication are improved to provide a better native-looking browser-based experience than form-based authentication.
Oh how I wish we got Cookie-based authentication implemented straight in HTTP itself instead of having to use forms...
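For contrast, here's what the two styles look like from Python's `requests` (the URLs, endpoints, and credentials are made up): Basic auth travels natively in the `Authorization` header, so a browser can handle it with no HTML at all, while form-based auth needs an app-specific login form plus a session cookie.

```python
import requests

# Native HTTP authentication: credentials ride in the Authorization
# header; a browser can prompt for them without any HTML form.
r = requests.get("https://example.com/protected", auth=("alice", "hunter2"))

# Form-based authentication: POST credentials to an app-defined login
# endpoint, then carry the resulting session cookie on later requests.
s = requests.Session()
s.post("https://example.com/login",
       data={"username": "alice", "password": "hunter2"})
r = s.get("https://example.com/protected")  # session cookie sent automatically
```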
> The spec has been updated with a new set of accepted headers - and in a break with past tradition, any header not in the list of accepted headers is to be rejected by a compliant server.
Wait, that just breaks backwards compatibility with HTTP/1.1; how can this joke protocol be 1.2 lol
nitori OP wrote
Actually, perhaps we might not even need compression for the response headers, but some sort of ETag... There'd be like a `Headers-ETag` for the unique value and a `Headers-ETag-Names` (I'm not satisfied with this name but can't think of something better) for the list of redundant headers to not be repeated in subsequent requests
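To sketch what that hypothetical scheme might look like on the wire (every `Headers-ETag*` header here is made up, per the above):

```
HTTP/1.2 200 OK
Headers-ETag: "a1b2c3"
Headers-ETag-Names: Server, Cache-Control, Content-Security-Policy
Server: nginx
Cache-Control: max-age=3600
Content-Security-Policy: default-src 'self'
```

Then the client echoes the tag back, and the server can drop the listed headers from its next response, trusting the client to reuse its cached copies:

```
GET /next-page HTTP/1.2
Headers-ETag: "a1b2c3"

HTTP/1.2 200 OK
Headers-ETag: "a1b2c3"
```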
nitori OP wrote
Reply to comment by nitori in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
Anyway, while it's cool to know that the HTTP/2 connection error code exists, I have zero clue how to make nginx return it (or force a situation where it returns it) lol
https://github.com/search?q=repo%3Anginx%2Fnginx+HTTP_1_1_REQUIRED&type=code
nitori OP wrote (edited )
Reply to comment by emma in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
> as i understand it, this mode is never going to happen under normal browsing, though.
I don't think any of my scenarios are normal at all lol :P
> `HTTP_1_1_REQUIRED (0x0d)`
I definitely did not know about this until now, thanks! And searching online, it seems like curl does retry its request over HTTP/1.1 when it encounters this. Personally, I think it still would've made more sense for the HTTP/2 authors to extend 505 instead, especially since they kept the 1.1 response codes from 2xx-5xx (except 426) anyway, and you could explain to the user why you can't support HTTP/2 for the request in the 505's body... But glad to know there's an error code that can signal to the client to downgrade
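Something like this entirely hypothetical response from a far-future server that has dropped HTTP/2 (wording made up):

```
HTTP/1.1 505 HTTP Version Not Supported
Content-Type: text/plain

This server no longer speaks HTTP/2. Please retry with HTTP/5 or later.
```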
nomorepie wrote
We will have no choice but to find out I'm sure
emma wrote (edited )
Reply to comment by nitori in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
> but rereading the spec, you're supposed to send a 505 in the representation used by the major version requested by the client
`GET / HTTP/2.0` is parsed with http/1 semantics, so i think it makes sense to give any >= 2.x version the `HTTP/1.1 505` treatment.
> An HTTP/2 request can be sent without negotiation; this is how h2c (HTTP/2 over cleartext) reliably works for me (for some reason I couldn't get `Upgrade` from http/1.1 to h2c working). It's called "prior knowledge", and curl supports this.
yeah, but as the name implies, you somehow know in advance that the server's gonna accept HTTP/2 if you send those. i suppose 505 here would make sense, if the HTTP/2 support was ever removed. as i understand it, this mode is never going to happen under normal browsing, though.
> No, 505 wouldn't be useful because an HTTP/2 request to an HTTP/1-only server would only result in the client just closing the connection itself. You can see this by using `nghttp` or `curl --http2-prior-knowledge` against a server that only supports HTTP/1.
>
> An HTTP/2-only client (which those two commands earlier are) would not bother to process an HTTP/1 response (if it even gets one), whether that'd be a 505 or 200.
i meant a hypothetical http/2 that's more http/1-like, not the actual http/2 that came into existence which made it very hard to accidentally use the wrong protocol.
anyway, the solution to your woes is apparently to send an error packet or whatever:
> `HTTP_1_1_REQUIRED (0x0d)`:
> The endpoint requires that HTTP/1.1 be used instead of HTTP/2.
it sounds like it does what you want, but i have no idea if this applies on the stream or the connection level or what.
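for what it's worth, the rfc allows it at both levels: RST_STREAM carries a per-stream error code and GOAWAY a connection-wide one. here's a rough sketch with the python `h2` library, which exposes this code as `h2.errors.ErrorCodes.HTTP_1_1_REQUIRED`; the `sock`, `conn`, and `stream_id` bits are assumed to come from an already-established h2 session:

```python
import socket

from h2.connection import H2Connection
from h2.errors import ErrorCodes

def demand_http11(sock: socket.socket, conn: H2Connection, stream_id: int) -> None:
    # Stream level: RST_STREAM with HTTP_1_1_REQUIRED rejects this one
    # request but leaves the rest of the connection usable.
    conn.reset_stream(stream_id, error_code=ErrorCodes.HTTP_1_1_REQUIRED)

    # Connection level: GOAWAY with the same code tells the client to
    # retry everything over HTTP/1.1. (In practice you'd pick one.)
    conn.close_connection(error_code=ErrorCodes.HTTP_1_1_REQUIRED)

    sock.sendall(conn.data_to_send())  # flush the queued frames
```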
nitori OP wrote (edited )
Reply to comment by emma in Why are browsers not automatically downgrading to HTTP/1 when they encounter a 505 in HTTP/2 or HTTP/3 by nitori
Hmm, I don't think nginx is correct to send a 505 in that case. I actually thought it was correct as well, but rereading the spec, you're supposed to send a 505 in the representation used by the major version requested by the client. But nginx does it in 1.1's representation instead of 2.0's:

```
GET / HTTP/2.0

HTTP/1.1 505 HTTP Version Not Supported
Server: nginx
[...]
```

A more appropriate response might be 400 or 500, since HTTP/2 obviously isn't plain text, and the client is trying to do an HTTP/2 request in HTTP/1 format, which is wrong.
Having said that..
> http/2+ requests are sent after negotiation, at which point it's established they are accepted. this obsoletes the need for a 505.
An HTTP/2 request can be sent without negotiation; this is how h2c (HTTP/2 over cleartext) reliably works for me (for some reason I couldn't get `Upgrade` from http/1.1 to h2c working). It's called "prior knowledge", and curl supports this.
Even if negotiation becomes strictly required (which Google and Mozilla wanted by requiring TLS) in all of HTTP/2, I don't think 505 is obsolete. If for some reason you want to sunset HTTP/2 and have users use HTTP/5+, while not leaving those still stuck with HTTP/2 in the dark, how would you signal to them that you refuse to support HTTP/2? A 505 would be able to fulfill that role, and indeed this is one of its intended purposes when it was first proposed.
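(For the record, the two cleartext h2 modes mentioned above, in curl terms and against a hypothetical local server:)

```
# "prior knowledge": speak HTTP/2 from the very first byte, no negotiation
curl --http2-prior-knowledge http://localhost:8080/

# Upgrade path: start as HTTP/1.1 and offer to switch to h2c
curl --http2 http://localhost:8080/
```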
> 505 would have been useful for a future where http/2 requests might be preemptively sent to http/1-only servers.
No, 505 wouldn't be useful, because an HTTP/2 request to an HTTP/1-only server would only result in the client just closing the connection itself. You can see this by using `nghttp` or `curl --http2-prior-knowledge` against a server that only supports HTTP/1.

An HTTP/2-only client (which those two commands earlier are) would not bother to process an HTTP/1 response (if it even gets one), whether that'd be a 505 or 200.
> since you very much have to opt in for http/2+, incompatibilities with it can be resolved by just not enabling it, and the use cases where one would want partial http/2 support on any given host are extremely contrived
Heh, perhaps. :D Maybe in an earlier time, when a webmaster really wanted (or needed, because maybe some impatient stockholder was forcing their client to deploy h2 even if it wasn't fully ready) the benefits of HTTP/2 (multiplexing is pretty cool after all) as soon as possible, but parts of their website weren't yet ready for the new version, this could've been pretty relevant...
twovests wrote
i hope it is bad for me and society
nitori wrote
Reply to If WordPress is to survive, Matt Mullenweg must be removed by neku
Yeah, this looks a lot worse and shadier than whatever Mozilla is cooking with their Foundation / Corporation split lol. At least from what I understand of Mozilla's structure, the money seems to always end up in the Foundation (so the Corporation is just another front to fundraise), while here it's the opposite