nitori wrote

Is there no way to disable that password timeout behavior in run0? I'm not even sure why they bothered making that the default behavior lol

Back when I daily-drove Devuan (I'm back to Windows now since I got a new laptop) I just used, IIRC the command parameters correctly, plain ol' su -c -. It's pretty much just a simpler sudo (even simpler than doas lol), just without the ability to remember my password for a while (which is a bit annoying, yeah, but I have fast fingers and I like typing anyway). Since I'm really the only user of that machine I figured I didn't really need something like sudo

3

nitori OP wrote

that's the thing with touhou songs, right? no matter which song, you're familiar with at least one version of it

Yeah true!

Funnily enough, I had trouble discerning Evening Star in that SOUND HOLIC arrange you linked; I had to listen to the original to figure out which part they mostly arranged. That's one of my pet peeves with Touhou arranges: with some of them it's hard to hear how they're supposed to be an arrange of the original Touhou BGM they remixed lol

2

nitori OP wrote (edited )

I wonder if JavaScript is even needed at all if one just wants to keep out badly-written scrapers that DDoS their server. If the scraper doesn't keep state, then simply return a 418, require a cookie to be set with Set-Cookie, and use a meta refresh?
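Something like this sketch, totally untested (framework-agnostic; the handle function and the cookie name are made up, not from any real server):

```python
# Sketch of the cookie-gate idea above -- no JavaScript involved.
# First visit (no cookie): reply 418, set a cookie, and let a <meta refresh>
# reload the page. A scraper that keeps no cookie state never gets past it.

def handle(request_cookies: dict) -> tuple[int, dict, str]:
    """Return (status, headers, body) for a request carrying the given cookies."""
    if request_cookies.get("gate") == "ok":
        return 200, {}, "<html><body>actual content</body></html>"
    headers = {"Set-Cookie": "gate=ok; Path=/; HttpOnly"}
    # Browsers render 4xx bodies, so the meta refresh still fires.
    body = '<html><head><meta http-equiv="refresh" content="0"></head></html>'
    return 418, headers, body
```

(A real setup would issue a random per-client token and verify it instead of a fixed "ok" value, or a stateless thing wouldn't get past it either way.)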

2

nitori OP wrote (edited )

That YouTube problem where you couldn't automatically get the title and thumbnail is actually a different issue. I know a workaround for it, which is to submit a link pointing directly to the thumbnail itself and then edit in the actual video link and title afterwards, but it's no longer working lol. I even tried uploading the thumbnail to an image host first and linking that one, but this Postmill still can't get the thumbnail... Just to be sure I even posted a SoundCloud link right now, which should definitely be working. The title grabber worked but not the thumbnail...

Now that you mention HTTP/1.1 being forced, I actually tried testing whether I could get the title grabber to work on Danbooru, and surprisingly it did (it may not look like it since I edited the title, but my browser's POST to /ft.json did return a 200). Danbooru always forced a Cloudflare challenge on me when I forced HTTP/1.1 in my browser (and annoyingly the Cloudflare cookie there expires like every 5 minutes, and their CAPTCHA tends to hang my browser)... So I investigated why Postmill could access Danbooru without any CAPTCHA, and as it turns out Danbooru simply doesn't like an HTTP/1.1 client using Mozilla/5.0 in its user-agent string lol. Which I guess makes sense if so many scrapers are HTTP/1 and try to pretend to be browsers

so idk what a good general solution here is.

If it's not too much work, perhaps a whitelist that only does HTTP/2 on specific domains or URLs?
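Rough sketch of what I mean (the function name and the domain set are made-up examples, not actual Postmill code):

```python
# Hypothetical per-domain whitelist: force HTTP/2 only for hosts known to
# challenge HTTP/1.1 clients, and keep plain HTTP/1.1 everywhere else.
from urllib.parse import urlparse

HTTP2_ONLY_HOSTS = {"danbooru.donmai.us"}  # example entry

def http_version_for(url: str) -> str:
    host = urlparse(url).hostname or ""
    return "2" if host in HTTP2_ONLY_HOSTS else "1.1"
```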

3