I'm trying to get a thumbnail for the latest pannenkoek video I'm posting to f/killallgames lol, and even submitting the direct link to its thumbnail doesn't work for some reason...
Comments
nitori OP wrote (edited )
That YouTube problem where you couldn't automatically get the title and thumbnail is actually a different issue. I know a workaround for it, which is to submit a direct link to the thumbnail itself and then edit in the actual video link and title afterwards, but it's no longer working lol. I even tried uploading the thumbnail to an image host first and linking that one, but this Postmill still can't get the thumbnail... Just to be sure I even posted a Soundcloud link right now, which should definitely work. The title grabber worked but not the thumbnail...
Now that you mention http/1.1 being forced, I actually tried testing if I could get the title grabber to work on Danbooru, and surprisingly it did (it may not look like it since I edited the title, but my browser's POST to /ft.json did return a 200). Danbooru always forces a Cloudflare challenge on me when I force HTTP/1 in my browser (and annoyingly the Cloudflare cookie there expires like every 5 minutes, and their CAPTCHAs tend to hang my browser)... So I investigated why Postmill could access Danbooru without any CAPTCHA, and as it turns out Danbooru simply doesn't like an HTTP/1.1 client using Mozilla/5.0 in its user-agent string lol. Which I could see I guess, if so many scrapers are HTTP/1 and try to pretend to be browsers.
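If anyone wants to check this themselves, curl can reproduce it (assuming Danbooru's rule is what I think it is; /posts.json is just a convenient endpoint, and the second user-agent string is made up):

curl -sS --http1.1 -A "Mozilla/5.0" -o /dev/null -w "%{http_code}\n" https://danbooru.donmai.us/posts.json
curl -sS --http1.1 -A "NotABrowser/1.0" -o /dev/null -w "%{http_code}\n" https://danbooru.donmai.us/posts.json

The first one should get stopped by the Cloudflare challenge and the second should come back with a 200.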
so idk what a good general solution here is.
If it's not too much work, perhaps a whitelist which will only do HTTP/2 on specific domains or URLs?
emma wrote
/u/twovests /u/flabberghaster one of you needs to check if postmill-worker is running and hasn't entered a crash loop
If it's not too much work, perhaps a whitelist which will only do HTTP/2 on specific domains or URLs?
i reckon the solution here is to have multiple fetching strategies, with different user-agents and different protocols. special casing for particular sites is doable, but not an approach i'm a fan of. crashes related to http/2 could still occur.
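roughly what i have in mind, as a sketch (the strategy list and user-agent strings here are invented, and i'm assuming symfony's HttpClient since that's where the [http_client] log lines come from):

use Symfony\Component\HttpClient\HttpClient;

$url = 'https://example.com/some-page'; // whatever we're fetching a title/thumbnail for

// try each protocol + user-agent combination in order until one yields a usable response
$strategies = [
    ['http_version' => '1.1', 'headers' => ['User-Agent' => 'Postmill/1.0']],
    ['http_version' => '1.1', 'headers' => ['User-Agent' => 'Mozilla/5.0 (compatible)']],
    ['http_version' => '2.0', 'headers' => ['User-Agent' => 'Mozilla/5.0 (compatible)']],
];

$response = null;
foreach ($strategies as $options) {
    try {
        $candidate = HttpClient::create($options)->request('GET', $url);
        if (200 === $candidate->getStatusCode()) {
            $response = $candidate;
            break;
        }
    } catch (\Throwable $e) {
        // a transport error just moves us to the next strategy; a native
        // nghttp2 crash would still take the whole worker down, though
    }
}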
flabberghaster wrote
I won't be able to try until tonight but I will take a look when I can.
twovests wrote
Looking rn
twovests wrote (edited )
Uh oh, docker ps does indeed show our postmill-docker-example-php-worker-1 is not alive.
It seems it died trying to download this 5220x4016 png from connections.swellgarfo.com:
01:34:41 INFO [http_client] Response: "200 https://connections.swellgarfo.com/facebookimage.png" 0.050993 seconds ["http_code" => 200,"url" => "https://connections.swellgarfo.com/facebookimage.png","total_time" => 0.050993]
PHP Fatal error: Allowed memory size of 201326592 bytes exhausted (tried to allocate 83854080 bytes) in /app/vendor/symfony/validator/Constraints/ImageValidator.php on line 229
01:34:41 CRITICAL [php] Fatal Error: Allowed memory size of 201326592 bytes exhausted (tried to allocate 83854080 bytes) ["exception" => Symfony\Component\ErrorHandler\Error\OutOfMemoryError { …}]
In ImageValidator.php line 229:
[Symfony\Component\ErrorHandler\Error\OutOfMemoryError]
Error: Allowed memory size of 201326592 bytes exhausted (tried to allocate
83854080 bytes)
Exception trace:
at /app/vendor/symfony/validator/Constraints/ImageValidator.php:229
That image clocks in at about 60MiB, and it tried to allocate 83854080 bytes (about 80MiB) while under a limit of 201326592 bytes (exactly 192MiB), which makes me think something (the PHP interpreter/VM? a malloc call?) has a constant 192MiB cap.
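Doing the math, the numbers line up suspiciously well (the 4-bytes-per-pixel decoded bitmap is my guess at what's being allocated):

echo 5220 * 4016 * 4, PHP_EOL;   // 83854080 -- the exact failed allocation
echo 192 * 1024 * 1024, PHP_EOL; // 201326592 -- the exact "allowed memory size"

So it looks like the validator decodes the full bitmap into memory, and the cap is a PHP memory_limit kind of number.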
I see memory_limit=192m under docker/php/zz-postmill.ini in the original Postmill repo.
Whatever the "right" solution is, I think I can:
- Reboot the server for now, to get the PHP worker up and running
- Increase that value, since our server has about 2GiB of RAM to work with.
Re: Increasing that value, I need to look into the best way to do that. I love Docker but the indirection does leave me mulling over "what is the best way to make this simple change?" (Is that .ini a path which Docker Compose can override? Is this something to finally make a Postmill downstream for?)
TLDR: The short-term solution is a reboot, the medium-term solution is increasing the memory limit; rebooting now!
EDIT: After rebooting, it seems the php worker still does not launch. I forgot how the server launches Docker, so I'm re-exploring that now
flabberghaster wrote
Uh oh, docker ps does indeed show our postmill-docker-example-php-worker-1 is not alive
I call it dorker btw.
twovests wrote
for what it's worth, i haven't fixed it yet, but i am now bogged down in "200 pages of severance documents"
flabberghaster wrote
Faaaaawk OK I'll take a look-see
emma wrote
make a zzzzzzzz-jstpst.ini with just the memory limit setting and mount it in /usr/local/etc/php/conf.d, would be my suggestion
twovests wrote
this worked perfectly!!! thank you :D
for anyone who wants to copy this (or, future us), we added roughly this to the docker-compose yaml:
volumes:
  - ${PWD}/zzz-postmill.ini:/usr/local/etc/php/conf.d/zzz-postmill.ini
where zzz-postmill.ini had a higher memory limit. we had to put it in both of the php images
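fuller sketch for future us (the service names are my guess from the postmill-docker-example container names, and 512M is just an example value; anything comfortably above the ~80MiB allocations should do):

services:
  php:
    volumes:
      - ${PWD}/zzz-postmill.ini:/usr/local/etc/php/conf.d/zzz-postmill.ini
  php-worker:
    volumes:
      - ${PWD}/zzz-postmill.ini:/usr/local/etc/php/conf.d/zzz-postmill.ini

and zzz-postmill.ini itself is just the one line:

memory_limit = 512M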
twovests wrote
Thank you! Just to confirm, this would go inside the PHP worker container, right? Or somewhere else
(I'd test directly but I'm on mobile right now, and the container being dead makes it tricky to sh into)
emma wrote
yep
twovests wrote
rad, ty! will try that
I just tried that in the other php container, in the hopes that they somehow share a config. Tried adding a new file, and then tried again by editing the zz-postmill.ini directly. Rebooted each time, because I am the world's worst and most evil sysadmin.
The reason the site was down for a full few minutes was that I was hoping it might be possible to sh into the container if I did so before anyone made a call to the site, but it seems trying to thumbnail that image is the first thing it does.
I know I need to do the proper thing (set it up so that file is "mounted" from my host filesystem into the container) but it's just a Thing I Gotta Figure Out How To Do
twovests wrote
Sorry to Flabberghaster for forgetting all the details lol, but I re-discovered our postmill.service unit files. Nothing surprising.
It seems the php worker does indeed launch and tries - but fails - to download that image
Not in a place rn to do this, but it seems like the thing to do would be to (1) make a Postmill downstream, (2) update the .ini to have a higher memory limit, and (3) build new images from that
twovests wrote (edited )
Looking rn
(very impressive that postmill, even in a diminished state, is still an effective tool to communicate about how to fix it. microsoft teams dreams of being this useful)
(*edit: no shade was implied in my wording, i really did mean to express appreciation again. weird day for me)
flabberghaster wrote
We put three of jstpst.net's best minds on the case (emma, twovests, and me) and after two days we got it back working.
Please let us know if we need to bump the memory limit again, but it should be all set for now.

emma wrote
i think youtube doesn't appreciate being requested from vps hosting providers and the like. when i curl <video url> from a digitalocean vps, it redirects me to https://www.google.com/sorry. but also, when i request from my local postmill instance, i get this:
Unable to read stream contents: OpenSSL SSL_read: OpenSSL/3.5.1: error:0A000126:SSL routines::unexpected eof while reading, errno 0 for "https://www.youtube.com/watch?v=-7VhlsqeeqI".
whereas using curl locally is fine.
however, if i disable forced http/1.1 (which was made the default because nghttp2 likes to crash), then it works from my local postmill.
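for reference, this is the sort of thing i mean by forced http/1.1; in symfony http_client config it looks roughly like this (whether postmill sets it exactly this way is me guessing from the [http_client] logs):

framework:
  http_client:
    default_options:
      http_version: '1.1'

removing that option lets curl negotiate http/2 again, which is the state where youtube works but nghttp2 can crash.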
so idk what a good general solution here is.