@sodiboo@gaysex.cloud
let's see, is it more stable now
@sodiboo@gaysex.cloud
heck yeah. it totally is. sick. thank you to systemd-socket-proxyd for having a ridiculously low default max connection limit.
the default is 256. tons of "Failed to accept() socket: Too many open files".
increased the limit to like, 4096. no more of that warning, i think. and then i looked at the fds and it had like 400 open.
so. yeah. >256 is necessary. why is it limited at all? can't you just respect the rlimit for max fds? the kernel will enforce it for you. why do you have your own limit? you do exactly one thing, and the kernel can already put boundaries on that. there is no reason to have this limit in the app too.
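for the record, raising it is just an ExecStart override. something like this drop-in should do it (unit name and upstream address here are placeholders, and the binary path varies by distro):

  # /etc/systemd/system/proxy-to-fedi.service.d/override.conf
  # (unit name and 127.0.0.1:8080 are placeholders, adjust to your setup)
  [Service]
  # clear the original ExecStart, then redeclare it with a higher cap
  ExecStart=
  ExecStart=/usr/lib/systemd/systemd-socket-proxyd --connections-max=4096 127.0.0.1:8080

then systemctl daemon-reload and restart the service.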
Wait. No. Fuck.
It doesn't work.
It's not that limit????
It is hitting the kernel nofile limit. With an "increased" connection limit, it's capping at 1024 FDs. lmao. this is the default soft limit.
the systemd docs on LimitNOFILE=: "Do not use. [...] applications should increase their soft limit to the hard limit on their own".
hm. systemd-socket-proxyd. why don't you increase this limit on your own?
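since it won't raise its own soft limit, the workaround is to raise it from the unit instead. roughly this (same placeholder unit name; 16384 is an arbitrary pick, and you can check what a running process actually has with cat /proc/<pid>/limits):

  # /etc/systemd/system/proxy-to-fedi.service.d/override.conf
  # (placeholder unit name; 16384 is an arbitrary value, not a recommendation)
  [Service]
  # raise the fd limit past the 1024 default soft limit
  LimitNOFILE=16384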
worst part of testing changes that only matter under load is that the load has passed, and now i'm just staring at my terminal waiting for something to put my instance under load again. i guess i should post a new meme or something
stress test me
okay, so that didn't work at all. but FINALLY, with an increased nofile limit, i hit the connection limit, which i'd intentionally decreased back to 256 to test my hypothesis.
and indeed, now i get the actual error message for the connection limit: "Hit connection limit, refusing connection."
yay. time to increase it again
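putting it together, the whole drop-in ends up roughly like this (unit name, upstream address, and the nofile value are still placeholders; the 4096 connection cap is the only number i actually set):

  # /etc/systemd/system/proxy-to-fedi.service.d/override.conf
  # (placeholder names/values except --connections-max=4096)
  [Service]
  LimitNOFILE=16384
  ExecStart=
  ExecStart=/usr/lib/systemd/systemd-socket-proxyd --connections-max=4096 127.0.0.1:8080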
new way to stress test my instance (and my poor CPU at home as well)
@sodiboo@gaysex.cloud zen browser user spotted