@pearl@fedi.rrr.sh
@xssfox@cloudisland.nz a lot of them will scrape a page, then scrape it again a few hours later in case it changed, and they do that for every page on a website, including ones that are expensive for the server to generate
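(Not part of the thread, but for illustration: a minimal sketch of how a crawler could re-check a page cheaply instead of re-downloading it at full cost each time, using conditional HTTP requests (If-None-Match / If-Modified-Since) plus a delay between hits. The URL, delay, and cache layout here are made-up assumptions, not anything described by the posters.)

```python
# Sketch of a "polite" re-crawl: conditional requests + a delay between hits.
# The user agent string, delay, and in-memory cache are illustrative assumptions.
import time
import requests

cache = {}  # url -> {"etag": ..., "last_modified": ..., "body": ...}

def recheck(url, delay_seconds=10):
    headers = {"User-Agent": "example-crawler/0.1 (contact@example.org)"}
    cached = cache.get(url)
    if cached:
        # Ask the server to skip sending the body if nothing changed since last fetch.
        if cached.get("etag"):
            headers["If-None-Match"] = cached["etag"]
        if cached.get("last_modified"):
            headers["If-Modified-Since"] = cached["last_modified"]

    resp = requests.get(url, headers=headers, timeout=30)
    time.sleep(delay_seconds)  # space out requests instead of hammering the site

    if resp.status_code == 304 and cached:
        return cached["body"]  # unchanged: server took the cheap path

    cache[url] = {
        "etag": resp.headers.get("ETag"),
        "last_modified": resp.headers.get("Last-Modified"),
        "body": resp.text,
    }
    return resp.text
```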
@0x4d6165@wanderingwires.net
@pearl@rrr.sh @xssfox@cloudisland.nz i wonder, how does the internet archive manage to not harm websites when they basically do the same thing?