yeah exactly, at that point if it works with no limiter then we'll just see how long that'll last hehehe.
Oki, so if I understand correctly: the previous DB code was a cesspit so you exterminated it and wrote a new one that's much faster. We still have the download-history function so we don't re-download the same files from the same album, BUT the old DB had to be scrapped, so we're starting from scratch on our download history. Is that all correct?
Awesome! And yeah, true, you mentioned that before. Sounds tricky as hell. I guess you'll have to find out how many requests you can make within X amount of time before it hits the threshold that triggers the DDoS protection? And then play around to see how far you can push that so it's stable while still grabbing as much as you can? I'm no expert though, of course. But yeah, whatever it takes, it doesn't sound simple.
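Just to sketch what that "stay under the threshold" idea could look like (purely illustrative, not the project's actual code, and the `max_requests`/`window` numbers would be whatever tuning reveals): a sliding-window limiter that tells you how long to wait before the next request is safe.

```python
import time
from collections import deque


class RateLimiter:
    """Allow at most `max_requests` per `window` seconds (sliding window).

    `clock` is injectable so the logic can be tested without real sleeping.
    """

    def __init__(self, max_requests, window, clock=time.monotonic):
        self.max_requests = max_requests
        self.window = window
        self.clock = clock
        self.timestamps = deque()  # times of the most recent requests

    def wait_time(self):
        """Seconds to wait before the next request is allowed (0.0 if now)."""
        now = self.clock()
        # Drop timestamps that have fallen out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            return 0.0
        # Next slot opens when the oldest request ages out of the window
        return self.window - (now - self.timestamps[0])

    def record(self):
        """Call this right after actually sending a request."""
        self.timestamps.append(self.clock())
```

You'd then probe: start with a generous `max_requests` per `window`, lower it each time the server starts throwing DDoS-protection pages, and settle on the fastest pair that stays stable.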
The name `path` is misleading here. It's not a file path, it's a URL path.
https://jpg4.su/img/img-20220511-042012.E4t0Kt
img/img-20220511-042012.E4t0Kt
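To make that distinction concrete (illustrative only, not necessarily how the tool does it): Python's standard `urllib.parse` pulls exactly that URL path out of the full link.

```python
from urllib.parse import urlparse

url = "https://jpg4.su/img/img-20220511-042012.E4t0Kt"

# .path gives "/img/img-20220511-042012.E4t0Kt"; strip the leading slash
url_path = urlparse(url).path.lstrip("/")
print(url_path)  # img/img-20220511-042012.E4t0Kt
```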