Not what I'm meaning by path.
Take this album here:
https://bunkrr.su/a/kbAiJxtN
The album's url path is
/a/kbAiJxtN
There is a lovely photo in it:
https://i8.bunkr.ru/cute-cat-photos-1593441022-CAGLy7IT.jpg
the photo's url path is simply:
/cute-cat-photos-1593441022-CAGLy7IT.jpg
The url path is everything that comes after the domain, which in this case is bunkrr.su. For almost all downloads, these paths are static and never change. When the program sees a path and downloads it, it marks that path as complete, along with the filename it was saved as. If it runs across that path again, it can look it up and see "Oh, I've downloaded that file already, and I called it X."
The program itself won't stop scraping; it doesn't know whether it has seen something before until it gets to that lookup.
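A minimal sketch of that lookup, assuming an in-memory dict where the real program persists a download history file on disk (the function names here are made up for illustration):

```python
from __future__ import annotations
from urllib.parse import urlparse

# Stand-in for the on-disk download history: url path -> saved filename.
history: dict[str, str] = {}

def mark_complete(url: str, filename: str) -> None:
    """Record a finished download under its url path."""
    history[urlparse(url).path] = filename

def already_downloaded(url: str) -> str | None:
    """Return the saved filename if this path was seen before, else None."""
    return history.get(urlparse(url).path)

mark_complete("https://i8.bunkr.ru/cute-cat-photos-1593441022-CAGLy7IT.jpg",
              "cute-cat-photos-1593441022-CAGLy7IT.jpg")
# A second hit on the same path finds the earlier download:
print(already_downloaded("https://i8.bunkr.ru/cute-cat-photos-1593441022-CAGLy7IT.jpg"))
```

Note the key is the path alone, not the full url, which is why a static path is enough to recognize a file on later runs.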
What you are asking for is for it to skip the scraping portion entirely, which it can't do. But you can. Instead of giving the program
https://simpcity.su/threads/example-thread
you can give it:
https://simpcity.su/threads/example-thread/page-5
and it'll start scraping from page 5 or
https://simpcity.su/threads/example-thread/page-5#post-1314461
you can give it a specific post, and it'll only scrape from that post onwards.
That's why
--output-last-forum-post
exists and is an argument you can use (or set in the config file). After it finishes scraping, it'll create a new text file containing the last post it scraped in a forum thread, so you can just replace the old urls.txt file and have it continue from where the previous run ended.
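The resume workflow that flag enables looks roughly like this (the exact filename and format the real flag writes may differ; the helper names here are placeholders):

```python
import os
import tempfile

def save_last_post(path: str, thread_url: str, post_id: int) -> None:
    """Persist the last scraped post as a full, resumable thread url."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"{thread_url}#post-{post_id}\n")

def load_urls(path: str) -> list:
    """Read the file back as the next run's urls.txt, one url per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

out = os.path.join(tempfile.mkdtemp(), "last_forum_post.txt")
save_last_post(out, "https://simpcity.su/threads/example-thread", 1314461)
print(load_urls(out))
# prints ['https://simpcity.su/threads/example-thread#post-1314461']
```

Feeding that file back in as urls.txt gives the scraper a #post-ID starting point, exactly as described above.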
Again, I'm stressing: as long as you don't tell it to ignore history and you don't delete the download history file, it won't repeatedly redownload things with the same url path (see the start of this reply). Some sites don't provide a static url path, though (anonfiles, for example, doesn't), and those files can sometimes redownload. You can move the files and it won't redownload them, unless you tell it to ignore history or delete the history file.
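To make that last point concrete: when a site serves the same file under a different path each visit, the path lookup never matches. The non-static urls below are made-up illustrations, not real anonfiles links:

```python
from urllib.parse import urlparse

# Static path: the same key every time, so the history lookup hits.
static = "https://i8.bunkr.ru/cute-cat-photos-1593441022-CAGLy7IT.jpg"
print(urlparse(static).path)

# Non-static path (made-up example): a random id changes per visit, so the
# two paths differ and the file looks "new" to the history both times.
first = "https://cdn.example/Ab12Cd/file.zip"
second = "https://cdn.example/Xy98Zw/file.zip"
print(urlparse(first).path == urlparse(second).path)
# prints False
```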