I know this is a long shot, but I'm looking for help writing a script that tests candidate web addresses, checks whether each one is a valid page, and writes the valid links to a file so I can then grab their content. Basically, I know there is a website at URL + a random six-character string that has content. Can someone help me figure out how to iteratively test every possible value of that six-character string and write the URLs that don't return a 404 to a file? (I know there is no password protection or anything beyond the random six-character string.)
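A minimal sketch of one way to do this in Python, using only the standard library. The base URL and the character set are assumptions (here lowercase letters plus digits; adjust to whatever alphabet the strings actually use). One caveat worth stating up front: a six-character alphanumeric suffix over 36 characters means 36^6 ≈ 2.2 billion candidates, so an exhaustive scan is only practical if you can narrow the alphabet or length, and you should only run this against a site you're authorized to probe.

```python
import itertools
import string
from urllib import request, error

BASE_URL = "https://example.com/"  # hypothetical base; replace with the real one
CHARSET = string.ascii_lowercase + string.digits  # assumed alphabet of the suffix

def candidate_suffixes(length=6):
    """Yield every possible suffix of the given length, in order."""
    for combo in itertools.product(CHARSET, repeat=length):
        yield "".join(combo)

def url_exists(url, timeout=5):
    """Return True if a HEAD request to url does not come back 404."""
    req = request.Request(url, method="HEAD")
    try:
        with request.urlopen(req, timeout=timeout):
            return True  # any 2xx/3xx response counts as a hit
    except error.HTTPError as e:
        return e.code != 404  # 404 = missing; other errors may still be interesting
    except error.URLError:
        return False  # DNS/connection failure

def scan(outfile="valid_urls.txt", length=6):
    """Probe each candidate URL and append the non-404 ones to a file."""
    with open(outfile, "w") as out:
        for suffix in candidate_suffixes(length):
            url = BASE_URL + suffix
            if url_exists(url):
                out.write(url + "\n")
                out.flush()  # keep partial results if the scan is interrupted

if __name__ == "__main__":
    scan()
```

HEAD requests are used instead of GET so you don't download page bodies during the scan; once `valid_urls.txt` is populated you can fetch the content of just those URLs in a second pass. Adding a small `time.sleep()` between requests is also a good idea so you don't hammer the server.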