johnny.barracuda

Requirements:
Python 3.10+


An asynchronous file archival program

The intention of Ripandtear is to make it easy to save and update content uploaded by content creators online. What makes Ripandtear unique is that it condenses everything it knows about a creator into a single .rat file (don't worry, a .rat is just a .json file with a different extension name). The usernames they use across different websites, the URLs that have already been downloaded, and even file names with their MD5 hashes for removing duplicates are all stored in the .rat, and all of it can be added to or updated from the command line. You can record all the information about a content creator, archive their content, remove duplicates and sort the files, all with one command!


By using the .rat file you eliminate the need to re-download the same content and create duplicate files. All previously downloaded URLs are tracked in the .rat file. If a file has already been downloaded, it is skipped, saving you time, data and bandwidth and making fewer requests to servers. This also makes it convenient to share a folder that has a .rat file with someone else: they can pick up where you left off without having to download the same content all over again!

Anyway, that is the elevator pitch from the README. I tried to write fairly extensive instructions on how to use the program, with lots of examples, so check out the README if you want to learn more.

Motivation

Years ago the internet was a much simpler place. You had Reddit girls that thrived off attention, cam girls trying to make as much money as possible, tumblerinas with daddy issues trying to get revenge on their fathers that were never around by posting nudes online. It was a perfect era to fulfill my addiction. Since I am posting on this site I am sure you are all thinking I am talking about a porn addiction, but that is not the case (ok, maybe a little). My primary addiction is hoarding data and organizing said data. Content creators self-segregating by site made my life very easy and I was happy. However, in recent years this has all changed. With the rise of Onlyfans and the vast amount of money girls realize they can now make, their inner capitalist has come out and they have embraced diversification. No longer does a Reddit user stick to Reddit. Her Reddit exists to drive users to her Onlyfans, she uses Redgifs to post videos and advertise her PPV content, she updates her Twitter to stay in contact with fans and maybe even has a Pornhub. Even worse, she might use different usernames for each account, or be forced to if one account gets banned. If you only have 100-200 users saved on your computer you might be able to remember each of her usernames and still categorize them in main website directories (Reddit, Tumblr, Onlyfans, etc).

However, when your collection surpasses 1,500 Reddit users alone this becomes harder to do. This was one of the driving factors behind Ripandtear. Instead of organizing content by site, I have switched to collecting content per user. To track all the information about a user I created this program: Ripandtear. It uses the .rat file (again, just a .json file with a different extension name) to store all the information I (currently) want to track. Within the .rat I can record the name(s) a creator uses on specific websites, simpcity links to quickly see if people have posted new content, and the URLs of media I have already downloaded to prevent downloading it again, and most importantly I can download the content itself. Since I primarily use my terminal when interacting with my computer, I wanted to be able to create and update users quickly via the command line. I feel that I have done a good job accomplishing these goals with this first version. With a few flags I can log a user's information and download all of their content with just 2 commands in my terminal: one to make the folder and a second to tell Ripandtear what I want it to do.

One of the biggest features of Ripandtear is the use of the .rat file to track URLs and downloaded files. Normally you would download a folder of content, copy its contents into the specific content creator's folder, run a separate program to remove duplicates, run another program to sort the files, then accidentally download the same content again because another user reposted the same link without you realizing it, and be forced to go through the whole process all over again. Ripandtear deals with all of this for you. It tracks the URLs you have downloaded, so if you try to download the same content twice it skips it. It can sort files into their respective categories (pics, vids, audio, text), it keeps track of file hashes to remove duplicate content, and it currently supports the majority of websites that people post on here (and I intend to expand the list). Since all of that information is stored in the .rat file, it can easily act like a checkpoint. If people include their .rat file in uploads, whoever downloads it can simply pick up where the uploader left off; they no longer have to download the same content over again. It also means that going the extra mile to clean up duplicates that slip by, content people aren't interested in, or just bad content in general won't only benefit you, but everyone you share your .rat file with.

Anyways, I feel I am rambling a bit. I mainly made this project for myself, but I have found it to be a huge boon for me and my collection. It has helped me catalog and speed up working on my collection so much that I didn't want to keep it to myself. I will admit that it might feel rigid to some, since it reflects how I work with and save my content, and it could seem a little intimidating with all the different commands and it still being a work in progress, but I am continuing to improve it. However, if you are a power user who shares a similar philosophy on organizing content and really wants to take managing that content to the next level, I hope you find some use for this project.

Here is the current template of the .rat:

JSON:
Please, Log in or Register to view codes content!
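
The actual template is hidden behind the forum login above, but based on the fields discussed in this thread (usernames per site, urls_to_download, coomer/simpcity links, downloaded URLs, errors and file hashes), a .rat presumably looks something along these lines. The field names below are illustrative guesses, not the real template:

JSON:
{
    "_note": "illustrative guess at the structure, not the actual template",
    "names": {
        "reddit": [],
        "redgifs": [],
        "onlyfans": [],
        "instagram": []
    },
    "links": {
        "coomer": [],
        "simpcity": []
    },
    "urls_to_download": [],
    "urls_downloaded": [],
    "errors": [],
    "file_hashes": {}
}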
 

johnny.barracuda

Just updated Ripandtear to download, add and sync *Blacklisted site* users/videos. If you have already installed it, feel free to upgrade:

(Linux/Mac)
pip install --upgrade ripandtear

(Windows)
python -m pip install --upgrade ripandtear
 

johnny.barracuda

Currently -c will only store the coomer.party URL in the .rat file. You can print out the link with -pc. I haven't added an extractor to download coomer.party links yet, but it is one of the next sites I was planning on writing an extractor for, as it should be fairly simple from the poking around I have done. I have spent the past few days rewriting a lot of the program to make it look better (inspired by cyberdrop-dl), making it more efficient and fixing bugs. I just finished all of that and should be releasing the new version tomorrow. After that I will get back to writing extractors for the common hosting sites that are used on simpcity.

If you want to know all of the sites that you can currently download from, check out the 'website quirks' section at the bottom of the README. I knew that I wanted to add the ability to download coomer links in the future, but it was slightly lower on my priority list compared to other features, so I added a way to store that information for later. My goal is that for any .rat file that has a coomer link saved, you can go into the folder and run ripandtear -sc to sync the posts from coomer. It will find all coomer links that are stored in the .rat, then download all the content that you haven't gotten yet by checking against the .rat file. If you don't have a .rat file (and/or you don't want one) then you can just run ripandtear -d <coomer_link> and it will download everything it finds, without saving the URLs to a .rat. I am hoping to have the coomer extractor done within 3-4 days.

My ideal situation that I am working towards is to either write a companion program, or integrate this feature directly into ripandtear, where you can tell it the root folders where you store all subfolders that have .rat files. When you tell it to, it will look for all folders that have .rat files based off the those root folders, move to the folder with the .rat and then sync all the content, only downloading what you haven't downloaded yet based of the information stored in the .rat file. The goal (mainly for myself, but also others) is to be able to schedule a task every night for ripandtear (or the companion program) to run and update all the content that content creators had posted that day. That way you will only have to download a little bit each night from each creator and are always up to date with their uploads (and possibly getting all content before they delete stuff). Because this is my end goal I added the ability to save coomer and simpcity links today, so you don't have to go back tomorrow to add them. They don't have extractors to download the content yet, but I am planning on writing them in the future.
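
Something along those lines could already be approximated today with a simple shell loop. This is only a rough sketch of the idea, not a ripandtear feature: it assumes -sc behaves as described above, and the root path is a placeholder for wherever you keep your folders.

Code:
#!/bin/sh
# Rough sketch of the nightly sync idea (not built into ripandtear yet).
# ROOT is a placeholder for wherever you keep the folders that contain .rat files.
ROOT="$HOME/collections"

# Find every .rat file, move into its folder and sync the stored coomer links.
find "$ROOT" -name '*.rat' | while read -r rat; do
    (cd "$(dirname "$rat")" && ripandtear -sc)
done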
 

johnny.barracuda

Just pushed a new update.

- Changed the output from ugly print statements to a beautiful colored display (all credit goes to Jules and cyberdrop-dl for the inspiration)

- Added a -mk flag for a huge quality of life improvement. You can now use -mk <name> to make a directory called <name>; ripandtear will automatically move into the new directory and run all further flags, and when it is done it returns to the original directory -mk was run from. Now you don't have to enter a separate command to create and move into the new directory (see the example after this list)

- Fixed lots of bugs to provide better consistency when downloading

- Limited Bunkr video downloads to two at a time to prevent 429 status code errors. Downloads will be slower, but much more consistent
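
For example, -mk can be chained with the other flags into a single command. The names below are just placeholders:

Code:
# Hypothetical example: make the folder, record a reddit name, download her
# reddit content, hash/remove duplicates, sort the files, then return to the
# directory the command was run from. 'model-name' and 'her_reddit_name' are placeholders.
ripandtear -mk 'model-name' -r 'her_reddit_name' -sr -H -S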

froppalo I am going to start on the coomer.party extractor now. It will be the next thing uploaded. I should have it done in the next few days.
 

johnny.barracuda

froppalo Just finished writing the coomer extractor and uploading it. If you update ripandtear you will have access to it.

-d - will download a coomer link (profiles, content pages and direct links to pictures/videos)
-sc - syncs all coomer links that you have stored in a .rat (they need to be full urls)

Coomer seems to be a little finicky with how many downloads you can do at the same time. To prevent a bunch of 429 error codes (Too Many Requests) I limited the downloads for coomer to 4 at a time. From the testing I have been doing it seems like some downloads will still get blocked with a 429 code. Ripandtear stores those failed downloads under errors within the .rat. If you download a profile and see that a bunch of files failed, wait a few seconds after the download completes and run ripandtear -se to reattempt just the failed downloads. For me they download immediately with no problem on the second attempt. This is just something I might need to tweak going forward to get it to 100%. I have been getting a ~85% success rate on the first pass. After syncing the errors it hits 100%.
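
In command form, a first pass plus a retry of the failures might look like this (the profile URL is just a placeholder):

Code:
# First pass over a coomer profile; some files may fail with 429 errors.
ripandtear -d 'https://coomer.party/onlyfans/user/<model>'

# Wait a few seconds, then retry only the downloads recorded under errors in the .rat.
ripandtear -se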

I feel I have coverage for about 85-95% of the use cases for coomer, and I didn't run into many edge cases that would cause problems. If you run into any, let me know and I will try to make tweaks to get better coverage.
 

johnny.barracuda

Please, Log in or Register to view quotes
Could you send me the link? You could also attempt the download with ripandtear -d <link> -l 2 to have it print information about what it is doing to the screen. If you want super detailed information you can use -l 1. It might give you some hints about what is going on. If I had to guess, the URL of the content follows a pattern that I haven't accounted for yet.

Bunkr is the one site that has a bunch of edge cases that prevent ripandtear from working. I already have 4 different links opened in my browser that I need to go through and account for to make sure they are downloaded.
 

froppalo

is it ok if I ask questions about ripandtear here? Do you prefer DMs? I would also understand if you'd rather not receive questions at all 😅

I'm wondering how you are actually using the program. Do you keep a single .rat file, or do you create multiple ones for each site or each model?

And looking at the documentation I can see how to set a username for reddit and redgifs. Is it possible to, somehow, create a username and then add all kinds of links and sources under that name?

Let's say I'm following model1: she has a reddit account, but then I find lots of content on bunkr and coomer. It could be interesting if I could store links like this:

Code:
Please, Log in or Register to view codes content!

I hope what I mean makes sense.

And once again, thanks for this!
 

johnny.barracuda

Please, Log in or Register to view quotes
I would say if it is a general question, like you asked, feel free to ask here. Other people might have the same question and it could help them. If it is something where you might need to send me links to content you are having problems downloading and you don't want everyone on the site knowing what you are jerking off to, then feel free to send me a PM. I have no problem helping out anybody who has questions. I am not a professional programmer, so all I ask is that you be a bit patient with me if I am helping troubleshoot, as it's my first time as well.

The idea behind the .rat file is that it keeps track of all content that passes through the folder it is in. I personally use one .rat file per folder per girl/model/content creator/etc. Technically you could use it to keep track of anything, but the idea is individual models. For example, if you went to the exploitedcollegegirls thread and used ripandtear to download all the bunkr links into an 'exploitedcollegegirls' folder that had a .rat file, the .rat file would keep track of all the links downloaded from bunkr. That way, if you attempted to download the same link again, it would just skip it, saving you time and bandwidth.
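
As a hypothetical sketch of that example (the bunkr link is a placeholder):

Code:
# Make the 'exploitedcollegegirls' folder, move into it, and download a bunkr
# link; the downloaded URLs get recorded in that folder's .rat file.
ripandtear -mk 'exploitedcollegegirls' -d '<bunkr_album_link>'

Running the same command again later would skip anything already recorded in the .rat.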

Please, Log in or Register to view quotes

Yep. That is 100% how it works. If you wanted to add bunkr links, coomer post links or any link with downloadable content, you would just add it under the 'urls_to_download' section with the -u flag. Any link that has content you want to download should be added there: gofile, bunkr, *Blacklisted site*, cyberdrop.me, jpg.fish, etc. Running ripandtear -su tells ripandtear to go through all the urls_to_download within the .rat file and attempt to download every link that it finds an extractor for. If an extractor doesn't exist, it tells you and moves on to the next link. I might not have an extractor for a site now, but I might have one in the future, so feel free to add the link anyway (that's what I do so I don't have to come back and add URLs later). When I make an extractor for the link, the next time you run -su it will be downloaded.

I have data caps on my internet so there are times where I want to download a bunch of stuff, but I don't want to download it now and go over my data cap and pay extra money. I store the links I want to download with -u and then when a new month hits and I can afford to download the content I go back to the folder with the .rat and run ripandtear -su (sync urls_to_download) so ripandtear will go and download everything I have saved.

The coomer and simpcity entries under the links section are intended for sites that will always have new posts/content added to them. This works for coomer now, and I am planning on doing it for simpcity in the future. Example: you add belledelphine's coomer profile to the .rat with ripandtear -c 'https://coomer.party/onlyfans/user/belledelphine'. Then you run ripandtear -sc to sync the coomer profile(s). Ripandtear looks at all of the profile links that are stored, finds every post on every page under those profiles, finds the links to all content under every post, then downloads everything it finds and saves the links in the .rat file for record keeping. A month later you run ripandtear -sc again. This time, while looking at posts, it only adds posts that it hasn't recorded yet. If it doesn't have a record of the post/content, it downloads it; if it does have a record that it was already downloaded, it skips the post. That way you just download what is new, not a bunch of duplicates.
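
Condensed into commands, that workflow looks roughly like this (belledelphine is just the example from the paragraph above):

Code:
# Record the coomer profile in the .rat, then download everything from it.
ripandtear -c 'https://coomer.party/onlyfans/user/belledelphine'
ripandtear -sc

# A month later: only downloads posts that haven't been recorded yet.
ripandtear -sc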

The command line flags that add information just append it. They also check for duplicates, so you don't have to worry about adding identical information twice. If in doubt, just add it (if you mess up by adding a wrong name you will have to manually remove the entry from the .rat). Here is an example of a folder and .rat I have for belledelphine. Let's say I find out she also has a reddit account. All I need to do is go to where her .rat is stored and run ripandtear -r <reddit_name> and it will add the reddit name to the reddit section under names. I could then download all unrecorded reddit content with ripandtear -sr (sync reddit). Currently ripandtear only adds information, so you don't have to worry about accidentally deleting anything.

Code:
Please, Log in or Register to view codes content!

JSON:
Please, Log in or Register to view codes content!
 

johnny.barracuda

froppalo Here is an example of how I use ripandtear

-mk - make a directory with '<name>' and then go into it (and run the following flags)

-r - adds reddit username

-o - adds onlyfans username

-s - stores simpcity link so I can download it later when I create an extractor

-i - stores instagram name

-u - save the following urls so I can download them later by running ripandtear -su

-sr - sync reddit content (running it this first time downloads all reddit content)

-H - hash files and remove duplicates

-S - sort files into the correct directory.

This is the result:

nude-eva/
├── nude-eva.rat
├── pics/ (197 pics)
└── vids/ (73 vids)



 

johnny.barracuda

froppalo I forgot to add her redgifs profile. All I do is run these commands:

cd nude-eva

ripandtear -R 'evaporshe' -sR -H -S

-R - add her redgifs username to the .rat (create a .rat if one doesn't exist)

-sR - sync Redgifs

-H - Hash and remove duplicates

-S - sort files
 

johnny.barracuda

Please, Log in or Register to view quotes
No, but looking at the tags on redgifs it looks like it might be easy to implement. Could you either post a link to what you are talking about here or PM the link to me so I know we are talking about the same thing? I might be able to add that functionality today if you do.
 

johnny.barracuda

Please, Log in or Register to view quotes
Just implemented it and uploaded the new version. You can download ripandtear with pip. Re-read the install instructions if you have already read them once before.

Make sure you run

playwright install

after downloading ripandtear. Playwright is a web development tool that launches a web browser in the background. Ripandtear uses Playwright to load the JavaScript on redgifs so it can scroll down the page and find all the available videos.
 

johnny.barracuda

Made a semi-breaking change to ripandtear. It only applies from 0.9.6 onwards. Previously I used a comma ( , ) as the separator between URLs to download and between names. That created a conflict with redgifs, so I have changed it to a pipe ( | ).

Here is how it looks now:

Previously:

ripandtear -d 'https://www.redgifs.com/gifs/lexi-luna,cumshot?order=new,https://www.redgifs.com/watch/masculineupbeatlangur#rel=user%3Aoutrageousone;order=new'

Currently:
ripandtear -d 'https://www.redgifs.com/gifs/lexi-luna,cumshot?order=new|https://www.redgifs.com/watch/masculineupbeatlangur#rel=user%3Aoutrageousone;order=new'

Adding names to a .rat:

Previously:

ripandtear -r 'test,name'

Currently:

ripandtear -r 'test|name'
 

glandalf

Really like your tool johnny.barracuda, I'm kinda annoyed I've only just found it to be honest lol.

Got a couple of suggestions of things you might consider implementing.

1) The ability to pass a specific path into the tool, instead of having to have the directory you want to work in as your current working directory. E.g. ripandtear --directory="/path/to/folder" -H -S
2) The file sorter util seems to be case-sensitive and only looks at extensions that are lowercase. E.g. it will sort video.mov into the vids folder, but not video.MOV