
Guide Onlyfans Downloading - A complete guide for PC and Mobile

X34

Simp Dawg
Mar 10, 2022

Assistance


If you are looking for help with any of the methods listed, you MUST say what method/script you are using.
There are multiple scripts, and people keep not saying which one they are using, which means nobody can give help without asking first.
 

roadrunner47

Bathwater Drinker
Mar 13, 2022
Is there a minimum system requirement for getting any of the DRM methods to work? I think I've got a couple of old laptops that run Windows 7/8, or will I need to install a recent version of Windows on my Mac?

Edit - Will probably be using OF-DL
 

HereForTheLoot

🔞 ¯\_(ツ)_/¯ 🔞
Mar 11, 2022
Oops, my post from yesterday got removed for not mentioning I was using OF-DL 1.7.36. I was wondering if OF-DL sometimes sends too many requests, resulting in errors and the exe exiting out. Are there any plans for a rate limiter in the options? I don't mind running the scraper and just letting it go for a long time.
Would hate to have OF flag an account for scraping too fast and risk a ban.
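
Even a crude option would be enough, e.g. a minimum delay between requests. This isn't OF-DL's actual code, just a sketch of the kind of thing I mean:

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Sketch of a client-side rate limiter: never let two requests go out closer
// together than a configured minimum delay, regardless of how many are queued.
public class ThrottledClient
{
    private readonly HttpClient _client = new HttpClient();
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly TimeSpan _minDelay;
    private DateTime _lastRequest = DateTime.MinValue;

    public ThrottledClient(TimeSpan minDelay) => _minDelay = minDelay;

    public async Task<string> GetStringAsync(string url)
    {
        await _gate.WaitAsync();
        try
        {
            var wait = _minDelay - (DateTime.UtcNow - _lastRequest);
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait);          // space the requests out
            _lastRequest = DateTime.UtcNow;
            return await _client.GetStringAsync(url);
        }
        finally
        {
            _gate.Release();
        }
    }
}

Something like that, configurable in milliseconds from the options, would make long runs a lot less nerve-wracking.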
 

sim0n00ps

Bathwater Drinker
Mar 11, 2022
The process that calculates the total scrape size is the contributing factor to that. I am in the process of making it so accounts with 2000+ posts don't calculate the total scrape size; instead, the progress will just show the total number of posts being downloaded. Making 2000+ requests just to get the file sizes is an expensive operation, and it's probably the main reason you get rate limited.
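
Roughly, the difference looks like this - simplified, not the actual code, just to show where the extra requests come from:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Simplified illustration: pre-calculating the total size costs one extra
// request per media URL, so a 2000+ post account means 2000+ round trips
// before a single byte of content is downloaded.
public static class ScrapeSizeDemo
{
    public static async Task<long> TotalSizeUpFront(HttpClient client, List<string> mediaUrls)
    {
        long total = 0;
        foreach (var url in mediaUrls)
        {
            using var request = new HttpRequestMessage(HttpMethod.Head, url);
            using var response = await client.SendAsync(request);
            total += response.Content.Headers.ContentLength ?? 0;  // one round trip per file
        }
        return total;
    }

    // The cheap alternative: skip the size pass entirely and report progress
    // as "file X of N", which needs no extra requests at all.
    public static int TotalCount(List<string> mediaUrls) => mediaUrls.Count;
}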
 

X34

Simp Dawg
Mar 10, 2022
TBH, your script did run better and was faster when it didn't calculate size. It could just give progress as the number of files done out of the total, removing the entire size-scrape process and reducing the chance of a rate limit.
I noticed the datawhores scraper calculates size as it scrapes, so it's more of a running count of how much has been downloaded. That script runs much faster, and I've never had a rate-limit issue with it.
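
Something along these lines, I'd guess - just an illustration of the idea, not the datawhores code:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

// The "running count" idea: add each file's size to a counter as it finishes
// downloading, instead of a separate size-calculation pass before anything starts.
public static class RunningTotal
{
    private static long _bytesDownloaded;   // shown in the progress line as it grows
                                            // (use Interlocked.Add if downloads run in parallel)

    public static async Task DownloadAsync(HttpClient client, string url, string path)
    {
        var data = await client.GetByteArrayAsync(url);
        await File.WriteAllBytesAsync(path, data);
        _bytesDownloaded += data.Length;    // running total, no extra requests
        Console.WriteLine($"Downloaded so far: {_bytesDownloaded / 1048576.0:F1} MB");
    }
}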
 

sim0n00ps

Bathwater Drinker
Mar 11, 2022
The initial few versions of my script didn't calculate the total scrape size at all, and I also remember it being much faster. Thinking about it now, removing it might be the best thing to do; I would rather the script spend more time downloading content than calculating the total scrape size.
 

xtramuffin

ᴡᴀᴛᴇʀ ᴅʀɪɴᴋᴇʀ
Mar 11, 2022
What about those of us who like to have the scrape size to decide whether to scrape or not? Is there a way for you to do both, so we can choose between the two options?
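
Even just a flag in config.json would do it - something like this (made-up setting name, not an existing option):

// Hypothetical setting, not an existing config.json key - just to show what I mean:
public class Config
{
    // true  = do the slow size pass up front and show progress in MB/GB
    // false = skip it and show progress as "file X of N"
    public bool CalculateTotalScrapeSize { get; set; } = false;
}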
 

HereForTheLoot

🔞 ¯\_(ツ)_/¯ 🔞
Mar 11, 2022
As a data hoarder I have to agree with making it a setting. Couldn't care less for the size and would rather have the speed.

Thanks for all your hard work sim0n00ps - your program has been a godsend 🙏
 

blackstallion36

Bathwater Drinker
May 9, 2022
Hi sim0n00ps.

Is there any way to force OF-DL v1.7.36 to redownload files that have previously been downloaded? I imagine it would involve editing the user_data.db in the Metadata folder, but I'm unsure how to proceed.

My apologies if this question has been asked and answered before.
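
The closest I've got is something like this (using the Microsoft.Data.Sqlite package), but I have no idea whether the table and column names match the real schema, so treat it as a guess and back the file up first:

using System;
using Microsoft.Data.Sqlite;

// GUESSWORK: the table and column names below ("medias", "downloaded") are
// assumptions, not the confirmed user_data.db schema. Back up the file before
// touching it, and adjust the path to wherever the creator's Metadata folder is.
class ResetDownloads
{
    static void Main()
    {
        using var conn = new SqliteConnection("Data Source=Metadata/user_data.db");
        conn.Open();
        using var cmd = conn.CreateCommand();
        cmd.CommandText = "UPDATE medias SET downloaded = 0;";   // mark everything as not downloaded
        int rows = cmd.ExecuteNonQuery();
        Console.WriteLine($"Reset {rows} rows.");
    }
}

If anyone knows the actual table names, please correct me.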
 

Applejuice

Lord Of The Juice 🧃
Mar 11, 2022
Rename these files accordingly:

Original name      New name
client_id.bin      device_client_id_blob
private_key.pem    device_private_key

Make sure the extension is removed; this should happen automatically when you rename the files.
Check that the file type has changed to just "File" instead of a BIN or PEM file.*
[screenshot: before/after comparison of the renamed files]

*Note that the screenshot is a comparison; you will only have the two files (device_client_id_blob and device_private_key).
 

tw1st3d

( • )( • )
Mar 10, 2022
Hey sim0n00ps, has something changed on OnlyFans? I'm getting the error below, and also on posts if I get past the login.

auth.json located successfully!
config.json located successfully!
yt-dlp.exe located successfully!
ffmpeg.exe located successfully!
mp4decrypt.exe located successfully!
device_client_id_blob located successfully!
device_private_key located successfully!
Logged In successfully as
Exception caught: Response status code does not indicate success: 400 (Bad Request).

StackTrace: at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at OF_DL.Helpers.APIHelper.BuildHeaderAndExecuteRequests(Dictionary`2 getParams, String endpoint, Auth auth, HttpClient client)
at OF_DL.Helpers.APIHelper.GetAllSubscriptions(Dictionary`2 getParams, String endpoint, Auth auth)
Exception caught: Object reference not set to an instance of an object.

StackTrace: at OF_DL.Program.DownloadAllData()
at OF_DL.Program.Main()

Update:

I've logged out and back in and tried scraping again.

Getting Paid Posts
Found 0 Paid Posts
Getting Posts
Exception caught: Response status code does not indicate success: 400 (Bad Request).

StackTrace: at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at OF_DL.Helpers.APIHelper.BuildHeaderAndExecuteRequests(Dictionary`2 getParams, String endpoint, Auth auth, HttpClient client)
at OF_DL.Helpers.APIHelper.GetPosts(String endpoint, String folder, Auth auth, Config config, List`1 paid_post_ids)
Found 0 Posts
Getting Archived Posts
Found 0 Archived Posts
Getting Stories
Found 0 Stories
Getting Highlights
Found 0 Highlights
Getting Messages
Found 97 Messages

Downloading 97 Messages ---------------------------------------- 100% 510.0 MB 00:00:00

Messages Already Downloaded: 97 New Messages Downloaded: 0
Getting Paid Messages
Found 0 Paid Messages

████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████
■ Paid Posts 0 ■ Posts 0 ■ Archived 0 ■ Stories 0 ■ Highlights 0 ■ Messages 97 ■ Paid Messages 0

Scrape Completed in 1.98 minutes
Select Accounts to Scrape | Select All = All Accounts | List = Download content from users on List | Custom = Specific Account(s)
 

jijios

Casual
Mar 11, 2022
Help pls, @sim0n00ps.
I've copied client_id.bin and private_key.pem to cdm\devices\chrome_1610.

I get this message:
Exception caught: The type initializer for 'WidevineClient.Widevine.CDM' threw an exception.

StackTrace: at WidevineClient.Widevine.CDM.OpenSession(String initDataB64, String deviceName, Boolean offline, Boolean raw)
at WidevineClient.Widevine.CDMApi.GetChallenge(String initDataB64, String certDataB64, Boolean offline, Boolean raw)
at OF_DL.Helpers.APIHelper.GetDecryptionKeyNew(Dictionary`2 drmHeaders, String licenceURL, String pssh, Auth auth)

Inner Exception:
Exception caught: No client id blob found

StackTrace: at WidevineClient.Widevine.CDMDevice..ctor(String deviceName, Byte[] clientIdBlobBytes, Byte[] privateKey