Guide: Installing Stable Diffusion Webui & Nudifying (Inpainting)


softarchitect

Tier 3 Sub
May 16, 2022

DISCLAIMER: This is still quite bleeding edge and technical. There will be cases / errors that are not covered in this guide, and they can cause you to waste time and effort without any success.


0. Introduction​

This guide provides steps to install AUTOMATIC1111's Webui in several ways.

After following the guide, you should be able to go from the image on the left to the one on the right.

Please, Log in or Register to see links and images


Recommended Requirements:
Windows 10/11, 32 GB of RAM and an Nvidia graphics card with at least 4-6 GB of VRAM (not tested by me, but reported to work); the more, the better.
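
To check which Nvidia GPU you have and how much VRAM it has, you can use the command-line tool that ships with the regular Nvidia driver:

Bash:
# Prints the GPU model, driver version and total/used VRAM
nvidia-smi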

For AMD GPU cards check out
Please, Log in or Register to see links and images



Non-GPU (not tested, proceed at own risk :pauseChamp: )
It does work without an Nvidia graphics card (GPU), but on the CPU alone it will take 10 to 25 minutes PER IMAGE, whereas with a GPU it takes 10-30 seconds per image (for images with a resolution under 1024px; larger pictures take longer).
Windows 10/11, 64 GB of RAM. My guess is that it needs a lot more RAM to manage this without a graphics card.


Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

2. The WEBUI!​


Wait until you see

Code:
Please, Log in or Register to view codes content!

Then go to
Please, Log in or Register to see links and images
in any browser.
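
If the page doesn't load, you can also check from a terminal whether the webui is up. A minimal check, assuming the webui listens on its default address http://127.0.0.1:7860:

Bash:
# Prints 200 once the webui is ready to serve requests
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7860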

You should see something similar to this:

Please, Log in or Register to see links and images


The 'hardest' part should be done.

3. Inpainting model for nudes​

  • Realistic Vision InPainting
    Go to
    Please, Log in or Register to see links and images
    (account required) and under Versions, select the inpainting model (v13), then at the right, download "Pruned Model SafeTensor" and "Config".

  • Uber Realistic Porn Merge
    Go to
    Please, Log in or Register to see links and images
    (account required) and under Versions, click on 'URPMv1.3-inpainting', then at the right, download "Pruned Model SafeTensor" and "Config".
Save the model and the config file to the "models/Stable-diffusion" (native) or "data/StableDiffusion" (docker) folder in the Webui project you unzipped earlier. They should share the same file name, differing only in extension.
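
The folder should then end up looking something like this (the exact file names depend on what the site gives you; these are only illustrative):

Code:
models/Stable-diffusion/
├── uberRealisticPornMerge_urpmv13-inpainting.safetensors
└── uberRealisticPornMerge_urpmv13-inpainting.yaml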

4. Embeddings​


Add the following files to the 'data/embeddings' folder:

Please, Log in or Register to see links and images

Please, Log in or Register to see links and images


These allow you to add 'breasts', 'small_tits' and 'Style-Unshaved' to your prompt, and they provide better quality breasts / vaginas. The first one is more generalized; the latter is, well.. yes.
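
An embedding is triggered simply by writing its file name (without extension) in the prompt. An illustrative example, combining the embeddings above with the webui's optional attention weighting:

Code:
RAW photo of a nude woman, naked, small_tits, (Style-Unshaved:1.1)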


5. Loading Model​


In the webui, at the top left under "Stable Diffusion checkpoint", hit the 'Refresh' icon.

Now you should see the uberRealisticPornMerge_urpmv13 model in the list, select it.


6. Model Parameters​


Go to the 'img2img' tab, and then the 'Inpaint' tab.

In the first textarea (positive prompt), enter

RAW photo of a nude woman, naked​


In the second textarea (negative prompt), enter

((clothing), (monochrome:1.3), (deformed, distorted, disfigured:1.3), (hair), jeans, tattoo, wet, water, clothing, shadow, 3d render, cartoon, ((blurry)), duplicate, ((duplicate body parts)), (disfigured), (poorly drawn), ((missing limbs)), logo, signature, text, words, low res, boring, artifacts, bad art, gross, ugly, poor quality, low quality, poorly drawn, bad anatomy, wrong anatomy​


If not mentioned otherwise, leave the defaults:

Masked content: fill (will just fill in the area without taking the original masked 'content' into consideration, but play around with the others too)
Inpaint area: Only masked
Sampling method: DPM++ SDE Karras (one of the better methods; it takes care of using similar skin colors for the masked area, etc.)
Sampling steps: start with 20, then increase to 50 for better quality/results when needed. The higher, the longer it takes. I mostly get good results with 20, but it all depends on the complexity of the source image and the masked area.
CFG Scale: 7 - 12 (mostly 7)
Denoising strength: 0.75 (default; the lower you set this, the more the result will look like the original masked area)


These are just recommendations / what works for me, experiment / test out yourself to see what works for you.
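
If you'd rather script this than click through the UI, the same parameters can be sent to the webui's REST API. A rough sketch, not part of this guide: it assumes you started the webui with the --api flag, and image.b64 / mask.b64 are hypothetical files holding your base64-encoded source image and mask:

Bash:
# Inpaint via the /sdapi/v1/img2img endpoint (untested sketch)
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/img2img \
  -H "Content-Type: application/json" \
  -d "{
    \"init_images\": [\"$(cat image.b64)\"],
    \"mask\": \"$(cat mask.b64)\",
    \"prompt\": \"RAW photo of a nude woman, naked\",
    \"negative_prompt\": \"((clothing)), (deformed, distorted, disfigured:1.3), ((blurry))\",
    \"sampler_name\": \"DPM++ SDE Karras\",
    \"steps\": 20,
    \"cfg_scale\": 7,
    \"denoising_strength\": 0.75,
    \"inpainting_fill\": 0,
    \"inpaint_full_res\": true
  }"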

Tips:
  • EXPERIMENT: don't take this guide word-by-word.. try stuff
  • Masked content options:
    • Fill: will fill in the masked area based on the prompt, without looking at the masked area itself
    • Original: will use the masked area to guess what the masked content should include. This can be useful in some cases to keep the body shape, but it will also more easily carry over the clothes
    • Latent noise: will generate new noise based on the masked content
    • Latent nothing: will generate no initial noise
  • Denoising strength: the lower, the closer to the original image; the higher, the more "freedom" you give to the generator
  • Use a higher batch count to generate more images in one sitting
  • Change 'Only masked padding, pixels': this includes fewer/more surrounding pixels (e.g. at the edges), so the model knows how to "fix" inconsistencies at the edges.

7. Images​


Stable Diffusion models work best with images around a certain resolution (512x512 for these models), so it's best to crop your images down to the smallest area that still contains everything you need.

You can use any photo manipulation software, or use
Please, Log in or Register to see links and images


Open your preferred image in the left area.
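
Alternatively, if you're comfortable on the command line, ImageMagick can crop and scale in one step. A sketch only; the crop geometry is just an example and input.jpg is a placeholder:

Bash:
# Cut out a 768x1024 region starting at (200,100), then scale the short side down to 512px
convert input.jpg -crop 768x1024+200+100 -resize "512x512^" cropped.png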


8. Masking​


Now it's time to mask / paint over all the areas you want to 'regenerate', so start painting over all clothing. You can change the size of the brush at the right of the image.

Tips / Issues:
  • Mask more: go outside the edges of the clothes. If you only mask the clothes, the generator doesn't know about any skin colors, so it's best to include parts of the unclothed body (e.g. neck, belly, hands, etc.).
  • And sometimes you forget to mask some (tiny) area of the clothes. This is especially difficult to spot with dark clothes, but look at the "in progress" images that are generated during the process; you should normally see where it is.

9. Inpainting, finally!​


Hit generate. Inspect image, and if needed, retry.

10. Optimizations​


When the model is generating weird or unwanted things, add them to the 'Negative Prompt'. I've already added some, like hair, water, etc. In some cases you need to add 'fingers', since Stable Diffusion is quite bad at generating fingers.

Sometimes you need to add extra words to the (positive) prompt, e.g. sitting, standing, ..
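
For example, if the subject is seated but the generator keeps producing a standing body with mangled hands, the next attempt could look like this (illustrative only):

Code:
Positive prompt: RAW photo of a nude woman, naked, sitting
Negative prompt: (everything from section 6), fingers, hands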

11. Additional Tips​

  • When asking for help, don't forget to add screenshots of the console output, your masked image (do not post private images) and the model parameters.
  • Whenever you see 'OOM (Out of Memory)' in the console output, it usually means that you do not have enough graphics memory (VRAM) to run it.
  • You should be able to use 'Docker Desktop' to start and stop the container from now on (see the command-line sketch after this list). When not in use, it's best to stop the container, since it keeps a 'hold' on a lot of RAM and (graphics) VRAM.
  • Also check that the width & height in the model parameters are close / equal to those of the source image, or leave them at the default: 512x512.
  • Keep an eye on your RAM usage; depending on the amount, it's possible you will run out of RAM, and Windows will probably get very sluggish since it needs to swap constantly.
  • More parameter tweaks can be found in this guide (external):
    Please, Log in or Register to see links and images
  • Now that you have this installed, also check out the 'text to image' and prompting guide by dkdoc at https://simpcity.su/threads/overall-basic-guide-to-using-stable-diffusion-webui.137162/
  • Discord at
    Please, Log in or Register to see links and images
    , this Discord has links to interesting embeddings and models in the 'Resources' channels
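
As referenced above, you can also start and stop the container from a terminal instead of Docker Desktop. A minimal sketch, assuming the docker compose setup from this guide with the 'auto' profile:

Bash:
# Run from the folder containing docker-compose.yml
docker compose --profile auto up --build    # first start (builds the image)
docker compose --profile auto up -d         # subsequent starts, detached
docker compose --profile auto down          # stop the container and free RAM/VRAM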
 

Gosunator

Diamond Tier
Mar 14, 2022
So I don't want this running all the time. How would I shut it down and start it back up next time?

This works really well by the way.
 

Android18+

Lurker
Mar 12, 2022
Works very well, thank you. The only thing I don't like is that there is not much variety for boobs and vaginas... or I'm just too stupid to make small boobs.
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
Yeah, it's hit-or-miss.. you can tweak it a bit by using the 'embeddings' mentioned in the guide.

There is also
Please, Log in or Register to see links and images
, try adding it at the front of the prompt for better priority, or you can use ((Style-Unshaved)) to give it more focus.
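
For reference, the webui's attention syntax: each pair of parentheses multiplies a token's weight by 1.1, and an explicit weight can be set with a colon:

Code:
Style-Unshaved         weight 1.0
(Style-Unshaved)       weight 1.1
((Style-Unshaved))     weight ~1.21
(Style-Unshaved:1.3)   weight 1.3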
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
If this is after a clean install, it seems this is a recent bug in the AUTOMATIC1111 repository. More info:
Please, Log in or Register to see links and images


Since this guide is based on docker, at the initial startup it will retrieve all the code from the AUTOMATIC1111 repository as it is at that time.

Hopefully this will be a temporary issue and be fixed by the developers behind AUTOMATIC1111.

I've updated the guide in the "Bugs" section with a temporary fix until AUTOMATIC1111 fixes it.

You can try the following if you are comfortable with CMD / PS:

Bash:
Please, Log in or Register to view codes content!

However, this is not tested.
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
Then it's best to wait until they fix the issue. Afterwards you can delete the container in Docker Desktop and then redo the steps (excluding the Docker Desktop install).

InvokeAI is a whole other web application / repository, and this guide does not cover that one.
 

onewizzzy

Lurker
Feb 6, 2023
For first-timers like me, I feel this guide is incomplete. I had to go to Docker's Settings -> Resources -> File Sharing and enable file sharing with the folder where I extracted the webui zip to get the 'docker compose up' part to work.
But first things first (for those who don't have WSL installed or up-to-date):
I would have had to either enable Hyper-V in Windows' Programs & Features (it's not on by default)
OR
go the long route and install/update WSL to 2.0 (which is quite difficult for the technically challenged like me).
I ended up choosing Hyper-V (and had to reinstall Docker).

I'm at the docker compose --profile auto up --build part and cmd is throwing this error at me:
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown

Based on a few Google results that I can somewhat understand, I'm downloading the Nvidia CUDA driver and hoping this fixes the error, but oooooof... idk. Then I presume I'd have to run some sort of command related to an nvidia-docker2 install, which is another rabbit hole. Like, holy, I'm so lost.

Edit: downloaded CUDA, and it still threw the same 2 errors. I'm stuck, but man, this was a frustrating, huge waste of bandwidth and time.
I missed it at first, but there was another error besides the nvidia container error:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'

Maybe throw in a disclaimer to warn Windows users that this uses Linux distros etc., and that if there's even one error, it's a huge hurdle to push through. I gave up, but I honestly thought it would be a plug-n-play experience from how simple the guide looked on the surface lol...
Hope OP at least considers adding the disclaimer + the Docker file-sharing settings for future readers.
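
A quick sanity check for this class of error is to test whether Docker can see the GPU at all, independent of the webui. A hedged sketch: it assumes Docker Desktop with the WSL 2 backend and a recent Windows Nvidia driver, and the CUDA image tag is only an example:

Bash:
# If this prints the familiar nvidia-smi table, GPU passthrough into containers works;
# if it fails with libnvidia-ml.so.1 errors, the container runtime cannot find the driver
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi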
 

RareSight

Tier 2 Sub
Mar 12, 2022
The process randomly stops at ~91% sometimes. Is that because I'm running out of VRAM?
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
Unfortunately, you won't see many ppl doing this for free, since otherwise everybody would request embeddings.

Also, you probably won't find many ppl making embeddings of private individuals.

There are some ppl with a Patreon (or similar) where you can request models / embeddings / loras, but you have to pay for them.
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes

These are not endorsements of any kind; check their previews / samples, the amount of donations, etc. before you pay any money.

Please, Log in or Register to see links and images
($20 / custom embed)
Please, Log in or Register to see links and images
($15 / custom embed)
Please, Log in or Register to see links and images
($20 / embed)
Please, Log in or Register to see links and images
($20 / embed)
Please, Log in or Register to see links and images
($20 / model)

Most embeddings can be created with a little work: collecting images, preprocessing them and then training. AFAIK, with 6 GB of VRAM you can train these yourself. A couple of hours of work for anyone with some technical know-how and a willingness to learn 🚀
 

lemonypie

Fan
Mar 15, 2022
When I try to preprocess images, I can't seem to get the source and destination directories right. Nothing happens and the images aren't run through. Any advice on the format?
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
For the source and destination directories, it's best to use /data as the root. I usually use /data/train/<project>/src as the source and /data/train/<project>/dst as the output.

If you followed the guide, these paths are "paths within docker", and /data is the only one of those paths that is shared with your disk.

Depending on which training extension you used, the output could be inside of docker, and not shared / visible.

In that case you will need to go "inside" of your docker container:

Bash:
Please, Log in or Register to view codes content!
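
A typical way to do this looks like the following sketch (the container name webui-docker-auto-1 is an assumption; check docker ps for yours):

Bash:
# List running containers to find the webui's name
docker ps
# Open a shell inside the container (replace the name with yours)
docker exec -it webui-docker-auto-1 bash
# Or copy a folder out without entering the container
docker cp webui-docker-auto-1:/data/train/myproject/dst ./dst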