Guide: Installing Stable Diffusion Webui & Nudifying (Inpainting)

  • Welcome to the Fakes / AI / Deepfakes category
    When posting AI generated content please include prompts used where possible.

softarchitect

Tier 3 Sub
May 16, 2022

DISCLAIMER: This is still bleeding edge and technical; there are probably cases and errors not covered in this guide, and they can cause you to waste time and effort without any success.


0. Introduction​

This guide provides steps to install AUTOMATIC1111's Webui in several ways.

After following it, you should be able to go from the left image to the right:

Please, Log in or Register to see links and images


Recommended Requirements:
Windows 10/11, 32 GB of RAM and an Nvidia graphics card with at least 4-6 GB of VRAM (not tested by me, but reported to work); the more the better.

For AMD GPU cards check out
Please, Log in or Register to see links and images



Non-GPU (not tested, proceed at own risk :pauseChamp: )
It does work without an Nvidia graphics card (GPU), but when using just the CPU it will take 10 to 25 minutes PER IMAGE, whereas with a GPU it will take 10 - 30 seconds per image (for images under 1024px; larger pictures take longer).
Windows 10/11, 64 GB of RAM. My guess is that it needs a lot more RAM to manage this without a graphics card.


Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

Please, Log in or Register to view spoilers

2. The WEBUI!​


Wait until you see

Code:
Please, Log in or Register to view codes content!

Then go to
Please, Log in or Register to see links and images
in any browser.

You should see something similar to this:

Please, Log in or Register to see links and images


The 'hardest' part should be done.

3. Inpainting model for nudes​

  • Realistic Vision InPainting
    Go to
    Please, Log in or Register to see links and images
    (account required) and under Versions, select the inpainting model (v13), then at the right, download "Pruned Model SafeTensor" and "Config".

  • Uber Realistic Porn Merge
    Go to
    Please, Log in or Register to see links and images
    (account required) and under Versions, click on 'URPMv1.3-inpainting', then at the right, download "Pruned Model SafeTensor" and "Config".
Save the model and config file to the "models/Stable-diffusion" (native) or "data/StableDiffusion" (docker) folder in the Webui project you unzipped earlier. Both files should have the same name, differing only in the extension.
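The "same name, different extension" pairing can be sanity-checked with a few lines of Python. A minimal sketch; the file names below are placeholders, not the exact names you will download:

```python
from pathlib import Path

def config_matches(model_path: str, config_path: str) -> bool:
    """True when a checkpoint and its config share the same base name
    (e.g. foo.safetensors + foo.yaml), which is how the webui pairs them."""
    return Path(model_path).stem == Path(config_path).stem

# hypothetical file names, for illustration only
print(config_matches("models/Stable-diffusion/urpmv13.safetensors",
                     "models/Stable-diffusion/urpmv13.yaml"))  # True
```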

4. Embeddings​


Add the following files to the 'data/embeddings' folder:

Please, Log in or Register to see links and images

Please, Log in or Register to see links and images


These allow you to use 'breasts', 'small_tits' and 'Style-Unshaved' in your prompt, and produce better quality breasts / vaginas. The first one is more generalized; the latter is, well.. exactly what it sounds like.


5. Loading Model​


In the webui, at the top left, "Stable Diffusion checkpoint", hit the 'Refresh' icon.

Now you should see the uberRealisticPornMerge_urpmv13 model in the list, select it.


6. Model Parameters​


Go to the 'img2img' tab, and then the 'Inpaint' tab.

In the first textarea (positive prompt), enter

RAW photo of a nude woman, naked​


In the second textarea (negative prompt), enter

((clothing), (monochrome:1.3), (deformed, distorted, disfigured:1.3), (hair), jeans, tattoo, wet, water, clothing, shadow, 3d render, cartoon, ((blurry)), duplicate, ((duplicate body parts)), (disfigured), (poorly drawn), ((missing limbs)), logo, signature, text, words, low res, boring, artifacts, bad art, gross, ugly, poor quality, low quality, poorly drawn, bad anatomy, wrong anatomy​


Unless otherwise mentioned, leave the defaults:

Masked content: fill (will just fill in the area without taking the original masked 'content' into consideration, but play around with the others too)
Inpaint area: Only masked
Sampling method: DPM++ SDE Karras (one of the better methods; it takes care of using similar skin colors for the masked area, etc.)
Sampling steps: start with 20, then increase to 50 for better quality/results when needed. The higher, the longer it takes. I mostly get good results with 20, but it all depends on the complexity of the source image and the masked area.
CFG Scale: 7 - 12 (mostly 7)
Denoising strength: 0.75 (default; the lower you set this, the more it will look like the original masked area)


These are just recommendations / what works for me, experiment / test out yourself to see what works for you.
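If you prefer scripting over clicking, the same settings map onto the request payload for AUTOMATIC1111's `/sdapi/v1/img2img` endpoint (available when the webui is launched with `--api`). This is only a sketch: field names can differ between webui versions, and the image/mask strings are placeholders you would fill with base64-encoded data:

```python
import json

# Assumed field names from the AUTOMATIC1111 webui API; verify against
# your version's /docs page before relying on them.
payload = {
    "prompt": "RAW photo of a nude woman, naked",
    "negative_prompt": "((clothing), (monochrome:1.3), ...",  # shortened; use the full list from section 6
    "sampler_name": "DPM++ SDE Karras",
    "steps": 20,
    "cfg_scale": 7,
    "denoising_strength": 0.75,
    "inpainting_fill": 0,      # 0 = fill, 1 = original, 2 = latent noise, 3 = latent nothing
    "inpaint_full_res": True,  # "Only masked"
    "init_images": ["<base64 source image>"],
    "mask": "<base64 mask image>",
}
print(json.dumps(payload, indent=2)[:120])
```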

Tips:
  • EXPERIMENT: don't take this guide word-by-word.. try stuff
  • Masked content options:
    • Fill: will fill in based on the prompt without looking at the masked area
    • Original: will use the masked area to make a guess to what the masked content should include. This can be useful in some cases to keep body shape, but will more easily take over the clothes
    • Latent Noise: will generate new noise based on the masked content
    • Latent Nothing: will generate no initial noise
  • Denoising strength: the lower the closer to the original image, the higher the more "freedom" you give to the generator
  • Use a higher batch count to generate more images in one run
  • Change 'Only masked padding, pixels': this includes fewer/more pixels of context around the mask (e.g. at the edges), so the model has more to work with when "fixing" inconsistencies at the edges.

7. Images​


Stable Diffusion models work best around certain resolutions (e.g. 512x512), so it's best to crop your images down to the smallest area that still contains the region you want to inpaint.
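If you crop programmatically, keep both sides divisible by 8, which is what the webui expects for generation dimensions. A minimal sketch of the arithmetic:

```python
def snap_crop(width: int, height: int, multiple: int = 8) -> tuple:
    """Shrink a crop region so both sides are divisible by `multiple`,
    since Stable Diffusion wants dimensions that are multiples of 8."""
    return (width - width % multiple, height - height % multiple)

print(snap_crop(515, 770))  # (512, 768)
```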

You can use any photo manipulation software, or use
Please, Log in or Register to see links and images


Open your preferred image in the left area.


8. Masking​


Now it's time to mask / paint over all the areas you want to 'regenerate', so start painting over all clothing. You can change the size of the brush at the right of the image.

Tips / Issues:
  • Mask more: go outside the edges of the clothes. If you only mask clothes, the generator doesn't know about any skin colors, so it's best to include parts of the unclothed body (e.g. neck, belly, hands, etc.).
  • Sometimes you'll have forgotten to mask some (tiny) area of the clothes.. this is especially difficult to spot with dark clothes, but look at the "in progress" images generated during the process.. you should normally see where it is.

9. Inpainting, finally!​


Hit generate. Inspect image, and if needed, retry.

10. Optimizations​


When the model is generating weird or unwanted things, add them to the 'Negative Prompt'. I've already added some, like hair, water, etc. In some cases you need to add 'fingers', since Stable Diffusion is quite bad at generating fingers.

Sometimes it's needed to add extra words to the (positive) prompt, i.e. sitting, standing, ..

11. Additional Tips​

  • When asking for help, don't forget to add screenshots of the console output, your masked image (do not post private images) and the model parameters.
  • Whenever you see 'OOM (Out of Memory)' in the console output, it usually means that you do not have enough (graphics) VRAM to run it.
  • You should be able to use 'Docker Desktop' to start and stop the container from now on. When not in use, it's best to stop the container, since it keeps a 'hold' on a lot of RAM and (graphics) VRAM.
  • Also check that the width & height in the model parameters are close / equal to that of the source image, or leave it at the default: 512x512.
  • Keep an eye on your RAM usage, depending on the amount, it's possible you will run out of RAM, and Windows will probably get very sluggish since it needs to swap constantly.
  • More parameter tweaks can be found in this guide (external):
    Please, Log in or Register to see links and images
  • Now that you have this installed, also check out the 'text to image' and prompting guide by dkdoc at https://simpcity.su/threads/overall-basic-guide-to-using-stable-diffusion-webui.137162/
  • Discord at
    Please, Log in or Register to see links and images
    , this Discord has links to interesting embeddings and models in the 'Resources' channels
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes

To be clear: DO NOT use the standard Stable Diffusion 1.5 Inpainting model.. it's useless, trash, garbage, etc. The Stable Diffusion 2.x models are even worse, since they removed a lot of NSFW training data.

You need to select the URPM inpainting model (at the top left), as mentioned in the guide. You can also use "Realistic Vision 1.3 Inpainting" model available on Civitai.com, that one is also good for nude inpainting.

Additionally, if you don't give the app enough masking room, it won't work.. it can only 're-generate' the masked area.. so you need to mask a big area to let the model regenerate what is behind the mask, and let it work its magic. If you only mask a very small area, like only a 'tight' bikini, it won't work that great.

There were issues with this image (due to the lighting or something, not really sure) that caused a 'color distortion / difference' below the masked area.. it's still there, but a bit less since I masked more of the body.. I think the best way to solve this is to either mask her whole body, or use another Sampling method. Below is a full screenshot, including parameters (very basic) and the source & dest..

Please, Log in or Register to see links and images


Result:

Please, Log in or Register to see links and images


In most cases you will never replicate the full breast structure of the source image, due to the problems I listed above. So it won't be 100% perfect.
 

9843yre23

Tier 3 Sub
Sep 7, 2022
Something that I have been doing recently that hasn't really been said yet (or I missed it):
Go bit by bit! When inpainting an image, try inpainting a single section with a specific prompt, then, pick the best output and upload it to inpaint another section and repeat. I find this to be effective in images with multiple subjects or "difficult" lighting, clothing, filters on top, etc.

Doing this not only allows you to be autistically specific with the prompt, you can also change the denoising, cfg scale, etc. per section, resulting in a better image than if you just used a single seed over the whole image, in my experience.
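The section-by-section workflow above can be sketched as a simple loop: each pass inpaints one region with its own prompt and settings, and the best output feeds the next pass. `inpaint` here is a stand-in for however you run a single inpaint job (UI or API), not a real webui function:

```python
# Hypothetical per-region settings; region names and values are examples.
passes = [
    {"region": "torso", "prompt": "nude torso", "denoise": 0.75, "cfg": 7},
    {"region": "legs",  "prompt": "bare legs",  "denoise": 0.60, "cfg": 9},
]

def inpaint(image, settings):
    # placeholder: pretend each pass records which region was inpainted
    return image + [settings["region"]]

image = []  # stand-in for the source image
for settings in passes:
    image = inpaint(image, settings)  # pick the best output, feed it forward
print(image)  # ['torso', 'legs']
```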
 

fafferfaffer

Tier 2 Sub
Oct 16, 2022
Please, Log in or Register to view quotes
I need to find the original images to get the workflow, but for now I can just say that the main part is I use the ROOP extension for Stable Diffusion for faces. Honestly I can't go back to normal generated girls or faces now after using it.
You just find a good frontal face pic of a celeb you like and stick it in the extension, and it does it so smoothly and fast.

And here's some pics so my post isn't just text.
Emma Watson, Lily Collins, Kristen Stewart.
Please, Log in or Register to see links and images
 

onewizzzy

Lurker
Feb 6, 2023
For first-timers like me, this guide feels incomplete: I had to go to Docker's Settings -> Resources -> File Sharing and enable file sharing with the folder where I extracted the webui zip to get the download part to work.
But first things first (for those who don't have WSL installed or up to date), I would have had to either enable Hyper-V in Windows' Programs & Features (it's not on by default)
OR
go the long route and install/update WSL to 2.0 (which is quite difficult for the technically challenged like me).
I ended up choosing Hyper-V (had to reinstall Docker).

I'm at the docker compose --profile auto up --build part and cmd is throwing this error at me:
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown

Based on a few Google results that I can somewhat understand, I'm downloading the Nvidia CUDA driver and hoping this fixes the error, but oooooof... then I presume I would have to run some command related to installing nvidia-docker2, which is another rabbit hole. I'm so lost.

edit: downloaded CUDA, still threw the same 2 errors. I'm stuck; this was a frustrating, huge waste of bandwidth and time.
I missed it earlier, but there was another error besides the Nvidia container error:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'

Maybe throw in a disclaimer to warn Windows users that this uses Linux distros etc., and that if there's even one error it's a huge hurdle to push through. I gave up, but honestly thought it would be a plug-n-play experience from how simple the guide looked on the surface lol...
Hope OP at least considers adding the disclaimer + the Docker settings edit for future readers.
 

Althater

Lurker
Mar 16, 2022
Please, Log in or Register to view quotes
The Realistic Vision inpainting model, when selecting "V1.3-inpainting", has neither "Pruned Model" nor "Config". The version "V2.0-inpainting" does have "Pruned Model", but also has no "Config" to download. So I don't know if something changed since the guide was written. What should I download now?
 

kophanzo

Superfan
Jun 26, 2022
So if you're like me, tired of inputting the same values into Stable Diffusion every time you launch the web page, here's the solution:

You can edit "ui-config.json" with Notepad in the root folder of Stable Diffusion (same dir where you launch webui-user.bat). Just search for the options you want to change and set them to whatever you want. I launch the webui first, find the option I want in the UI, and then search for its label in the file.

Don't forget: if you have already launched the web ui, you can't edit this options file. You need to close the webui server first, or copy the options file to the desktop, edit it there, and copy it back once you've closed the web ui. Otherwise it'll reset the options.
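The same edit can be scripted with Python's json module. A minimal sketch; the key name below follows the "tab/Label/value" pattern these files use, but is only an example (search your own file for the option you want), and the snippet writes to a temp folder so it won't touch a real install:

```python
import json
import pathlib
import tempfile

# stand-in for the real ui-config.json in the webui root folder
cfg_path = pathlib.Path(tempfile.mkdtemp()) / "ui-config.json"
cfg_path.write_text(json.dumps({"img2img/Sampling steps/value": 20}))

# read, change the default, write back (webui must be stopped first)
cfg = json.loads(cfg_path.read_text())
cfg["img2img/Sampling steps/value"] = 30
cfg_path.write_text(json.dumps(cfg, indent=4))

print(json.loads(cfg_path.read_text())["img2img/Sampling steps/value"])  # 30
```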
 

maratrus

Fan
Oct 18, 2022
How do you make the chest jump, as in this video?
IMG_8649c0bbea6035576704.gif
 

softarchitect

Tier 3 Sub
May 16, 2022
Please, Log in or Register to view quotes
Yeah, it's hit-or-miss.. you can tweak it a bit by using the 'embeddings' mentioned in the guide.

There is also
Please, Log in or Register to see links and images
, try adding it at the front of the prompt for better priority, or you can use ((Style-Unshaved)) to give it more focus.
 

Scott829

Lurker
Mar 13, 2022
Hey, I have a question and I'm not sure this is the place for it. If it's not, please let me know where else to go. But has anyone figured out how to do cum shots on pictures while nudifying with inpainting? Also, does anyone have prompts that help the chest and stomach match the skin tone a little better?
 

bdude0448

Tier 1 Sub
Jan 12, 2022
Please, Log in or Register to view quotes
Returning to answer my own question. In /stable-diffusion-portable-main/, find webui-user.bat, open it with whatever notepad program you use, and add --lowvram to the line set COMMANDLINE_ARGS=. Save it and double-click webui-user.bat to run it in low VRAM mode. I've been generating way larger images, albeit slower.
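For reference, the edited webui-user.bat would look roughly like this (the other lines are the stock defaults shipped with the webui; only the COMMANDLINE_ARGS line changes):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram

call webui.bat
```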
 

7toti6

👨‍💻
Mar 10, 2022
Please, Log in or Register to view quotes
You'd need to change the height so it's larger than the source (the default "Just resize" would stretch the source image to fit, but "Resize and fill" will keep the original image aspect ratio and fill in the extra space)

Another option is to use the outpainting script, if you scroll to the bottom and select "Outpainting mk2" under script then you can tell it how many pixels to expand, and in which directions:
Please, Log in or Register to see links and images
 

sinner18

Lurker
Nov 30, 2022
Please, Log in or Register to view quotes
I reduced my generation time from 10 mins to 2 mins for a single image using the following settings:
-Sampling Method: DPM++ 2M Karras (one of the best methods that takes half the time compared to some other methods)
-Sampling Steps: 20 (higher steps don't improve results noticeably in my opinion)
-Batch count and Batch size: 1 (generates a single image. Takes less time and gives you quick results so you can make changes to your mask and/or prompt)
Also, when you start SD, it takes much longer to generate the first image. Subsequent images take significantly less time.