Installing Stable Diffusion WebUI & Nudifying (Inpainting) Guide

  • Welcome to the Fakes / AI / Deepfakes category
    When posting AI generated content please include prompts used where possible.

abcdfr1

Lurker
Mar 13, 2022
1
0
61
Hi, I wanted to comment that 6 GB of VRAM (RTX 2060) is plenty for inpainting. As for resolution, in my experience you can't go much higher than about 600x600 before out-of-memory errors kick in.

Could I get a source on 6 GB being enough for training embeddings?
 
Feb 9, 2023
1
0
46
Can't get it to work. I'm pretty sure I've set everything up right, but when I set the prompts, mask the image, and hit Generate, it immediately fails and gives this error:
Code:
Please, Log in or Register to view codes content!
I'm not 100% sure, but is it my GPU, which is really outdated (GTX 770)? Can someone confirm that for me?

I've been googling a shit ton and can't find anything to fix this.
 

softarchitect

Tier 3 Sub
May 16, 2022
55
419
512
Please, Log in or Register to view quotes
Oh, I thought it was installed by default.

What you can try is to go to Extensions, Install from URL tab, and then enter
Please, Log in or Register to see links and images
and install.

Afterwards hit "Apply and Restart UI" from the Installed tab. That should normally install xformers in the docker container.

It also installs additional training functionality under the "Kohya sd-scripts" tab.
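For anyone who prefers the command line, the same thing can be done manually: a webui extension is just a git checkout inside the extensions folder. A rough sketch, assuming the docker layout from this guide (the extensions path may differ per setup, and the repository URL below is a placeholder; substitute the extension link from the post above):

```shell
# Sketch only; the path follows the docker setup from this guide.
# The URL is a placeholder -- use the extension repo linked above instead.
EXT_DIR=~/stable-diffusion-webui-docker/data/config/auto/extensions
mkdir -p "$EXT_DIR"
git clone https://example.com/some-extension.git "$EXT_DIR/some-extension"

# Restart the container so the webui picks the extension up:
docker compose restart
```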
 

softarchitect

Tier 3 Sub
May 16, 2022
55
419
512
Please, Log in or Register to view quotes
Unfortunately, it seems the GTX 770 has only 2 GB of VRAM? The guide mentions that you would need a bare minimum of 4 GB of VRAM, and even 4 GB is stretching it.

If your GPU does have 4 or more GB of graphics VRAM, you could try installing xformers as mentioned in my previous post.
But not 100% sure it will fix it in this case.
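If you're not sure how much VRAM your card actually has, you can check from a terminal before trying any flags. A small sketch, assuming an NVIDIA card (nvidia-smi ships with the NVIDIA driver); it falls back to a message when no driver is present, so it is safe to run anywhere:

```shell
# Print the GPU model and total VRAM, or a fallback message if no
# NVIDIA driver (and hence no nvidia-smi) is installed.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_INFO=$(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)
else
  GPU_INFO="nvidia-smi not found (no NVIDIA driver installed?)"
fi
echo "$GPU_INFO"
# A GTX 770 typically reports 2048 MiB, i.e. below the 4 GB minimum.
```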
 

race124

Lurker
May 31, 2022
2
0
49
Getting "RuntimeError("Cannot add middleware after an application has started")" when trying the second command after unzipping the file. Any help? I followed the instructions to a T. I've tried reinstalling docker.
 

Drakonis

Lurker
Mar 12, 2022
2
0
46
I've used textual inversions with some degree of success, but I'm melting my neurons trying to work out how to add LoRA models to the automatic1111 docker. Has anyone achieved this?
 

softarchitect

Tier 3 Sub
May 16, 2022
55
419
512
Please, Log in or Register to view quotes
If you use the docker version (as in this guide), you can put the LoRA models in <project folder>/data/Lora (it should already exist). Then go to the webui and click on the 3rd icon on the right (below the Generate button).

[screenshot: the extra networks icon below the Generate button]


This will show all the embeddings/LoRA available below the model parameter settings. Click on "Lora" and you should see all the LoRA models available; if not, use the refresh button to reload them.
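To sketch the file layout, here is a runnable demo. It uses a throwaway temp directory and an empty stand-in file so the commands are safe to run as-is; in practice, point PROJECT at your real project folder and copy in a real .safetensors file (downloaded from e.g. Civitai):

```shell
# Demo of where LoRA files go in the docker layout from this guide.
# mktemp keeps this runnable anywhere; replace with your real folder.
PROJECT="$(mktemp -d)/stable-diffusion-webui-docker"
mkdir -p "$PROJECT/data/Lora"
touch "$PROJECT/data/Lora/myLora.safetensors"   # stand-in for a real download
ls "$PROJECT/data/Lora"
# In the prompt, a LoRA is then activated with e.g.:  <lora:myLora:0.8>
```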
 

Drakonis

Lurker
Mar 12, 2022
2
0
46
Got LoRA running, but I had CUDA memory problems (LoRA eats VRAM). In case anyone experiences the same issue, this solved it for me:

./webui.sh --listen --disable-safe-unpickle --deepdanbooru --medvram --xformers --opt-split-attention
 

thnx10

Bathwater Drinker
Feb 15, 2023
38
1,124
1,012
I have installed Stable Diffusion AUTOMATIC1111 via Python; will this work?
 

softarchitect

Tier 3 Sub
May 16, 2022
55
419
512
Please, Log in or Register to view quotes

6 GB VRAM "should" be enough; if not, you can install xformers and use the --xformers parameter, which should bring the VRAM usage down. You could also use the --medvram or --lowvram parameters when starting the webui. More info:
Please, Log in or Register to see links and images

But using medvram or lowvram moves a lot to system RAM, so you would need more RAM.

16 GB RAM is quite low... you can try, but the recommended minimum is 32 GB RAM, especially when using the medvram (docker default) or lowvram parameters.

But try it out, and let us know if it works, and with which parameters.
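For reference, launch lines combining the flags discussed above might look like this (these are standard AUTOMATIC1111 command-line args; pick one line depending on your card, not both):

```shell
# ~6 GB cards: xformers plus moderate offloading to system RAM
./webui.sh --xformers --medvram

# ~4 GB cards: aggressive offloading to system RAM; much slower per image
./webui.sh --xformers --lowvram
```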