
Guide Fake Lora/Lycoris Training Guide


pkmngotrnr

Bathwater Drinker
Mar 18, 2022
according to this, my model with over 100,000 steps should look like dog shit because it's overtrained, but the picture quality and extreme facial likeness say otherwise.

e.g. this is my new Amouranth model with over 20,000 steps, and it does not appear to be overtrained even with way more steps than the suggested ~2k.
 

ouciot

Fan
Mar 12, 2022
well, your lora is kind of a special case because most people would go the other way around (you trained on 10k images at, i imagine, 10 repeats, whereas a lot of folks would do 10 images at 10k repeats), but my point still stands: that many steps will always get you either artifacts at some point or diminishing returns.
i also don't want to seem like an SD purist (we are all here to jack off, after all), but spending this much computing power on a single lora face is kind of a waste; you'll get waaaay better results from a dreambooth or even an embedding (though they are more difficult to train).
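(rough math for anyone following along, assuming batch size 1 and the usual kohya step count of images x repeats x epochs: 10,000 images x 10 repeats is already 100,000 steps in a single epoch, while something like 20 images x 10 repeats x 10 epochs lands right around the ~2k steps mentioned earlier; the 20/10/10 split is just an illustration, not a recommendation)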
 

goat199

Diamond Tier
Mar 13, 2022
I only have 8GB of VRAM, so unfortunately I have to stick with 512x512 resolution for now.
I also used some of your models and the quality is pretty good. I'll check the TensorBoard logs, I never looked at them before.

Anyway, I'll test the settings from your guide and see if my training runs improve, thanks!
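(side note, in case it saves someone a search: assuming kohya was pointed at a logging directory during setup, the board is just "tensorboard --logdir <your_logging_dir>" run from the training environment, then open http://localhost:6006 in a browser to see the loss curves; the directory name here is a placeholder, it's whatever you configured)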
 

ouciot

Fan
Mar 12, 2022
i also recommend watching Robert Jene's videos on lora training on his youtube channel (probably the best resource you're gonna get, his explanations of the settings with grid comparisons are extremely useful for seeing what actually does anything).
and always, always, always pay attention to the dataset you're using, not even the best settings will help you if you fuck around during dataset selection and tagging.
 

pkmngotrnr

Bathwater Drinker
Mar 18, 2022
this! use high quality images! this does not mean high resolution but "clean" images: no filters, no distortion, no covered faces, no subjects that are too far away, etc. also, different facial expressions of the same person make a HUGE difference in getting the facial features right.
 

goat199

Diamond Tier
Mar 13, 2022
I used his settings for my latest trainings and that helped my results improve a lot! I had to change some stuff due to hardware limitations, and I'll take a look at captioning and my dataset, but I believe the pictures I chose are good for training: I try to focus on the subject's face with clear images and different facial expressions.
Anyway, I still have to figure out what's causing these artifacts, though I believe it's probably overtraining.
 

goat199

Diamond Tier
Mar 13, 2022
Yeah, but I'll compare with your captions to see if there's something I can improve.
Btw, I can't download your training data from Civitai, it gives an internal server error message. I don't know if the problem is on their end or mine, so I'll try downloading it again later.
 

DennisWQ

Fan
Nov 27, 2022
Hello, I want to train a Lora or TI embedding on a TikTok/Insta girl. Problem is most of her content is videos. I wanted to take screenshots of the videos to use as images for the dataset, but the problem is they always turn out a little bit blurry/low quality, which translates into bad image generation. Is there any way to get better images out of screenshots?
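One approach that usually helps (a rough sketch only, and the file names and numbers below are made-up placeholders): instead of screenshotting the player, download the video and pull frames straight from the file, so you keep the full native resolution without any screen-capture scaling or compression. For example, with OpenCV in Python:

import cv2
import os

video_path = "clip.mp4"   # placeholder: the downloaded video
out_dir = "frames"        # placeholder: where extracted frames go
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) or 30   # fall back to 30 if the file reports 0
step = max(int(fps), 1)                 # keep roughly one frame per second
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        # save as PNG so the frame stays lossless, unlike a compressed screenshot
        cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.png"), frame)
        saved += 1
    idx += 1
cap.release()

Frames that are motion-blurred in the source will still be blurry, so hand-pick the sharp ones afterwards; upscaling the keepers (e.g. in the A1111 Extras tab) can help a bit, but it can't recover detail that was never in the video.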
 

greeneye

Bathwater Drinker
Mar 13, 2022
Could someone please point me in the right direction here:

I trained a Lora model by mainly following the instructions from OP, it generated a safetensors file. Then I copied that into stable-diffusion-webui\models\Lora folder. I can see the Lora model under "Lora" tab in SD. However, when I try to generate images in txt2img with a prompt, the results look nothing like the girl in the Lora model that I trained.

In the prompt I'm specifying the instance prompt and also the model output name, but I still get the same results.

Appreciate any help with this, thanks.
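In case it's just the usual gotcha (hedged, since the actual replies are hidden behind the quote wall): in A1111, dropping the safetensors into models\Lora isn't enough on its own, the LoRA also has to be called up inside the prompt with the <lora:...> tag, e.g.

photo of mytriggerword woman, <lora:my_lora_filename:0.8>

where my_lora_filename is the file name without the extension, 0.8 is the weight, and mytriggerword stands for whatever instance/trigger token was used in training (both names here are placeholders). If the model name is only typed as plain text, the LoRA never actually loads.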
 

greeneye

Bathwater Drinker
Mar 13, 2022
Thanks for replying.

The issue seems to be with the SD web UI 1.7 version I'm using. When I generate images using the lora model I created, I get this error message:

NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

I also get this error when running SD, which could be related as well because, from what I remember, I chose xformers as one of the options in Kohya when training the lora model:
No module 'xformers'. Proceeding without it.
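For reference, and assuming the standard webui-user.bat launcher (adjust if you start the UI some other way), the flag from that error message goes into the launch arguments, e.g.:

set COMMANDLINE_ARGS=--no-half

Alternatively, toggle Settings > Stable Diffusion > "Upcast cross attention layer to float32" in the UI and restart. --disable-nan-check only hides the check rather than fixing the underlying NaNs, so treat it as a last resort.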
 

goat199

Diamond Tier
Mar 13, 2022
It seems like you don't have xformers installed. To be honest I don't really know how to install it myself, I believe it was done automatically when I installed A1111; there should be some tutorials on YouTube explaining how to do it.
By the way, my tutorial is kind of outdated. I've been testing some settings lately and my results have improved a lot, so once I have some free time I'll post my results here with my current training settings.
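(On a stock A1111 install the usual route, sketched here under the assumption of a Windows webui-user.bat launcher, is to add the flag to the launch arguments:

set COMMANDLINE_ARGS=--xformers

On supported NVIDIA cards the webui then installs and enables xformers on the next start. Installing it manually with pip install xformers inside the webui's venv also works, but the version has to match the installed torch build, which is why the flag is the less fiddly option.)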
 

ouciot

Fan
Mar 12, 2022
i got the 'all NaNs in Unet' exception when i overtrained my model (i did 15x40x6, the last two epochs threw the error, and after i set the no-half command line argument all i got from my lora was latent noise; i'm working on a 3060 12GB).
i'd recommend following the tutorials from Robert Jene on youtube, especially the kohya settings one.
 

greeneye

Bathwater Drinker
Mar 13, 2022
From what I've read, it could be a bug in SD web UI 1.7 as well. When I train my model with WD14 captioning in Kohya I seem to get this issue; when I use BLIP captioning to train instead, I can generate images and don't get these errors.
 

greeneye

Bathwater Drinker
Mar 13, 2022
When I use BLIP captioning like you suggested in your tutorial, I don't get these issues, so I'm not sure why they happen with WD14 captioning. I got decent results just by following your tutorial, so it would be great if you could share the training settings you're using now.