Hi,
I've read a lot about the ideal training parameters for creating a LoRA of a human character, and honestly I'm really confused.
I would like to train a FLUX and an SD1.5/SDXL model of a real human person.
I have high-quality pictures:
Small dataset: 30 pictures
Medium: 60 pictures
Big: 80 pictures
Can anybody tell me which parameters I should use for these datasets with kohya?
Especially epochs, steps, repeats, and network dim/alpha?
Or any other important parameters.
The model should be a precise copy of my character. Thank you so much for your help.
If you have high-quality data, even around 10 images with the default settings can produce a quite similar likeness.
Being creative with the parameters and pre-processing the data matters more when the data is incomplete, which is usually the case…
Also, when it comes to FLUX, dev, schnell, and some of the derivatives are close enough to each other, but SD1.5 and SDXL include models with quite different characteristics even within the same architecture. Whatever settings you use, be aware that accuracy will drop when you apply the LoRA to a model other than the one you actually trained on.
For example, in SDXL, if you apply a LoRA trained on the Illustrious model to the official SDXL 1.0, it won't work very well. Use the model you actually plan to use, or a LoRA training model with few quirks, as the base model.
by Hugging Chat
To create a precise LoRA model of your human character using Kohya_ss scripts with FLUX, SD1.5, and SDXL, the training parameters should be adjusted based on your dataset size. Below is a structured approach:
Base Parameters (Common for All Dataset Sizes)
- `--prior_loss_weight=1.0`: standard value for the prior loss.
- `--resolution=512`: matches SD1.5's native resolution; for SDXL and FLUX, train at 1024 instead.
- `--no_half_vae`: prevents precision issues (NaNs) from half-precision VAE decoding.
- `--text_encoder_lr=0.0001`: slightly lower learning rate for text-encoder stability.
Small Dataset (30 Images)
- `--train_batch_size=2`: smaller batch size to handle limited data.
- `--learning_rate=1e-4`: lower learning rate to prevent overfitting.
- `--max_train_steps=1500`: sufficient steps for limited data without overtraining.
- `--lr_scheduler="cosine"`: smooth learning-rate decay.
- `--lr_warmup_steps=150`: warmup proportional to total steps (10%).
- `--network_dim=32`: standard dimension for basic details.
- `--network_alpha=16`: balances network capacity (typically half of dim).
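Putting the small-dataset flags together: a minimal launch-script sketch, assuming the kohya-ss sd-scripts `train_network.py` entry point for SD1.5. The model, data, and output paths are placeholders — adjust them for your setup before running.

```shell
# Write the small-dataset run to a launch script; all paths are placeholders.
cat > train_small.sh <<'EOF'
#!/bin/sh
# SD1.5 LoRA, 30-image dataset -- sketch only, adjust paths before running.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./train_data" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-4 \
  --text_encoder_lr=1e-4 \
  --max_train_steps=1500 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=150 \
  --network_dim=32 \
  --network_alpha=16 \
  --prior_loss_weight=1.0 \
  --no_half_vae \
  --mixed_precision="fp16" \
  --save_model_as=safetensors
EOF
chmod +x train_small.sh
```

For the medium and big datasets, swap in the step counts and dim/alpha values listed below.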
Medium Dataset (60 Images)
- `--train_batch_size=2`: keep batch size consistent.
- `--learning_rate=2e-4`: standard rate for better convergence.
- `--max_train_steps=3000`: doubled steps to utilize more data.
- `--lr_scheduler="cosine"`: maintains smooth decay.
- `--lr_warmup_steps=300`: proportional warmup for stability.
- `--network_dim=32`: adequate for medium data.
- `--network_alpha=16`: consistent alpha for network balance.
Big Dataset (80 Images)
- `--train_batch_size=2`: consistent handling.
- `--learning_rate=2e-4`: standard for optimal training.
- `--max_train_steps=4500`: more steps to make use of the larger dataset.
- `--lr_scheduler="cosine"`: ensures gradual learning.
- `--lr_warmup_steps=300`: maintains the warmup proportion.
- `--network_dim=64`: higher dimension for complex details.
- `--network_alpha=32`: balanced alpha (half of dim).
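On the epochs/steps/repeats question: in kohya, total optimizer steps work out to roughly images × repeats × epochs ÷ batch size, and `--max_train_steps` caps the run. A quick sanity check using the "100_" repeat prefix from the folder-naming convention and the small dataset:

```shell
# Rule of thumb: total steps = images * repeats * epochs / batch_size.
# With 30 images, 100 repeats (the "100_" folder prefix) and batch size 2,
# a single epoch already fills the 1500-step budget above.
images=30; repeats=100; batch_size=2; epochs=1
steps_per_epoch=$(( images * repeats / batch_size ))
total_steps=$(( steps_per_epoch * epochs ))
echo "steps per epoch: $steps_per_epoch, total: $total_steps"
```

If you raise epochs or repeats, lower the other (or `--max_train_steps`) so the total stays in the same range.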
Additional Considerations
- Data Organization: use subfolders named `<repeats>_<trigger> <class>`, e.g.
100_sks [your_name]
so that kohya picks up the repeat count and trigger token correctly.
- Script Path: use the correct script for the architecture — in kohya-ss/sd-scripts these are
train_network.py (SD1.5), sdxl_train_network.py (SDXL), and flux_train_network.py (FLUX)
.
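A minimal sketch of the dataset layout described above; `sks [your_name]` is a placeholder trigger, and the filenames are illustrative only.

```shell
# Minimal kohya dataset layout sketch.
# The "100_" prefix is the per-epoch repeat count kohya reads from the folder name;
# "sks yourname" stands in for your trigger token and subject name.
mkdir -p "train_data/100_sks yourname"
# Copy your images (and optional .txt caption files) into that folder:
# train_data/
# └── 100_sks yourname/
#     ├── img001.jpg
#     ├── img001.txt
#     └── ...
ls train_data
```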
These parameters are tailored to balance model precision with dataset size, ensuring efficient training without overfitting.