I am new to DreamBooth.
I have a large set of images of various paddy leaf species: 248 different classes (species) and around 100 images per class. From these class images, STEP 1: I want to generate more images for each class. Then, STEP 2: I want to make hybrids among the classes. For instance: "give me a paddy leaf sample where the leaf texture is like class X, the leaf shape is like class Y, and the leaf size is like class Z."
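To make STEP 2 concrete, this is the kind of prompt composition I have in mind (a minimal sketch; the `sks_*` identifier tokens are placeholders for whatever rare tokens each class would get bound to during its own DreamBooth run):

```python
# Hypothetical helper: compose a hybrid prompt from per-class identifier tokens.
# The tokens ("sks_x", "sks_y", "sks_z") are assumptions, not trained tokens.
def hybrid_prompt(texture_token, shape_token, size_token):
    return (
        f"a photo of a paddy leaf with the texture of {texture_token} paddy leaves, "
        f"the shape of {shape_token} paddy leaves, "
        f"and the size of {size_token} paddy leaves"
    )

print(hybrid_prompt("sks_x", "sks_y", "sks_z"))
```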
My plan is to use Dreambooth and Stable Diffusion for this task.
I’ve set up the data directories with class images and instance images. The class images directory contains one image from each class, 248 images in total. The instance directory contains 3 images from class X.
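For reference, this is roughly the layout I am describing (a minimal sketch; the directory names are just my own choices, not anything the script requires):

```python
# Sketch of the data layout: one class-images dir and one instance-images dir.
from pathlib import Path
import tempfile

def make_dreambooth_dirs(root):
    root = Path(root)
    class_dir = root / "class_images"        # 1 image per class, 248 total
    instance_dir = root / "instance_images"  # e.g. 3 images of class X
    class_dir.mkdir(parents=True, exist_ok=True)
    instance_dir.mkdir(parents=True, exist_ok=True)
    return class_dir, instance_dir

class_dir, instance_dir = make_dreambooth_dirs(tempfile.mkdtemp())
print(class_dir, instance_dir)
```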
The script I am using is train_dreambooth.py from the huggingface/diffusers examples on GitHub. I’ve modified the script so it can be used as a module by replacing the argparser; the rest is unchanged. This is my code:
```python
train_dreambooth(
    pretrained_model_name_or_path=MODEL_NAME,
    instance_data_dir=INSTANCE_DIR,
    class_data_dir=CLASS_DIR,
    output_dir=OUTPUT_DIR,
    with_prior_preservation=True,
    prior_loss_weight=1.0,
    instance_prompt="a photo of X paddy leaves",
    class_prompt="a photo of paddy leaves",
    resolution=512,
    train_batch_size=1,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    use_8bit_adam=True,
    enable_xformers_memory_efficient_attention=True,
    set_grads_to_none=True,
    learning_rate=2e-6,
    lr_scheduler="constant",
    lr_warmup_steps=0,
    # num_class_images=200,
    max_train_steps=800,
)
```
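For context, this is roughly how I map keyword arguments back onto the script's CLI flags (a simplified sketch of my wrapper, not the actual diffusers code; the real script would pass the resulting list to its `parse_args`):

```python
# Hypothetical wrapper: turn keyword arguments into an argv-style list so the
# unmodified argparse-based script could consume them via parse_args(argv).
def build_argv(**kwargs):
    argv = []
    for key, value in kwargs.items():
        flag = "--" + key
        if isinstance(value, bool):
            if value:
                argv.append(flag)  # store_true flags take no value
        else:
            argv.extend([flag, str(value)])
    return argv

print(build_argv(with_prior_preservation=True, resolution=512))
```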
When I set the class directory and the instance directory and start training, it shows that class images are being generated:
Generating class images: 88%|████████▊ | 22/25 [09:36<01:19, 26.43s/it]
What am I doing wrong?
I am planning to fine-tune the same model for each instance, adding the grade to the model each time, and then generate the hybrids. Is this correct?
Any help is very much appreciated.