DreamBooth fine-tuning does not yield the expected result

This is my first attempt at training a DreamBooth checkpoint, using the following script:

I followed the instructions at the following link to train on the dog dataset:

My base SD model is: "runwayml/stable-diffusion-v1-5"

I saved the checkpoint to a local folder and used it for text-to-image (T2I) generation, but I don't see the dog that was used in the fine-tuning show up in the generated results. My training config is as follows; all other parameters are the defaults from the training file above.

  • max_train_steps: tried values between 400 and 1000
  • enabled train_text_encoder
  • learning_rate: tried 5e-4, 5e-6, and 1e-6

I would appreciate any help on getting the model to generate the subject from the fine-tuning dataset.

My inference script is as follows:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_base = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")  # fp16 inference assumes a CUDA GPU
db_ckpt = "lora_db_dogs"  # DreamBooth LoRA checkpoint folder (note: not attached to the pipeline anywhere below)
prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
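One debugging step that may help: explicitly attach the LoRA weights with `pipe.load_lora_weights(...)` and compare seed-fixed generations with and without them, to confirm the adapter is actually influencing the output. Below is a minimal sketch, assuming `lora_db_dogs` is a diffusers-format LoRA checkpoint produced by the DreamBooth LoRA training script; `images_differ` is a hypothetical helper added here only for the comparison:

```python
import numpy as np


def images_differ(img_a, img_b, threshold=1.0):
    """Return True if two images differ by more than `threshold`
    mean absolute pixel value (a crude but quick sanity check)."""
    a = np.asarray(img_a, dtype=np.float32)
    b = np.asarray(img_b, dtype=np.float32)
    return float(np.abs(a - b).mean()) > threshold


if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

    model_base = "runwayml/stable-diffusion-v1-5"
    pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    pipe.to("cuda")

    prompt = "A photo of sks dog in a bucket"

    # Baseline: base model only, fixed seed
    base_image = pipe(
        prompt, num_inference_steps=50, guidance_scale=7.5,
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]

    # Attach the trained LoRA weights (UNet and, if trained, text encoder),
    # then regenerate with the same seed
    pipe.load_lora_weights("lora_db_dogs")
    lora_image = pipe(
        prompt, num_inference_steps=50, guidance_scale=7.5,
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]

    # If the two images are (nearly) identical, the LoRA weights are not
    # being applied, and the checkpoint path/format should be checked
    print("LoRA changed the output:", images_differ(base_image, lora_image))
```

If the two outputs are pixel-for-pixel identical, the checkpoint was never applied; if they differ but the subject still does not appear, the issue is more likely in the training run itself (steps, learning rate, or the instance prompt used during training).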