Unconditional image generation

Hi, I’m working on unconditional image generation with a diffusion model. I only have 50 sample images, and I’ve trained the model on them for 3000 epochs, but the generated images still look noisy. Has anyone else run into this issue when training on a small custom dataset?
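As a quick sanity check on how much training this actually amounts to: with 50 images and a batch size of 16, each epoch is only a handful of optimizer steps, so the warmup alone eats a noticeable chunk of the run. A rough sketch of the arithmetic (assuming gradient accumulation of 1 and a standard dataloader that keeps the last partial batch):

```python
import math

num_images = 50   # size of the custom dataset
batch_size = 16   # train_batch_size
epochs = 3000     # epochs trained so far
warmup = 500      # lr_warmup_steps

steps_per_epoch = math.ceil(num_images / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)              # 4 optimizer steps per epoch
print(total_steps)                  # 12000 total steps
print(warmup / total_steps * 100)   # warmup fraction, in percent
```

So the model has seen on the order of 12,000 gradient updates over the same 50 images, which makes severe overfitting (and poor sample quality) at least as likely a culprit as undertraining.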

Here is the command I used:

!accelerate launch --multi_gpu train_unconditional.py \
  --dataset_name="------------" \
  --resolution=64 --center_crop --random_flip \
  --output_dir="--------------" \
  --train_batch_size=16 \
  --num_epochs=6000 \
  --gradient_accumulation_steps=1 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub \
  --checkpointing_steps=200 \
  --resume_from_checkpoint="latest"