I used `training_batch_size=3` at 32x32 resolution, and this gave the best convergence on my dataset (700 images; augmentation is not an option). Now I want to scale up the resolution to 512x512. With the same parameters, I am not getting convergence even after, say, 200 epochs. What would be ideal parameter values for DDPM training at high resolution with a small batch? The script arguments are the same as in the official Unconditional Training example.
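For context, here is a minimal sketch of the kind of adjustment I have been considering: keeping the *effective* batch size up via gradient accumulation and scaling the learning rate to match. The helper names and the numbers (`base_lr`, accumulation steps, reference batch) are my own illustrative assumptions, not defaults from the official script.

```python
# Hedged sketch: when per-device batch size must drop at 512x512 (memory),
# gradient accumulation can restore the effective batch size, and a linear
# LR scaling rule keeps the learning rate consistent with it.
# All concrete values below are assumptions for illustration.

def effective_batch(per_device_batch: int, grad_accum_steps: int,
                    num_devices: int = 1) -> int:
    """Effective batch size seen by the optimizer per update."""
    return per_device_batch * grad_accum_steps * num_devices

def scaled_lr(base_lr: float, eff_batch: int, ref_batch: int = 16) -> float:
    """Linear LR scaling relative to a reference batch size (an assumption)."""
    return base_lr * eff_batch / ref_batch

# e.g. batch of 1 per device at 512x512, 16 accumulation steps
eff = effective_batch(per_device_batch=1, grad_accum_steps=16)
lr = scaled_lr(base_lr=1e-4, eff_batch=eff, ref_batch=16)
print(eff, lr)  # 16 0.0001
```

Whether linear LR scaling is the right rule here (versus square-root scaling, or no scaling at all) is part of what I am asking about.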