Why is the diffusion model's loss calculated between the “RANDOM noise” and the “model predicted noise”, and not between the “actual added noise” and the “model predicted noise”?
When I trained the U-Net with the loss between “the actual added noise” and the “model predicted noise”,
the model seemed to be optimized much, much faster on my training dataset.
May I use this loss?
I trained the U-Net using the loss with “the actual added noise”, i.e. the noise added between step “t-1” and step “t”, NOT the “random noise”,
and the U-Net seemed to be optimized faster.
Why do we use the “random noise” for the diffusion loss calculation, and how is this possible?
But I think the 1st formulation has an advantage: it makes low-noise prediction more important than the 2nd formulation does.
Even if the actual added noise has a std of only 0.01, the 1st approach would still predict it at std=1, therefore making it 100x more important in the loss. This should result in a model that cares a lot about denoising correctly in the last few inference steps.
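To make that weighting argument concrete, here's a rough sketch (the numbers and names are just made up for illustration): with the same relative prediction error, the unit-std target keeps every timestep equally important, while a scaled target down-weights low-noise timesteps by roughly the square of their std.

```python
import torch

torch.manual_seed(0)
eps = torch.randn(10_000)                   # the unit-std "random noise" target
eps_hat = eps + 0.1 * torch.randn(10_000)   # an imperfect prediction with a fixed error

for sigma_t in (1.0, 0.1, 0.01):            # std of the noise actually added at this timestep
    loss_unit = torch.mean((eps - eps_hat) ** 2)                        # 1st formulation
    loss_scaled = torch.mean((sigma_t * eps - sigma_t * eps_hat) ** 2)  # 2nd formulation
    print(f"sigma_t={sigma_t:5.2f}  unit-target loss={loss_unit:.4f}  "
          f"scaled-target loss={loss_scaled:.6f}")

# The unit-target loss stays the same at every timestep, while the scaled-target
# loss shrinks with sigma_t**2, so low-noise timesteps barely contribute to
# training in the 2nd formulation.
```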
Another advantage is probably that it forces the model to always predict noise with std=1, which might help the model stabilize? I'm not sure about this reasoning though.
In short, it's all about trade-offs in model performance. Research is still ongoing on which formulation is best.
You may have no idea how much you've helped me,
and I bet there will be many others who wonder about this as well.
Hope they find the repo you mentioned.
Thanks a lot, @offchan
Good luck!
Another advantage is probably that it forces the model to always predict noise with std=1, which might help the model stabilize? I'm not sure about this reasoning though.
Intuitively, this makes a lot of sense. It makes the model's output scale independent of the timestep, right?
Hi, I have read the solutions and I am still confused. Why do we want to use the U-Net to predict the random noise, which is unrelated to the timestep? If we need a prediction of random noise, why can't we just generate a random noise directly in the reverse process?
@offchan Hi, sorry to bother you. I appreciate your answer on this topic; however, I am still confused. Why do we want to use the U-Net to predict the random noise, which is unrelated to the timestep? If we need a prediction of random noise, why can't we just generate a random noise directly in the reverse process?
Here is how it works during training:

1. We have a clean image.
2. We generate a random Gaussian noise with std=1. Let's call it the big noise.
3. We scale down the big noise based on the timestep. For early timesteps, we scale the big noise down just a little bit; for late timesteps, we scale it down a lot. Now the std is less than 1. Let's call the scaled noise the small noise.
4. We add the small noise to the clean image, resulting in a noisy image.
5. We train the U-Net to predict the big noise, given the noisy image and the timestep as input.

Notice that in the last step we are asking the model to predict the big noise we computed in step 2. We are not asking it to predict a new random noise.
And the timestep is indeed relevant, in step 3. One crucial detail here is that the timestep is randomly sampled; it is not sampled sequentially in a for loop, as our intuition might suggest. You can read more about the logic behind the random timestep in my other answer here
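For concreteness, here is a minimal toy sketch of those five steps (illustrative only; the names and the DDPM-style schedule below are made up, and the real training loop is the diffusers code quoted further down):

```python
import torch

clean_image = torch.randn(1, 3, 64, 64)        # step 1: stand-in for a clean image (or latent)
big_noise = torch.randn_like(clean_image)      # step 2: random Gaussian noise, std = 1

alphas_cumprod = torch.linspace(0.9999, 0.0001, 1000)  # a stand-in DDPM-style noise schedule
t = torch.randint(0, 1000, (1,))               # the timestep is randomly sampled, not looped over

# step 3: scale the big noise down according to the timestep
small_noise = torch.sqrt(1 - alphas_cumprod[t]) * big_noise

# step 4: add the small noise to the (also scaled) clean image -> noisy image
noisy_image = torch.sqrt(alphas_cumprod[t]) * clean_image + small_noise

# step 5: train the U-Net so that unet(noisy_image, t) ≈ big_noise, e.g.
# loss = F.mse_loss(unet(noisy_image, t), big_noise)
```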
I urge you to look at the training code to gain deeper understanding.
Here’s the code from the text-to-image training script by diffusers:
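(Abridged; this is a reconstruction of the relevant lines, and the exact code differs a bit between diffusers versions.)

```python
# Abridged from diffusers' train_text_to_image.py (reconstructed, so details
# may differ between versions)

# Convert images to latent space
latents = vae.encode(batch["pixel_values"].to(weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor

# Sample noise that we'll add to the latents
noise = torch.randn_like(latents)
bsz = latents.shape[0]

# Sample a random timestep for each image
timesteps = torch.randint(
    0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device
).long()

# Add noise to the latents according to the noise magnitude at each timestep
# (this is the forward diffusion process)
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# Predict the noise residual and compute the loss against the sampled noise
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
```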
The clean image is named latents and the big noise is named noise in the code above. But the small noise is not shown here: the line that calls the add_noise() function internally computes the small noise and produces the noisy image, which is named noisy_latents above.
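Roughly, what that add_noise() call computes internally looks like this (paraphrased from memory; see DDPMScheduler.add_noise in diffusers for the exact implementation and broadcasting details):

```python
# Roughly what noise_scheduler.add_noise(latents, noise, timesteps) does internally
# (paraphrased; not the verbatim implementation)
alphas_cumprod = noise_scheduler.alphas_cumprod.to(latents.device)
sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5

small_noise = sqrt_one_minus_alpha_prod.view(-1, 1, 1, 1) * noise       # the "small noise"
noisy_latents = sqrt_alpha_prod.view(-1, 1, 1, 1) * latents + small_noise
```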