# Why is the loss of Diffusion model calculated between "RANDOM noise" and "model predicted noise"? Not between "Actual added noise" and "model predicted noise"?


When I trained the U-Net with the loss between "the actual added noise" and "the model-predicted noise",
the model seemed to be optimized much faster on my training dataset.
May I use this loss?

Does anybody have insight?

Sorry, I'm not sure I understand. What is the alternative noise you're talking about?

In the picture above, "noise" is the noise randomly sampled by `noise = torch.randn(sample_image.shape)`, and it is used for the loss calculation.

But I think the "actually added noise" should be used for the loss calculation:
the noise added between step "t-1" and step "t".

Why are we using random noise for the loss calculation?

I trained the U-Net using the loss with "the actual added noise", that is, the noise added between step "t-1" and step "t", NOT with "random noise".
The U-Net then seemed to be optimized faster.

Why are we using "random noise" for the diffusion loss calculation, and how is this possible?

The training process is like this:

1. we generate a random noise vector with std=1
2. we scale it down to have std<1
3. we add the scaled down noise to the image
4. we predict the noise with std=1
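
Under the standard DDPM formulation, those four steps can be sketched roughly like this (names such as `training_step` and `alphas_cumprod` are illustrative, not taken from any particular library):

```python
import torch
import torch.nn.functional as F

def training_step(model, clean_image, alphas_cumprod):
    # 1. sample a unit-variance noise vector (std = 1) and a random timestep
    noise = torch.randn_like(clean_image)
    t = torch.randint(0, len(alphas_cumprod), (1,))

    # 2.-3. scale the noise down (std < 1) and mix it into the image
    a = alphas_cumprod[t].sqrt()
    s = (1 - alphas_cumprod[t]).sqrt()
    noisy_image = a * clean_image + s * noise

    # 4. the model is trained to recover the *unit-variance* noise
    predicted = model(noisy_image, t)
    return F.mse_loss(predicted, noise)
```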

If I understand correctly, you are asking why we are predicting the noise with std=1 instead of the one with std<1, right?

There are many ways to formulate the output of the model:

1. predict the noise with std=1 (normalized noise)
2. predict the noise with std<1 (actual added noise)
3. predict the original clean image itself
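
For concreteness, the three targets can be written side by side (a toy sketch; `sqrt_a` and `sqrt_1ma` stand in for the usual DDPM schedule coefficients at some timestep, and the numbers are made up):

```python
import torch

x0 = torch.randn(4)                  # clean image
noise = torch.randn(4)               # unit-variance noise, std = 1
sqrt_a, sqrt_1ma = 0.8, 0.6          # example schedule coefficients
xt = sqrt_a * x0 + sqrt_1ma * noise  # noisy image

target_1 = noise                     # 1) normalized noise (std = 1)
target_2 = sqrt_1ma * noise          # 2) the actual added noise (std < 1)
target_3 = x0                        # 3) the clean image itself
```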

You can actually try all of these formulations. In fact, the 3rd formulation has been tried on a toy project and it also works. Check the 2nd notebook in this repo: diffusion-models-class/unit1 at main · huggingface/diffusion-models-class · GitHub

But I think the 1st formulation has the advantage that it makes the low-noise predictions more important than the 2nd formulation does.
Even if the actual added noise has a std of only 0.01, the 1st approach would still predict it at std=1, making the target 100x larger (and its squared error 10,000x larger) in the loss. This should result in a model that cares a lot about denoising correctly during the last few inference steps.
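
Here is a rough numerical check of that weighting argument (assuming an MSE loss and a model that is off by 10% in relative terms for either target):

```python
import torch

torch.manual_seed(0)
big_noise = torch.randn(10_000)   # std ~ 1: what formulation 1 predicts
small_noise = 0.01 * big_noise    # std = 0.01: what formulation 2 predicts

# a model that is 10% off in relative terms for each target
err_big = torch.nn.functional.mse_loss(1.1 * big_noise, big_noise)
err_small = torch.nn.functional.mse_loss(1.1 * small_noise, small_noise)

print((err_big / err_small).item())  # ratio is about 10000 (100^2): MSE scales with std^2
```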

Another advantage is probably about forcing the model to always predict noise that has std=1; this might help the model stabilize? I'm not sure about this reasoning though.

In short, it's all about the trade-offs in model performance. Research is still ongoing about which formulation is best.


You may have no idea how much you've helped me,
and I bet there will be many who wonder about this as well.
I hope they find the repo you mentioned.
Thanks a lot, @offchan.
Good luck!


> Another advantage is probably about forcing the model to always predict noise that has std=1; this might help the model stabilize? I'm not sure about this reasoning though.

Intuitively, this makes a lot of sense. It makes the model's output scale independent of the timestep, right?

Yeah, thatâ€™s what I think.

Hi, I have read the answers and I am still confused. Why do we want to use the U-Net to predict the random noise, which is irrelevant to the timestep? If we need a prediction of random noise, why can't we directly generate random noise in the reverse process?


Same question. Have you found the explanation yet?

@offchan Hi, sorry to bother you; I appreciate your answer in this topic. However, I am still confused. Why do we want to use the U-Net to predict the random noise, which is irrelevant to the timestep? If we need a prediction of random noise, why can't we directly generate random noise in the reverse process?

Each training step is like this:

1. we have a clean image from the training data.
2. we generate a random Gaussian noise with std=1. Let's call it the big noise.
3. we scale down the big noise based on the timestep. For early timesteps, we'll scale the big noise down just a little bit. For late timesteps, we'll scale the big noise down a lot. Now the std will be less than 1. Let's call the scaled noise the small noise.
4. we add small noise to the clean image, resulting in a noisy image
5. we train the U-Net to predict the big noise, given the noisy image and the timestep as input.

Notice in the last step, we are asking the model to predict the big noise we have calculated in step 2. We are not asking it to predict a new random noise.
And the timestep is indeed relevant in step 3. One crucial detail here is that the timestep is randomly sampled. It's not sampled sequentially in a for loop as our intuition might suggest. You can read more about the logic behind random timesteps in my other answer here

I urge you to look at the training code to gain a deeper understanding.
Here's the code from the text-to-image training script by diffusers:

The clean image is named `latents` and the big noise is named `noise` in the code above. The small noise is not shown there: the line that calls the `add_noise()` function internally computes the small noise and produces the noisy image, which is named `noisy_latents`.
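
A paraphrased sketch of that step, with the `add_noise()` call expanded inline so the small noise becomes visible (the variable names follow the script, but the schedule values are made up and the math is just the standard DDPM formulation, not code copied from the library):

```python
import torch

def add_noise(latents, noise, t, alphas_cumprod):
    # roughly what scheduler.add_noise() computes internally:
    #   noisy = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise
    small_noise = (1 - alphas_cumprod[t]).sqrt() * noise  # the hidden small noise
    return alphas_cumprod[t].sqrt() * latents + small_noise

# a toy beta schedule and its cumulative alpha product
alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 1000), dim=0)

latents = torch.randn(1, 4, 8, 8)  # stand-in for the clean (latent) image
noise = torch.randn_like(latents)  # the big noise, std = 1
t = torch.randint(0, 1000, (1,))
noisy_latents = add_noise(latents, noise, t, alphas_cumprod)
# the U-Net is then trained so that unet(noisy_latents, t) ~ noise (the big noise)
```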