Generating images at each step during DDPM training

I am implementing a CycleGAN setup for image-to-text translation, and I want to use a DDPM as the image generator. Since a DDPM is trained to predict noise, I need to run its inference (sampling) procedure to actually produce generated images. However, if I incorporate the diffusion model's inference step into the CycleGAN training loop, it could break the computational graph and cause backpropagation errors, since no gradients flow through the sampling process. Do you have any suggestions about this?
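To make the problem concrete, here is a minimal PyTorch sketch (the tiny `net` is a hypothetical stand-in for the real denoising network, and the three-step loop is only illustrative): when the DDPM sampling loop runs under `torch.no_grad()`, as inference code typically does, the resulting sample is detached from the graph, so the CycleGAN losses cannot backpropagate into the generator.

```python
import torch

# Hypothetical stand-in for the DDPM denoising network
# (the real model predicts noise from a noisy image and a timestep).
net = torch.nn.Linear(4, 4)

x = torch.randn(1, 4, requires_grad=True)  # initial noise

# Typical DDPM inference loop, wrapped in no_grad for efficiency.
with torch.no_grad():
    sample = x
    for t in range(3):  # a few denoising steps, for illustration only
        sample = sample - 0.1 * net(sample)

# The generated sample carries no gradient history, so any loss computed
# on it cannot backpropagate into `net` or `x`.
print(sample.requires_grad)  # False
```

Removing the `no_grad` context would keep the graph alive, but then memory grows with the number of denoising steps, which is the trade-off at the heart of the question.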