Help with wiping gradients from UNet2DConditionModel

I am working with the "CompVis/ldm-text2im-large-256" checkpoint, building on top of the prompt-to-prompt code.

model = DiffusionPipeline.from_pretrained(model_id, height=IMAGE_RES, width=IMAGE_RES).to(device)

Whenever I call the text2image_ldm method without torch.no_grad(), gradient state accumulates at this line:

noise_pred = model.unet(latents_input, t, encoder_hidden_states=context)["sample"]
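For context, when gradients are not needed at all, the usual fix is to wrap the forward pass in torch.no_grad() so autograd never records a graph. A minimal self-contained sketch (the `unet` here is a toy stand-in module, not the real UNet2DConditionModel):

```python
import torch
import torch.nn as nn

# Toy stand-in for the UNet forward call; `unet` is a placeholder module.
unet = nn.Linear(4, 4)
latents_input = torch.randn(2, 4)

# torch.no_grad() disables graph recording, so no gradient
# state accumulates across denoising steps:
with torch.no_grad():
    noise_pred = unet(latents_input)

assert noise_pred.requires_grad is False
```

In my case, though, I sometimes do want the gradients, so no_grad alone is not enough.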

I want to use gradients to edit outputs but then clear them afterwards. I have tried model.unet.zero_grad() and

for param in model.unet.parameters():
    param.grad = None
torch.cuda.empty_cache()

but neither resolves the gradient accumulation issue. How do I delete the gradients for this pipeline?
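For what it's worth, my understanding is that clearing param.grad alone may not free the memory if some tensor (e.g. the latents) still holds the autograd graph, so detaching that tensor may also be needed. A minimal sketch of the pattern, using a toy module in place of model.unet:

```python
import torch
import torch.nn as nn

# Toy stand-in for model.unet; the same clearing pattern would apply
# to the real UNet2DConditionModel.
unet = nn.Linear(4, 4)

x = torch.randn(2, 4, requires_grad=True)
out = unet(x).sum()
out.backward()  # populates .grad on the parameters

# 1) Drop parameter gradients; set_to_none=True frees the grad tensors
#    instead of zeroing them in place:
unet.zero_grad(set_to_none=True)

# 2) Detach any tensor that still references the autograd graph
#    (e.g. the latents), otherwise the graph and its saved
#    activations stay alive in memory:
x = x.detach()

# 3) Only after the graph is released can cached GPU blocks be returned:
if torch.cuda.is_available():
    torch.cuda.empty_cache()

assert all(p.grad is None for p in unet.parameters())
```

Is this detach step what I am missing, or is there pipeline-level state I also need to reset?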

Diffusers Version: diffusers==0.3.0