Provide CLIP embeddings directly to diffuser

Is there any way to provide the embeddings directly to the diffusion pipeline? I specifically want to provide CLIP embeddings coming from a different process, directly to the diffusion pipeline, in place of the text prompt, but I could not find any simple way to do it in the documentation.

Edit:
After investigating the code I noticed the `prompt_embeds` arg exists on the pipeline, but I'm still trying to figure out how to pass in image embeddings from CLIP.

E.g.
pipeline(prompt="a dog sitting on a bench", image=img, mask_image=mask_image).images[0]

Instead of the prompt, I want to provide CLIP embeddings directly.
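
For text embeddings, this can be done by running the pipeline's own CLIP tokenizer and text encoder yourself and handing the result to `prompt_embeds`. Below is a minimal sketch, assuming a standard Stable Diffusion inpainting checkpoint; the model ID, file paths, and variable names are placeholders, not part of the original post.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumption: a standard SD inpainting checkpoint; swap in your own model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

img = Image.open("input.png")          # placeholder paths
mask_image = Image.open("mask.png")

# Compute the per-token CLIP text embeddings yourself instead of passing a prompt.
text_inputs = pipe.tokenizer(
    "a dog sitting on a bench",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]  # (1, 77, 768) for SD 1.x

# Pass the precomputed embeddings in place of the prompt string.
result = pipe(prompt_embeds=prompt_embeds, image=img, mask_image=mask_image).images[0]
```

Note that pooled CLIP image embeddings are a single vector per image, whereas `prompt_embeds` expects the per-token sequence that the UNet's cross-attention was trained on (e.g. 77 × 768 for SD 1.x), so image embeddings can't be dropped in unchanged; approaches like IP-Adapter add a separate conditioning path for CLIP image embeddings.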
