Is there such a thing as an unconditional img2img pipeline?

There is unconditional image generation, which just generates images from random noise. The "Train a diffusion model" tutorial demonstrates this with the butterfly example.

However, what if I want to create a butterfly variation? Is there a pipeline that takes an image, adds noise, and "unconditionally" generates a variation of the image from there? As far as I can see, pipelines such as DDPMPipeline don't allow this kind of use case.
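
Here's a rough sketch of what I have in mind, driving the scheduler and UNet by hand (essentially the SDEdit idea: forward-diffuse the input partway, then denoise from there). The model id and filename below are just placeholders for a 128x128 unconditional checkpoint like the one from the tutorial:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import DDPMPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model id -- substitute your own unconditional checkpoint.
pipe = DDPMPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to(device)
unet, scheduler = pipe.unet, pipe.scheduler

# Load the source image and scale it to [-1, 1], shape (1, 3, H, W).
img = Image.open("butterfly.png").convert("RGB").resize((128, 128))
x0 = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
x0 = x0.permute(2, 0, 1).unsqueeze(0).to(device)

# strength in (0, 1]: higher = start from noisier state = less like the input.
scheduler.set_timesteps(1000)
strength = 0.5
start = int(len(scheduler.timesteps) * (1 - strength))
t_start = scheduler.timesteps[start]

# Forward-diffuse the image to the chosen timestep...
noise = torch.randn_like(x0)
x = scheduler.add_noise(x0, noise, t_start)

# ...then run the ordinary reverse process from there, unconditionally.
for t in scheduler.timesteps[start:]:
    with torch.no_grad():
        eps = unet(x, t).sample
    x = scheduler.step(eps, t, x).prev_sample

# Back to a PIL image.
out = ((x.clamp(-1, 1) + 1) / 2 * 255).byte().cpu().permute(0, 2, 3, 1).numpy()[0]
Image.fromarray(out).save("butterfly_variation.png")
```

(Presumably a DDIM-style scheduler could cut the step count, but that's secondary.) It just seems odd to hand-roll this if a pipeline already exists for it.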

I realize that Stable Diffusion can do all of this, but then you have all the language/CLIP machinery that you don't necessarily need.

Am I missing something?

Thank you :slight_smile: