Help implementing Tiled Diffusion and Tiled VAE with Diffusers

Hi everyone,

I’m working on a Python script using the 🤗 Diffusers library. My goal is to take an input image and return an upscaled, more detailed version of it, essentially magnified and enhanced.

I’m using the Juggernaut Reborn checkpoint and would like to implement Tiled Diffusion and Tiled VAE to improve performance and quality on high-resolution images.

Additionally, I’m using two LoRAs: more_details and SDXLrender_v2.0.
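For context, here is a rough sketch of the loading side of the script (the file paths and adapter weights are placeholders, not my exact values):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Juggernaut Reborn is an SD 1.5 checkpoint, so the standard img2img pipeline applies.
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "juggernaut_reborn.safetensors",  # placeholder path to the local checkpoint file
    torch_dtype=torch.float16,
).to("cuda")

# Load both LoRAs under separate adapter names so their strengths can be mixed.
pipe.load_lora_weights("more_details.safetensors", adapter_name="more_details")
pipe.load_lora_weights("SDXLrender_v2.0.safetensors", adapter_name="sdxl_render")
pipe.set_adapters(["more_details", "sdxl_render"], adapter_weights=[0.5, 1.0])  # example weights
```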

The problem is: I’m having trouble finding up-to-date or clear documentation on how to integrate Tiled Diffusion and Tiled VAE with Diffusers, especially in an image2image pipeline.

Has anyone successfully implemented this setup? Any example code, resources, or guidance would be greatly appreciated!

Thanks in advance!


It’s going to be implemented soon, but it’s not there yet…


Thank you for the update. I’m working with high-resolution images; do you have any suggestions on how to proceed without Tiled Diffusion in the meantime?


If you want to do it entirely with Diffusers, that is probably the only way for now. There are also quite a few AI upscaling models, so you could run one of those separately instead.
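As a stopgap, something like the sketch below might work, assuming the pipeline is loaded as in the first post. If I remember correctly, the autoencoder already exposes enable_tiling(), which covers the Tiled VAE part; the upscaling itself can be done beforehand with a plain resize (or a separate upscaling model), followed by a low-strength img2img pass. The prompt, strength, and step values are just starting points:

```python
from PIL import Image

# Tile the VAE's encode/decode passes to keep memory manageable at high resolutions.
pipe.vae.enable_tiling()

# Pre-upscale with a plain resampling filter (or any separate upscaling model),
# then let a low-strength img2img pass add detail back at the target resolution.
image = Image.open("input.png").convert("RGB")
image = image.resize((image.width * 2, image.height * 2), Image.LANCZOS)

result = pipe(
    prompt="masterpiece, best quality, highly detailed",   # example prompt
    negative_prompt="blurry, lowres, jpeg artifacts",
    image=image,
    strength=0.3,              # low strength preserves the original composition
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]
result.save("upscaled.png")
```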
