How to use LoRA with SDXL img2img?

I am trying to apply a LoRA to the SDXL refiner img2img pipeline. I've tried multiple SDXL LoRAs that work with the base model and pipeline, but when I try them with StableDiffusionXLImg2ImgPipeline and the refiner model, it errors (I have set low_cpu_mem_usage=False and ignore_mismatched_sizes=True, to no avail).

StableDiffusionXLImg2ImgPipeline has load_lora_weights, so I'm assuming this should work.

Is this possible, and if so, what am I missing?
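
Here's roughly what I'm running (a minimal sketch; the LoRA repo and weight_name are the ones shown in the error output below):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

# Load the SDXL refiner into the img2img pipeline.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# This is the call that raises the shape-mismatch error below.
pipe.load_lora_weights(
    "Norod78/SDXL-Caricaturized-Lora",
    weight_name="SDXL-Caricaturized-Lora.safetensors",
)
```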

```
{'lora': 'Norod78/SDXL-Caricaturized-Lora', 'subfolder': None, 'weight_name': 'SDXL-Caricaturized-Lora.safetensors'}
It might be incompatible with stabilityai/stable-diffusion-xl-refiner-1.0
Cannot load because down.weight expected shape tensor(..., device='meta', size=(8, 1536)), but got
torch.Size([8, 1280]). If you want to instead overwrite randomly initialized weights, please make sure to
pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`.
For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
```

I am having the same issue. It's also not clear to me whether you have to reload the LoRA and textual inversion embeddings on the refiner if you use the base model to initialize the refiner pipeline.
Have you found a solution?

I tried loading them in a different order, but nothing works.


I’m having the same issue. Is there any example that you’ve found of people doing this successfully?

I don't think you can use a LoRA with the refiner; the refiner's UNet has a different architecture from the base model's UNet, which is why the weight shapes don't match. Using the base model stabilityai/stable-diffusion-xl-base-1.0 with StableDiffusionXLImg2ImgPipeline and the LoRA works. Then run the result through the refiner without the LoRA.
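
A minimal sketch of that workflow (reusing the LoRA from the first post; the input path, prompt, and strength values are just placeholders):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Img2img with the *base* model plus the LoRA.
base = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
base.load_lora_weights(
    "Norod78/SDXL-Caricaturized-Lora",
    weight_name="SDXL-Caricaturized-Lora.safetensors",
)

init_image = load_image("input.png")      # hypothetical input image
prompt = "a caricature of a person"       # hypothetical prompt

image = base(prompt=prompt, image=init_image, strength=0.6).images[0]

# Refine the result *without* the LoRA.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = refiner(prompt=prompt, image=image, strength=0.3).images[0]
image.save("output.png")
```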
