I want to load the Flux base model and the LoRA weights from one of the checkpoints, and merge the LoRA weights into the base model. Following the LoRA documentation, I tried the following:
```python
from peft import PeftModel
from diffusers import FluxTransformer2DModel
```
I have tried using the pipeline's load_lora_weights() as well, and that works great for running inference with the pipeline.
However, I do not see an obvious way to merge the LoRA weights into the base model so that I can use just the merged model, for example for further training on different datasets.
If you want to apply LoRA only to the transformer, I think you can load the checkpoint as a plain tensor dict with load_file from the safetensors library, apply the LoRA with PEFT, and write the result back out with save_file.
Now all you should have to do is set up a LoraConfig and call get_peft_model(), but I don't know the proper contents of LoraConfig in this case.
Usually the pipeline handles this internally on its own…