How to unload an adapter in PEFT?

Hello,

I’m using `PeftModel.from_pretrained(base_model, lora_model_id)` to load a LoRA adapter onto a base LLM. The call seems to modify the `base_model` weights directly.

Is there a way to “unload” an adapter to get the original `base_model` weights back? I want to be able to switch between adapters in real time for multi-task inference.

Thanks!

Yes - check the LoRA docs.
There’s exactly what you asked for: the `unload()` method.
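
For reference, a minimal sketch of the round trip (the model and adapter IDs below are placeholders, substitute your own):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder IDs -- use your actual base model and adapter.
base_model = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base_model, "lora-adapter-id")

# unload() strips the LoRA layers and returns the original base model.
base_model = model.unload()
```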

However, I’m not sure how to load the LoRA adapter back after unloading. I guess it’s `model = PeftModel.from_pretrained(model, peft_model_id)`, since there seems to be no `load()` method. Please let me know if you find a better solution.
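
For the multi-task switching part, you may not need to unload at all: PEFT lets you keep several named adapters loaded and pick the active one with `set_adapter()`. A sketch of that workflow (adapter names and IDs are made up):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("base-model-id")

# Load the first adapter under an explicit name.
model = PeftModel.from_pretrained(
    base_model, "task-a-adapter-id", adapter_name="task_a"
)

# Additional adapters can be loaded side by side...
model.load_adapter("task-b-adapter-id", adapter_name="task_b")

# ...and activated on demand, without unloading anything.
model.set_adapter("task_a")
# run task-A inference here
model.set_adapter("task_b")
# run task-B inference here
```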


Just stumbled across this post, not sure if other folks are still looking into this, but the latest versions of transformers/peft seem to have `unload_adapters` and `load_adapters` methods. Here in the docs. This made things much easier!
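
Untested, but from a quick look at the transformers PEFT integration, something along these lines should work directly on a transformers model (IDs are placeholders):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("base-model-id")

# Load a PEFT adapter straight onto the transformers model.
model.load_adapter("lora-adapter-id", adapter_name="task_a")

# Temporarily fall back to the plain base-model weights...
model.disable_adapters()

# ...and switch the adapter back on again.
model.enable_adapters()
```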