Correct way to load multiple LoRA adapters for inference

Found a solution!

Instead of loading the PeftModel from the base directory, I loaded it from adapter_1, then loaded adapter_2 and activated both for inference.

from transformers import AutoModelForSequenceClassification
from peft import PeftModelForSequenceClassification

adapter_1 = "/path/to/model/adapter_1"
adapter_2 = "/path/to/model/adapter_2"

# model_name is the base checkpoint the adapters were trained from
base_model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, output_hidden_states=False
)

# Load the first adapter under an explicit name, then add the second
peft_model = PeftModelForSequenceClassification.from_pretrained(
    base_model, adapter_1, adapter_name="adapter_1"
)
peft_model.load_adapter(adapter_2, adapter_name="adapter_2")

# Activate both adapters simultaneously
peft_model.base_model.set_adapter(["adapter_1", "adapter_2"])