Correct way to load multiple LoRA adapters for inference

I have trained two LoRA adapters on top of the same base model and saved each of them with model.save_pretrained(). Now I am trying to load both adapters for inference. My current approach is this:

from transformers import AutoModelForSequenceClassification
from peft import PeftModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)
model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, adapter_name="adapter_1", num_labels=2)
model.load_adapter(adapter_2, adapter_name="adapter_2")

weighted_adapter_name = "two-lora"
model.add_weighted_adapter(
    adapters=["adapter_1", "adapter_2"],
    weights=[0.7, 0.3],
    adapter_name=weighted_adapter_name,
    combination_type="linear",
)

But this gives me the error: Cannot add weighted adapters if they target the same module with modules_to_save, but found 1 such instance(s).

Then, I tried another method from this documentation:

from peft import PeftMixedModel

base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)
model = PeftMixedModel.from_pretrained(base_model, adapter_1, adapter_name="adapter_1")
model.load_adapter(adapter_2, adapter_name="adapter_2")
model.set_adapter(["adapter_1", "adapter_2"])

But this too throws an error: ValueError: Only one adapter can be set at a time for modules_to_save.

I don’t understand what I am doing wrong. Should I try this:

  • get_peft_model with base_model and adapter_1
  • train this adapter
  • add_adapter with adapter_2 to this model
  • train second adapter
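
In code, I imagine the training side would look roughly like this (a sketch; lora_config_1 and lora_config_2 stand in for whatever LoraConfig objects the adapters are trained with):

from peft import LoraConfig, get_peft_model

# Hypothetical configs; SEQ_CLS matches the sequence classification head
lora_config_1 = LoraConfig(task_type="SEQ_CLS", r=8)
lora_config_2 = LoraConfig(task_type="SEQ_CLS", r=8)

model = get_peft_model(base_model, lora_config_1, adapter_name="adapter_1")
# ... train adapter_1 ...

model.add_adapter("adapter_2", lora_config_2)
model.set_adapter("adapter_2")
# ... train adapter_2 ...

# With named (non-default) adapters, save_pretrained writes each adapter
# to its own subdirectory
model.save_pretrained("/path/to/model")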

But with this approach, how would I load both adapters for inference?


Like this?
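
Roughly this, perhaps (a sketch, assuming both adapters were trained on the same base model; model_name and the adapter paths are placeholders):

from transformers import AutoModelForSequenceClassification
from peft import PeftModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Load the first adapter when creating the PeftModel, then add the second by name
model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, adapter_name="adapter_1", num_labels=2)
model.load_adapter(adapter_2, adapter_name="adapter_2")

# Activate both adapters at once on the underlying LoRA model
model.base_model.set_adapter(["adapter_1", "adapter_2"])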


Thanks for the reply! I tried this and it works perfectly. But when I try to save the model and then load it from a local directory, I get the error ValueError: Can't find 'adapter_config.json' at '/path/to/model'. I have also tried pushing the model to the Hub and loading it from there, but I get the same error. There is indeed no adapter_config.json at that path; the json files actually sit inside separate subdirectories, one per adapter.

The file structure is like this:

model
|____adapter_1
|    |_____adapter_config.json
|    |_____adapter_model.safetensors
|____adapter_2
|    |_____adapter_config.json
|    |_____adapter_model.safetensors
|____special_tokens_map.json
|____tokenizer.json
|____tokenizer_config.json
|____vocab.txt
|____README.md

I am trying to load the model with adapters like this (the code is from this discussion):

from transformers import AutoModelForSequenceClassification
from peft import PeftConfig, PeftModelForSequenceClassification

outputs = "/path/to/model"
adapter_1 = "/path/to/model/adapter_1"
adapter_2 = "/path/to/model/adapter_2"

adapter_1_config = PeftConfig.from_pretrained(adapter_1)
adapter_2_config = PeftConfig.from_pretrained(adapter_2)

base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)

# This line raises the ValueError: there is no adapter_config.json
# in the top-level model directory
peft_model = PeftModelForSequenceClassification.from_pretrained(base_model, outputs, num_labels=2)
peft_model.load_adapter(adapter_1)
peft_model.load_adapter(adapter_2)

Found a solution!

Instead of loading the PeftModel from the base directory, I loaded it from the adapter_1 directory, then loaded adapter_2 as well and used both for inference.

from transformers import AutoModelForSequenceClassification
from peft import PeftModelForSequenceClassification

adapter_1 = "/path/to/model/adapter_1"
adapter_2 = "/path/to/model/adapter_2"

base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)

# Initialize the PeftModel from one adapter directory, which does contain
# an adapter_config.json
peft_model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, num_labels=2)

# Load both adapters under explicit names and activate them together
peft_model.load_adapter(adapter_1, adapter_name="adapter_1")
peft_model.load_adapter(adapter_2, adapter_name="adapter_2")
peft_model.base_model.set_adapter(["adapter_1", "adapter_2"])
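
For completeness, inference then runs as usual (a sketch; the tokenizer path and input text are placeholders):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/path/to/model")
inputs = tokenizer("example input text", return_tensors="pt")

# Both active adapters contribute to the forward pass
peft_model.eval()
with torch.no_grad():
    logits = peft_model(**inputs).logits
print(logits.softmax(dim=-1))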
