I have a basic model and some trained adapters. How can I switch between them?
Based on the documentation, this is done using the `set_adapter` method.
However, if you look at the code, the `add_adapter`, `set_adapter`, and `enable_adapters` methods do not interact in any way with the coefficients stored in the `adapter_model.safetensors` file. The only method that uses the saved adapter coefficients is `load_adapter`. I put a breakpoint in the `load_file` function in the `torch` module of the `safetensors` package, and none of the methods except `load_adapter` ever reach it.
The question arises: how do the recommended methods (`add_adapter`, `set_adapter`, `enable_adapters`) use the coefficients of the trained adapter if they don't even load them?
I am experiencing the same issue when switching between multiple adapters: despite following the documentation, I do not observe any change in the model's behavior when switching between the adapters.
```python
base_model.add_adapter(PeftConfig.from_pretrained(peft_model_output_dir1), adapter_name="adapter_1")
base_model.add_adapter(PeftConfig.from_pretrained(peft_model_output_dir2), adapter_name="adapter_2")
```
When I switch between the adapters using the `set_adapter` method, there is no observable change in the model's behavior. The outputs remain the same regardless of which adapter is active.
I suspect that the `set_adapter` method does not actually activate the specified adapter. Instead, I notice a change in behavior only when I merge the adapter into the base model.
The documentation does not mention the need to perform a merge when switching adapters. Additionally, the methods `add_adapter`, `set_adapter`, and `enable_adapters` do not appear to have any effect.
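For contrast, a minimal, hypothetical sketch (again not the real PEFT code) of what I would *expect*: if the saved coefficients are actually copied into the model, as `load_adapter` appears to do, then switching the active adapter should visibly change the output.

```python
# Hypothetical sketch: when saved coefficients ARE loaded (the
# load_adapter path), switching the active adapter changes the output.

saved_1 = {"weight": 0.5}   # stands in for adapter_1's adapter_model.safetensors
saved_2 = {"weight": -0.5}  # stands in for adapter_2's adapter_model.safetensors

adapters = {}

def load_adapter(saved, name):
    # Copies trained coefficients from "disk" into the model.
    adapters[name] = dict(saved)

def forward(x, active):
    return x + adapters[active]["weight"]

load_adapter(saved_1, "adapter_1")
load_adapter(saved_2, "adapter_2")

# With distinct trained weights loaded, switching is observable.
assert forward(1.0, "adapter_1") != forward(1.0, "adapter_2")
```

This is the behavior I expected from `add_adapter` + `set_adapter`, but only see after merging.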
Please provide clarification on how to correctly switch between adapters.
Pinging @ybelkada here