I have fine-tuned a 4-bit quantized gemma-2b on a task. The fine-tuned model has a 'default' PEFT/LoRA adapter. I now want to train this model further on a second task by adding one more adapter. I load the fine-tuned model as a PEFT model using the `AutoPeftModelForCausalLM` class, but when I try to add an adapter to the loaded model with `model.add_adapter(peft_config, adapter_name="t2")`, I get: TypeError: PeftModel.add_adapter() got multiple values for argument 'adapter_name'
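Here is a rough sketch of what I am doing. The model path and the LoRA hyperparameters below are placeholders, not my exact setup:

```python
import torch
from peft import AutoPeftModelForCausalLM, LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit quantization, as used for the original fine-tuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the already fine-tuned model; it comes with the "default" LoRA adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    "path/to/finetuned-gemma-2b",  # placeholder path
    quantization_config=bnb_config,
)

# LoRA config for the second task (example hyperparameters)
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# This is the call that fails with:
# TypeError: PeftModel.add_adapter() got multiple values for argument 'adapter_name'
model.add_adapter(peft_config, adapter_name="t2")
```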
Has anyone tried training a base model for multiple tasks using PEFT? If so, please help!