`get_peft_model` or `model.add_adapter`

Hi. I am trying to train a LoRA adapter with 4-bit quantization on Llama 2 7B. My LoRA config looks like this:

```python
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type=TaskType.SEQ_CLS,
)
```

My question is: is this the correct way to use QLoRA for sequence classification (is that even a well-defined thing?), and if so, which of the following is the correct way to attach it to a (4-bit quantized) Llama 2 model:

`model.add_adapter(peft_config)`

or

`model = get_peft_model(model, peft_config)`
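For context, here is a minimal sketch of the full setup I had in mind, using the second option. The model name, `num_labels`, and quantization settings are just what I'm currently trying, so please treat this as a sketch of my attempt rather than a known-good recipe:

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization config for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Same LoRA config as above
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type=TaskType.SEQ_CLS,
)

# Load the base model with a classification head (requires access to the
# gated Llama 2 weights; num_labels=2 is just a placeholder for my task)
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=2,
    quantization_config=bnb_config,
)

# Prepare the quantized model for k-bit training, then wrap it with the adapter
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```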

Many thanks for your help!
