"You cannot perform fine-tuning on purely quantized models." error in LoRA model training?

I am training a LoRA model for an LLM fine-tuning task. During training I get the error below. I had already added the adapters and the LoraConfig by following the suggested page, Load adapters with 🤗 PEFT, but I still hit the same problem every time. Can anyone help?


from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
import transformers
LORA_R = 256 # 512
LORA_ALPHA = 512 # 1024
LORA_DROPOUT = 0.05

# Define LoRA Config
peft_config = LoraConfig(
                 r = LORA_R, # the dimension of the low-rank matrices
                 lora_alpha = LORA_ALPHA, # scaling factor for the weight matrices
                 lora_dropout = LORA_DROPOUT, # dropout probability of the LoRA layers
                 bias="none",
                 task_type="CAUSAL_LM",
                 target_modules=["query_key_value"]
)

# Prepare int-8 model for training - utility function that prepares a PyTorch model for int8 quantization training. <https://huggingface.co/docs/peft/task_guides/int8-asr>
model2 = prepare_model_for_int8_training(model)
# initialize the model with the LoRA framework

# model3 = get_peft_model(model2, peft_config)
# model3.print_trainable_parameters()

model2.add_adapter(peft_config = peft_config, adapter_name = "adapter_1")

# training the model 
trainer = transformers.Trainer(
    model=model2,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=split_dataset['train'],
    eval_dataset=split_dataset["test"],
    data_collator=data_collator,
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()

The error is:

ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning.

I just don't understand. I have already added my adapters, so why does it keep saying that I am fine-tuning a purely quantized model that needs adapters attached?
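For reference, the commented-out get_peft_model lines above would expand to something roughly like this (a minimal sketch using the same model, tokenizer, training_args, split_dataset and data_collator from my setup; I'm not sure whether this is the intended way to attach the adapters):

from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
import transformers

# Same LoRA config as above
peft_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=LORA_DROPOUT,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],
)

# Prepare the int-8 model, then wrap it as a PeftModel instead of calling add_adapter
model2 = prepare_model_for_int8_training(model)
model3 = get_peft_model(model2, peft_config)
model3.print_trainable_parameters()  # should report only a small trainable fraction

# Pass the PeftModel (not the bare quantized model) to the Trainer
trainer = transformers.Trainer(
    model=model3,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=split_dataset["train"],
    eval_dataset=split_dataset["test"],
    data_collator=data_collator,
)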

Hey @SingaporeBirdie, did you solve the above problem?


Hey @SingaporeBirdie, @jesuisduc,

were either of you able to resolve the issue?


Try restarting the kernel :muscle: :muscle: