How do I create a text-classification pipeline with a PEFT/LoRA-trained model? My (partial) code is:
from transformers import (AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, pipeline)
from peft import LoraConfig, get_peft_model

# load the base model with a sequence-classification head
model_checkpoint = 'roberta-base'
model = AutoModelForSequenceClassification.from_pretrained(
    model_checkpoint, num_labels=2, id2label=id2label, label2id=label2id)
...
# LoRA adapter configuration (only the attention query projections are adapted)
peft_config = LoraConfig(
    task_type="SEQ_CLS",
    r=4,
    lora_alpha=32,
    lora_dropout=0.01,
    target_modules=['query'],
)
model = get_peft_model(model, peft_config)
training_args = TrainingArguments(
    output_dir=model_checkpoint + "-lora-text-classification",
    learning_rate=lr,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_epochs,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    report_to=None,
)
# create the Trainer object
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
But the last step generates this error:
The model 'PeftModelForSequenceClassification' is not supported for text-classification. Supported models are […].
What is the correct way to create a text-classification pipeline with a PEFT/LoRA-trained model?
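One workaround I am considering (I am not sure whether it is the intended approach) is to merge the LoRA adapter weights back into the base model with PEFT's merge_and_unload() before building the pipeline, roughly like this:

# sketch of a possible workaround: merge the LoRA adapter weights into the
# base RoBERTa model, which gives back a plain sequence-classification model
# that the pipeline does recognize
merged_model = model.merge_and_unload()

classifier = pipeline(
    "text-classification",
    model=merged_model,
    tokenizer=tokenizer,
)

print(classifier("This movie was surprisingly good!"))

Is something like this the recommended way, or does pipeline() support PEFT models directly in a way I am missing?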