Fine-tuning a Causal Model for a Classification Task Using LoRA

I am trying to fine-tune a causal LLM to use it as a classifier. My understanding is that I can add a classification head to the model and train it on my classification dataset.

Regarding the classification head: should it replace the model's output layer (the LM head), or should I add a new layer on top that takes the model's output as input and produces a single value corresponding to the class?

I also want to use LoRA since my compute resources are limited. Should I add the classification head first and then apply LoRA, or the other way around?


should I add the classification head, then apply LoRA?

Yes: add the classification head first, then apply LoRA. That way the PEFT wrapper sees the complete model, injects adapters into the base layers, and can mark the new head as fully trainable (via `modules_to_save`), since a randomly initialized head needs full-rank updates rather than a low-rank adapter.