Training a pre-trained model and fine-tuning it later

For an exercise in fine-tuning with PEFT, it was suggested that I use GPT-2. The task is text classification; assume a dataset with two categories, e.g., sentiment analysis. When I load GPT-2 like this:

from transformers import AutoModelForSequenceClassification

model_name = "gpt2"
id2label = {0: "negative", 1: "positive"}   # illustrative label maps for a two-class task
label2id = {"negative": 0, "positive": 1}

model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)

I get this warning:

Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Question 1: Given that warning, does it actually make sense to use this model for predictions without first completing its training on the downstream task? And does it make sense to compare the performance of the pre-trained model to a fine-tuned version of it? I noticed that the accuracy of the pre-trained GPT-2 is close to 0.5 on this task, i.e., chance level for two classes.
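
For context, I measure accuracy with a loop roughly like this, where eval_loader is an illustrative DataLoader of tokenized batches, not my exact code:

import torch

# GPT-2 has no padding token by default; the sequence-classification head
# needs one to locate the last token in padded batches (assumption: pad = eos).
model.config.pad_token_id = model.config.eos_token_id

model.eval()
correct, total = 0, 0
with torch.no_grad():
    for batch in eval_loader:
        logits = model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
        ).logits
        preds = logits.argmax(dim=-1)
        correct += (preds == batch["labels"]).sum().item()
        total += batch["labels"].numel()

print(f"accuracy: {correct / total:.3f}")  # comes out near 0.5 with the untrained head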

Next, I trained GPT-2 on the dataset in question and then used PEFT to fine-tune that custom-trained model with LoRA, which appeared to work. Then I saved the model. When I loaded the model back later, I got the very same warning message (“Some weights of … were not initialized … etc.”).
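
For reference, the LoRA and save/reload steps looked roughly like this (directory names and LoRA hyperparameters are illustrative, not my exact code):

from peft import (
    AutoPeftModelForSequenceClassification,
    LoraConfig,
    TaskType,
    get_peft_model,
)

# Wrap the custom-trained model from the full training step in LoRA adapters.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16)
peft_model = get_peft_model(model, lora_config)

# ... LoRA fine-tuning happens here ...

# save_pretrained() stores only the adapter weights plus an
# adapter_config.json that records which base model the adapter belongs to.
peft_model.save_pretrained("gpt2-custom-lora")

# Later, in a fresh session: loading the adapter resolves the base model
# from adapter_config.json, and this is where the warning reappears.
reloaded = AutoPeftModelForSequenceClassification.from_pretrained(
    "gpt2-custom-lora",
    num_labels=2,
)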

Question 2: What should I do to use PEFT on my custom-trained model so that, when I load the PEFT model, it refers to my custom-trained model and not the original pre-trained GPT-2?