Fine-tuning an LLM for regression yields low loss during training but not in inference?

Hi,

Thanks @nielsr for the guidance. I'm also training a regression model, but when I print the labels inside the trainer, they contain the input_ids instead of the regression targets. Any idea where the pipeline might be interpreting the task as causal LM? I set the task type in my LoraConfig accordingly, and my model is an AutoModelForSequenceClassification.
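
In case it helps, here's a rough sketch of my setup (the base model name and LoRA hyperparameters below are placeholders, not my exact values):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder base model
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# num_labels=1 and problem_type="regression" so the head outputs a single
# scalar and the model uses MSE loss
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=1,
    problem_type="regression",
)

# Decoder-only base models often have no pad token, so I set one for padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

# task_type is SEQ_CLS (not CAUSAL_LM), so PEFT should treat this as a
# sequence classification task and keep the score head trainable
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
```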