Multi-Output Regression using Pre-trained LLM (Roberta)

I have been trying to fine-tune a pre-trained large language model (RoBERTa) for multi-output regression. The dataset has text and 5 corresponding scores for 5 personality traits. When I run training, the accuracy doesn't increase, not even by 0.1%. The code is an adaptation of the RoBERTa-for-sequence-classification model for regression.
Can anyone please advise on what the possible problem could be?
I have tried changing the parameters, but to no avail; the accuracy is stuck at 0%.
The compute_metrics function and the training are set up for multi-output regression.

I also tried a smaller dataset, but there is still no improvement in the output.
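One thing worth checking: for regression, an accuracy metric that compares continuous predictions to continuous targets for an exact match will essentially always be 0%, even while the model is learning. A minimal sketch of a regression-appropriate `compute_metrics` (assuming the Hugging Face `Trainer` convention of receiving a `(predictions, labels)` pair; the dummy scores below are made up for illustration):

```python
import numpy as np

def compute_metrics(eval_pred):
    # Hugging Face Trainer convention: a (predictions, labels) pair,
    # each of shape (n_samples, 5) for the 5 personality traits.
    predictions, labels = eval_pred
    mse = np.mean((predictions - labels) ** 2)
    mae = np.mean(np.abs(predictions - labels))
    # Exact-match "accuracy" on continuous outputs is almost always 0,
    # which would explain an accuracy metric stuck at 0% for regression.
    exact_match = np.mean(np.all(predictions == labels, axis=1))
    return {"mse": float(mse), "mae": float(mae), "exact_match": float(exact_match)}

# Dummy continuous scores for 5 traits (illustrative only):
preds = np.array([[0.10, 0.20, 0.30, 0.40, 0.50]])
labels = np.array([[0.15, 0.20, 0.35, 0.40, 0.50]])
print(compute_metrics((preds, labels)))
```

If the metric in use is classification accuracy, switching to MSE/MAE (and confirming the loss is MSE rather than cross-entropy) is usually the first thing to try.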

Could you share some details of the code?