Training arguments modification and tuning

Hello, I am fine-tuning several transformer models based on the XLM-RoBERTa-large architecture. I would like to tune the training arguments so that training is as efficient as possible, I get the best results, and I avoid overfitting as much as I can. I wonder whether there is some way of training the model repeatedly with slightly different hyperparameters each time and automatically keeping only the best model, or whether I have to do trial and error and save the best model myself.
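For context, the manual approach I have in mind looks roughly like this sketch. Note that `train_and_eval` is just a placeholder here (in my real setup it would build `TrainingArguments` and a `Trainer` and return an eval metric); the grid values and the dummy scoring are purely illustrative:

```python
from itertools import product

def train_and_eval(learning_rate, batch_size):
    """Placeholder for a real fine-tuning run.

    In practice this would construct TrainingArguments + Trainer,
    call trainer.train(), and return an evaluation metric.
    Here it just returns a dummy score that peaks at lr=2e-5, bs=16.
    """
    return 1.0 - abs(learning_rate - 2e-5) * 1e4 - abs(batch_size - 16) * 0.01

def grid_search(learning_rates, batch_sizes):
    """Try every combination and keep track of the best one."""
    best_score, best_cfg = float("-inf"), None
    for lr, bs in product(learning_rates, batch_sizes):
        score = train_and_eval(lr, bs)
        if score > best_score:
            best_score, best_cfg = score, (lr, bs)
            # In a real run I would save the checkpoint here,
            # e.g. trainer.save_model("best_model"), overwriting
            # the previous best so only one model is kept on disk.
    return best_cfg, best_score

best_cfg, best_score = grid_search([1e-5, 2e-5, 3e-5], [8, 16, 32])
print(best_cfg, best_score)
```

This works, but it is exactly the trial-and-error loop I would like to avoid writing by hand if the library already offers something built in for it.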
Thank you for any tips and tricks, and enjoy the rest of your day :)