How to set generation parameters when fine-tuning

I’m working with the Helsinki-NLP/opus-mt-en-es model for translation. I found that setting the repetition_penalty parameter when generating sequences was helpful. Now I want to fine-tune the model on my own data and would like to keep using repetition_penalty, but I can’t see where to set this parameter in the Trainer API (Seq2SeqTrainer). Any help would be appreciated.
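
For context, here is roughly what my current generation call looks like (the repetition_penalty value of 1.2 and the example sentence are just placeholders, not my real settings):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the MarianMT English->Spanish checkpoint
model_name = "Helsinki-NLP/opus-mt-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Plain inference: passing repetition_penalty directly to generate() works fine.
# The value 1.2 is just an example, not a tuned setting.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, repetition_penalty=1.2, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```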

I also ran into a related issue when evaluating the fine-tuned model: it only generates 20 new tokens, but I want it to generate 256. Has anyone found a possible solution?
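
For reference, this is a trimmed sketch of the kind of training arguments I pass to Seq2SeqTrainer. The output directory is a placeholder, and generation_max_length=256 is only my guess at where the length cap lives; I haven’t confirmed it’s the right way to lift the 20-token limit:

```python
from transformers import Seq2SeqTrainingArguments

# A trimmed-down version of the arguments I pass to Seq2SeqTrainer.
# "finetuned-opus-mt-en-es" is a placeholder output directory, and
# generation_max_length=256 is only my guess at where the length cap lives.
training_args = Seq2SeqTrainingArguments(
    output_dir="finetuned-opus-mt-en-es",
    predict_with_generate=True,     # evaluation goes through generate()
    generation_max_length=256,      # untested guess for raising the 20-token limit
    per_device_eval_batch_size=16,
)

# There is still no obvious slot here for repetition_penalty.
```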