How can I change the max_length of my own model in the Hugging Face Inference API?

I have built a model, available here: Omaratef3221/flan-t5-base-dialogue-generator. The model generates dialogues from short texts, so it's a text2text-generation task. When I try the model with the Hugging Face Inference API widget on the model's page on the Hub, the output is cut off by a limited max_length. I would like to increase this max_length in the model's settings on the Hub.

Is there a way of doing that?
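For context, here is roughly how I'm calling the model now. This is just an illustrative sketch (the input text and the max_length value are made up); it shows the kind of per-request payload the hosted Inference API accepts, whereas what I'm asking about is changing the *default* used by the widget on the Hub page:

```python
import json

# Hypothetical example payload for the hosted Inference API endpoint:
# https://api-inference.huggingface.co/models/Omaratef3221/flan-t5-base-dialogue-generator
payload = {
    "inputs": "Two friends are planning a weekend trip to the mountains.",
    "parameters": {
        # Per-request generation override; the widget on the Hub
        # page does not seem to use this value by default.
        "max_length": 256,
    },
}

# The payload would be POSTed as JSON, e.g. with requests:
# requests.post(API_URL, headers={"Authorization": f"Bearer {token}"}, json=payload)
print(json.dumps(payload, indent=2))
```

Passing parameters per request like this works for my own scripts, but visitors trying the widget on the model page still get truncated dialogues.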