Can you set max_new_tokens in Azure ML studio?

I'm deploying the google-flan-t5-large model in Azure ML studio. When I test the endpoint, the output is cut off, and I'm pretty sure it's a result of max_new_tokens or a similar generation limit. The model is running on a decent-sized CPU instance with enough RAM. Can you set max_new_tokens within the Azure ML model deployment?
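For context, this is roughly how I'm constructing the test request. This is a hedged sketch: the endpoint URI and key are placeholders from the deployment's Consume tab, and I'm assuming the model-catalog scoring schema where a `parameters` object in the payload is forwarded to the generation pipeline, which would let `max_new_tokens` be set per request rather than in the deployment itself:

```python
import json

# Placeholders - copy the real values from the endpoint's "Consume" tab.
SCORING_URI = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

# Assumed payload shape for text-generation models from the Azure ML model
# catalog: "parameters" keys are passed through to the generation call,
# so max_new_tokens here would raise the per-request generation cap.
payload = {
    "input_data": {
        "input_string": ["Summarize: Azure Machine Learning deploys models as online endpoints."],
        "parameters": {"max_new_tokens": 256},
    }
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

body = json.dumps(payload)
print(body)

# Against a live endpoint this would be sent with, e.g.:
# import requests
# response = requests.post(SCORING_URI, data=body, headers=headers)
# print(response.json())
```

If the deployment ignores the `parameters` field, I assume the alternative is a custom scoring script that sets `max_new_tokens` when calling the model.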