Is there a response length limit for the Inference API?

Hi, I am testing the Inference API with different models to rewrite texts, but no matter which model I choose, the response is only about 2-3 sentences long. How can this be adjusted?
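For reference, here is roughly how I am calling the API (the model name and token are placeholders, and I only guessed that something like `max_new_tokens` under `parameters` might control the output length, since many models seem to default to a fairly small value):

```python
import json
import urllib.request

# Placeholder endpoint/model and token -- substitute your own.
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-large"
API_TOKEN = "hf_xxx"

def build_payload(text: str, max_new_tokens: int = 500) -> dict:
    # "max_new_tokens" is my guess at the knob that caps generated length;
    # without it, the models appear to stop after a few sentences.
    return {
        "inputs": f"Rewrite the following text:\n{text}",
        "parameters": {"max_new_tokens": max_new_tokens},
    }

def query(payload: dict) -> dict:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Even with a payload like this, the responses stay short, so I am not sure whether the parameter is being ignored or whether there is a hard cap on the hosted API.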