Inference API - Response of Higher Length

How can I increase the `max_length` of the response from the Inference API beyond 500? Is this limit set for all models, or only for some?
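For reference, this is the kind of request I mean — a sketch of a text-generation call to the hosted Inference API, with generation options passed in the `parameters` object (the model id, token, and the exact parameter name, `max_new_tokens` vs. `max_length`, are assumptions on my part, not confirmed behavior):

```python
import json

# Placeholder model id and token -- substitute your own.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}

payload = {
    "inputs": "Once upon a time",
    "parameters": {
        # Requesting more than 500 tokens -- whether the service
        # honors values above its cap is exactly the question.
        "max_new_tokens": 600,
    },
}

# The actual call would be something like:
#   import requests
#   response = requests.post(API_URL, headers=headers, json=payload)
#   print(response.json())
print(json.dumps(payload, indent=2))
```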