Hi @philschmid! No, currently I don’t.
I know that this solution exists, but I’m wondering whether there is a way to configure the model itself to apply truncation by default.
The idea is to make the model available in the simplest way possible, so my users shouldn’t have to worry about adding NLP-specific parameters to their requests.
I’ll give `truncation` a try, just to see if that would be a workaround. Could I also pass `max_length` in the request?
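For context, here is a minimal sketch of what such a per-request payload might look like, assuming the endpoint forwards a `parameters` object to the underlying pipeline (the field names follow the common Inference API convention; the exact schema for a custom handler may differ):

```python
import json

def build_payload(text: str, max_length: int = 512) -> str:
    """Build a request body asking the endpoint to truncate long inputs.

    Assumption: the serving layer passes everything under "parameters"
    through to the tokenizer/pipeline as keyword arguments.
    """
    return json.dumps({
        "inputs": text,
        "parameters": {
            "truncation": True,       # drop tokens beyond max_length
            "max_length": max_length, # cap on the tokenized sequence length
        },
    })

payload = build_payload("a very long document ...")
print(payload)
```

If the goal is to spare users from sending these parameters at all, baking the same defaults into a custom inference handler on the server side would keep client requests as plain `{"inputs": "..."}` calls.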