NLP Truncation Parameter for Serverless Endpoint

Hello everyone! I put this question in the beginner category because I think (and hope) that what I'm about to ask stems from a gap in my own understanding.

I am testing several pre-trained Text-Classification models that I found on the Hub. Many of them have max_length=512. I deploy them to a SageMaker Serverless Endpoint and invoke them from a Lambda function.

In an old question of mine on the forum, it was suggested that I include the parameter truncation=True.
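For context, this is roughly the request body I am sending. I'm assuming the standard Hugging Face Inference Toolkit format here, where pipeline arguments go under a "parameters" key next to "inputs" (the text is made up):

```python
import json

# Sketch of the JSON body sent to the endpoint; the Hugging Face
# Inference Toolkit forwards the "parameters" dict to the pipeline call.
payload = json.dumps({
    "inputs": "A very long document that may exceed 512 tokens ...",
    "parameters": {
        "truncation": True,  # cut the input at the model's max length
    },
})
```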

I would like to know: if the text is longer than 512 tokens, is it truncated? Do I then lose the information in the "excess" part, or is the document split into chunks, with each chunk evaluated and a single result produced for the whole document?

Also, where can I find the list of parameters that I can pass to my inference endpoint as input?
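For reference, this is roughly how I invoke the endpoint from my Lambda. The endpoint name is made up, and I'm assuming the standard boto3 sagemaker-runtime API; the helper just shows where the parameters end up in the body:

```python
import json


def build_body(text: str) -> str:
    """Build the request body; truncation=True trims inputs past max_length."""
    return json.dumps({"inputs": text, "parameters": {"truncation": True}})


def classify(text: str, endpoint_name: str = "my-serverless-endpoint") -> dict:
    """Invoke the SageMaker endpoint with pipeline parameters in the body."""
    import boto3  # imported here so build_body stays usable without AWS

    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_body(text),
    )
    return json.loads(response["Body"].read())
```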