ClientError: 400 when using batch transform for inference

The model cardiffnlp/twitter-roberta-base-sentiment doesn't have a maximum length (`model_max_length`) defined in its tokenizer config. I tried to reach out to the authors, but they haven't responded. See [Add `tokenizer_max_length` to `cardiffnlp/twitter-roberta-base-sentiment` · Issue #13459 · huggingface/transformers](https://github.com/huggingface/transformers/issues/13459).
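
You can verify this locally; a minimal sketch (when no max length is configured, `transformers` falls back to a huge sentinel value, so `truncation=True` effectively never truncates):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")

# With no max length in tokenizer_config.json, this prints a huge
# sentinel (~1e30) rather than 512, so truncation has no effective
# limit and long inputs can overflow the model's position embeddings,
# which is a likely cause of the 400 error at inference time.
print(tokenizer.model_max_length)
```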

You could "fork" the model: create a new model repository, push the weights, and add a `tokenizer_config.json` that defines the max length; then `truncation=True` should work properly.
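
Here is a minimal sketch of that fork, assuming a hypothetical target repo name (`your-username/...`) and that RoBERTa-base's 512-position limit is the right value. Setting `model_max_length` before pushing writes it into the new repo's `tokenizer_config.json`:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

source = "cardiffnlp/twitter-roberta-base-sentiment"
# Hypothetical target repo; replace with your own namespace.
target = "your-username/twitter-roberta-base-sentiment-maxlen"

tokenizer = AutoTokenizer.from_pretrained(source)
# RoBERTa-base has 512 position embeddings; setting this makes
# truncation=True actually cap inputs instead of being a no-op.
tokenizer.model_max_length = 512

model = AutoModelForSequenceClassification.from_pretrained(source)

# Push both to the new repository (requires `huggingface-cli login`).
tokenizer.push_to_hub(target)
model.push_to_hub(target)
```

After that, pointing your endpoint or batch transform job at the forked repo and tokenizing with `truncation=True` should cap inputs at 512 tokens, which avoids the 400 error from over-long sequences.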