How do I set up a TextClassificationPipeline that truncates token sequences?

I'm using a TextClassificationPipeline built from a pretrained model ("bhadresh-savani/roberta-base-emotion"), and I would like it to truncate inputs to the model's maximum sequence length (512 tokens for this RoBERTa model). This does not happen by default: long inputs raise an error instead of being truncated.

My code to set up the pipeline looks like this:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/roberta-base-emotion")
model = AutoModelForSequenceClassification.from_pretrained("bhadresh-savani/roberta-base-emotion")
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device=0)
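For context, here is a sketch of the behavior I'm after. My understanding (an assumption on my part, based on the pipeline call forwarding extra keyword arguments to the tokenizer's preprocessing step) is that truncation and max_length could be passed per call, something like this (device argument omitted so it runs on CPU):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)

tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/roberta-base-emotion")
model = AutoModelForSequenceClassification.from_pretrained("bhadresh-savani/roberta-base-emotion")
classifier = TextClassificationPipeline(
    model=model, tokenizer=tokenizer, return_all_scores=True
)

# An input far longer than the 512-token limit of this RoBERTa model.
long_text = "I am so happy today " * 500

# Extra keyword arguments here would (if forwarded to the tokenizer)
# truncate the token sequence before it reaches the model.
scores = classifier(long_text, truncation=True, max_length=512)
```

Is this the intended way to enable truncation, or does it need to be configured on the tokenizer or pipeline at construction time?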