Truncating sequence -- within a pipeline

I see. Do you think you could share a snippet of how you did that? I'm unsure what to do with the output of the model. Thanks again!

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# load the SST-2 fine-tuned checkpoint and its tokenizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# tokenize with truncation so the input is cut to at most 10 tokens
pt_batch = tokenizer(
    "We are very happy to show you the 🤗 Transformers library.",
    truncation=True,
    max_length=10,
    return_tensors="pt"
)

# run the model on the tokenized batch; the result is a SequenceClassifierOutput
pt_outputs = pt_model(**pt_batch)

pt_outputs

Which gives…

SequenceClassifierOutput(loss=None, logits=tensor([[-4.2644, 4.6002]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
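
Those logits are raw, unnormalized scores for the two classes. Here is a minimal sketch of how you could turn them into a probability and a label (the softmax call and the id2label lookup below are my additions, not something from your snippet):

from torch import nn

# turn the logits into probabilities over the two classes
probs = nn.functional.softmax(pt_outputs.logits, dim=-1)

# pick the highest-scoring class and look up its name (NEGATIVE/POSITIVE for this checkpoint)
predicted_id = probs.argmax(dim=-1).item()
predicted_label = pt_model.config.id2label[predicted_id]

print(predicted_label, probs[0, predicted_id].item())

With the logits you posted, the positive score dominates, so this should print POSITIVE with a probability very close to 1.

If you would rather stay inside a pipeline, I believe recent transformers versions also accept the same tokenizer arguments directly in the call; untested on your version, so treat it as a sketch:

from transformers import pipeline

classifier = pipeline("sentiment-analysis", model=model_name)
classifier("We are very happy to show you the 🤗 Transformers library.", truncation=True, max_length=10)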