Tokenization of a Sequence Pair for a Pipeline

Hi,

For my model I tokenize the input like this:

encoded_dict = xlmr_tokenizer(term1, term2, max_length=max_len, padding='max_length', truncation=True, return_tensors='pt')

so that it receives two different strings as input (term1, term2), which it joins with the special separator token.
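For clarity, this is the pair template I believe XLM-RoBERTa applies (my assumption from looking at the decoded tokenizer output, not taken from the library source):

```python
# Sketch of the sequence-pair template I believe XLM-RoBERTa uses:
# <s> term1 </s></s> term2 </s>   (assumed, not copied from the library)
def build_pair_input(term1: str, term2: str) -> str:
    """Join a sentence pair with XLM-R style special tokens."""
    return f"<s>{term1}</s></s>{term2}</s>"

print(build_pair_input("good film", "great film"))
# <s>good film</s></s>great film</s>
```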

How can I use the Hugging Face pipeline with input like this?
If I load the pipeline like this:

tokenizer_xlmr = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
my_pipeline = pipeline("sentiment-analysis", model=my_model, tokenizer=tokenizer_xlmr)
my_pipeline([term1, term2])

I get two separate predictions instead of one. Thanks in advance for any kind of help!
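My understanding of why this happens (just a sketch with a stand-in function, not the real pipeline code): a list is treated as a batch of independent examples, so each string gets its own prediction:

```python
# Stand-in for the pipeline, to illustrate the batching behaviour I see.
# The labels/scores here are hypothetical; the real pipeline runs the model.
def fake_pipeline(inputs):
    if isinstance(inputs, list):
        # a list = a batch: one independent prediction per element
        return [{"label": "LABEL_0", "score": 0.5} for _ in inputs]
    # a single string = a single prediction
    return {"label": "LABEL_0", "score": 0.5}

print(len(fake_pipeline(["term1", "term2"])))  # 2 -> two separate predictions
```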

Further information:
Calling

my_pipeline(term1 + "</s></s>" + term2)

gives nearly the same output as running the trained model directly on the data from the encoded_dict defined above. The labels are the same, but the probabilities differ from the fifth/sixth decimal place onwards. Why is that?
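My current guess (unverified) is that both calls end up with the same token sequence, so the tiny difference would be numerical (padding to max_length changes the tensor shapes) rather than a tokenization difference. A sketch of the comparison; both helper names and templates are my assumptions:

```python
# Both templates below are assumptions for illustration, not library code.
def manual_concat(term1: str, term2: str) -> str:
    # what my_pipeline(term1 + "</s></s>" + term2) should see after the
    # tokenizer wraps the whole string in <s> ... </s>
    return f"<s>{term1}</s></s>{term2}</s>"

def pair_encoding(term1: str, term2: str) -> str:
    # what xlmr_tokenizer(term1, term2) builds before padding
    return f"<s>{term1}</s></s>{term2}</s>"

# if the sequences match, only padding/batching is left to explain the gap
print(manual_concat("a", "b") == pair_encoding("a", "b"))  # True
```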