Hi,
for my model, I tokenize my input like this:
```python
encoded_dict = xlmr_tokenizer(term1, term2, max_length=max_len, padding='max_length', truncation=True, return_tensors='pt')
```
so that the tokenizer receives two different strings (term1, term2) and joins them into a single sequence with the special separator token.
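For context, this is my understanding of what the paired call produces, as a small sketch with made-up token ids. The special-token layout is my assumption, based on the RoBERTa-style format (`<s> A </s></s> B </s>`) that xlm-roberta-base appears to use, with bos id 0 and eos id 2:

```python
# Sketch of the sequence-pair layout I believe the XLM-R tokenizer builds
# (assumption: RoBERTa-style special tokens <s> A </s></s> B </s>,
# with bos_token_id=0 and eos_token_id=2, before any padding is applied).
def build_pair(term1_ids, term2_ids, bos=0, eos=2):
    """Join two already-tokenized id sequences the way a single
    tokenizer(term1, term2) call would."""
    return [bos] + term1_ids + [eos, eos] + term2_ids + [eos]

# Made-up ids standing in for the tokens of term1 and term2:
print(build_pair([10, 11], [20]))  # [0, 10, 11, 2, 2, 20, 2]
```

So the model sees one joined sequence, not two independent ones, and that is the behavior I want to reproduce through the pipeline.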
How can I use the Hugging Face pipeline with input like this?
If I load the pipeline like this:
```python
tokenizer_xlmr = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
my_pipeline = pipeline("sentiment-analysis", model=my_model, tokenizer=tokenizer_xlmr)
my_pipeline([term1, term2])
```
I get two separate predictions instead of one, because the two strings are treated as a batch of two independent inputs rather than as one sentence pair. Thanks in advance for any help!