Weirdly enough, I am not able to reproduce the results of torch.softmax(model(**tokenizer([['Thanks ', ' You']], return_tensors="pt", padding=True, truncation=True)).logits, dim=1).detach().numpy() using the pipeline…
It seems like the issue comes from the way I have set up the tokenizer, but I can't seem to find what I am doing wrong. Would you happen to know what it could be?
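For what it's worth, here is a minimal sketch of where I suspect the two call paths could diverge (this is just my guess; "bert-base-uncased" is a stand-in for the actual checkpoint): a nested list given to the tokenizer is encoded as one sentence *pair*, whereas a text-classification pipeline treats a list of strings as a batch of independent sequences.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint

# A nested list encodes ONE (text, text_pair) example:
# [CLS] Thanks [SEP] You [SEP], with token_type_ids marking segment B.
enc_pair = tok([["Thanks ", " You"]], padding=True, truncation=True)
print(len(enc_pair["input_ids"]))      # one sequence
print(enc_pair["token_type_ids"][0])   # contains 1s for the second segment

# A flat list is a batch of TWO separate single-sequence examples,
# which is how the pipeline would read a plain list of strings.
enc_batch = tok(["Thanks ", " You"], padding=True, truncation=True)
print(len(enc_batch["input_ids"]))     # two sequences
```

If that is indeed the mismatch, the pair would need to be passed to the pipeline explicitly, e.g. as {"text": "Thanks ", "text_pair": " You"}, so that it encodes the same two-segment input as the manual tokenizer call.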