I'm using a pre-trained tokenizer and a fine-tuned model.
When I create a pipeline with the same model, my results differ each time I run it.
```python
from transformers import AutoTokenizer, TextClassificationPipeline

# `model` (the fine-tuned model) and `negative_samples` are defined earlier.
tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
mymodelclf = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
print(mymodelclf(negative_samples))
```
Running the same code twice produces two different sets of scores. Is this expected?
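For reference, here is a minimal sketch of the kind of nondeterminism I suspect might be involved (this is an assumption on my part, not confirmed): a PyTorch module containing dropout gives different outputs on repeated calls while in training mode, but identical outputs once switched to eval mode.

```python
import torch

# A dropout layer as a stand-in for a model that was left in training mode.
layer = torch.nn.Dropout(p=0.5)
x = torch.ones(100)

layer.train()  # training mode: dropout randomly zeroes elements each call
out1, out2 = layer(x), layer(x)
print(torch.equal(out1, out2))  # almost always False: dropout is stochastic

layer.eval()   # eval mode: dropout becomes a no-op
out3, out4 = layer(x), layer(x)
print(torch.equal(out3, out4))  # True: outputs are now deterministic
```

If something like this is the cause in my case, would calling `model.eval()` before building the pipeline be the right fix, or does the pipeline handle that itself?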