Inference from a fine-tuned model -- help with interpretation of results

Hello

Noob question:
I fine-tuned a Bert model for the quora question pairs data. I successfully saved it.

My question is how to use it for inference. My understanding is that if I load it in a pipeline, I should be able to pass in a pair of sentences and get either a 0 or 1 (dissimilar or similar). However, I get two separate scores, which is confusing to me.

Please see my code snippet below. Any help is appreciated.

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

predtok = AutoTokenizer.from_pretrained("./quora-finetuned.bin/")
model = AutoModelForSequenceClassification.from_pretrained("./quora-finetuned.bin", num_labels=2)

sim_model = pipeline("text-classification", model=model, tokenizer=predtok)

sim_model(["i like you", "i hate you"])

This yields the following output:
[{'label': 'LABEL_0', 'score': 0.8931376934051514},
 {'label': 'LABEL_0', 'score': 0.9227051734924316}]

which appears to be one score per question, not a single score for the question pair! How can I fix this?

Never mind, I figured it out.

Hey!

Out of curiosity, did you change your code to look like this instead?

predictions = sim_model(["i like you", "i hate you"])
mapped_predictions = [{'label': 0 if pred['label'] == 'LABEL_0' else 1, 'score': pred['score']} for pred in predictions]
print(mapped_predictions)
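As an aside, instead of remapping the generic LABEL_0/LABEL_1 strings after the fact, you can attach the mapping to the model config before saving, and the pipeline will then report meaningful labels directly. A minimal sketch, assuming the usual id2label/label2id config attributes and hypothetical label names:

```python
# Hypothetical label names for the Quora duplicate-question task.
id2label = {0: "not_duplicate", 1: "duplicate"}
label2id = {name: idx for idx, name in id2label.items()}

# Assumption: set these on the config before model.save_pretrained(...),
# e.g. model.config.id2label = id2label and model.config.label2id = label2id.
# A pipeline loaded from that checkpoint would then return
# {'label': 'duplicate', ...} instead of {'label': 'LABEL_1', ...}.
print(label2id)
```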

q1, q2 = "i like you", "i hate you"
result = sim_model(f"{q1} {q2}", padding=True, truncation=True)
print(result)