How to get answer with RobertaForQuestionAnswering

Dear list,

What I would like to do is pretrain a model and fine-tune it for Q&A on the SQuAD dataset. I pretrained a RoBERTa model after reading the blog post on the Hugging Face site.

To turn it into a Q&A model, I fine-tuned it following this document: Fine-tuning with custom datasets — transformers 4.11.3 documentation. (I changed DistilBERT to RoBERTa so the fine-tuning would match my pretrained model.)

As a result, I have a fine-tuned model and want to test it. I loaded the model as described in this document: RoBERTa — transformers 4.11.3 documentation.

However, the output doesn’t contain the ‘answer’ I am looking for. Could you tell me how I can get the answer? I’ve searched the code but couldn’t find it.

Thank you in advance.

Best regards,
Seungho

Let’s take an example with an already fine-tuned RoBERTa model from the Hub.

import torch
from transformers import RobertaTokenizer, RobertaForQuestionAnswering

model_name = "deepset/roberta-base-squad2"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForQuestionAnswering.from_pretrained(model_name)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")

# forward pass (no gradients needed at inference time)
with torch.no_grad():
    outputs = model(**encoding)

# most likely start and end token positions of the answer
predicted_start_idx = outputs.start_logits.argmax(-1).item()
predicted_end_idx = outputs.end_logits.argmax(-1).item()

# decode
predicted_answer = tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
print(predicted_answer)

xxxForQuestionAnswering models output start_logits and end_logits, indicating which token the model thinks starts the answer and which token ends it. Both are of shape (batch_size, sequence_length). To get the position with the highest score (logit), we take an argmax over the last dimension. Next, we use the tokenizer’s decode method to turn the predicted token IDs back into text.
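To make the shapes and the argmax-and-slice step concrete without downloading a model, here is a minimal sketch with made-up logits. The token IDs and logit values are invented for illustration; the tensors have the same (batch_size, sequence_length) shape as start_logits and end_logits in the real outputs.

```python
import torch

# pretend tokenized input: 8 tokens (batch_size=1, sequence_length=8);
# the IDs are invented for this example
input_ids = torch.tensor([[0, 713, 16, 10, 1296, 13931, 2, 2]])

# made-up logits of shape (batch_size, sequence_length),
# standing in for outputs.start_logits / outputs.end_logits
start_logits = torch.tensor([[0.1, 0.2, 0.1, 5.0, 0.3, 0.2, 0.1, 0.1]])
end_logits = torch.tensor([[0.1, 0.1, 0.2, 0.3, 0.2, 6.0, 0.1, 0.1]])

# argmax over the last (sequence) dimension gives the predicted positions
start_idx = start_logits.argmax(-1).item()  # 3
end_idx = end_logits.argmax(-1).item()      # 5

# the answer is the inclusive token span [start_idx, end_idx];
# in the real example this slice is passed to tokenizer.decode
answer_ids = input_ids.squeeze()[start_idx : end_idx + 1]
print(start_idx, end_idx, answer_ids.tolist())  # 3 5 [10, 1296, 13931]
```

If the model predicts an end position before the start position (which can happen, since the two heads are independent), the slice comes out empty; production code usually searches for the best valid (start, end) pair instead of taking two independent argmaxes.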