RobertaForQuestionAnswering

I am a newbie to huggingface/transformers…

I followed the example at https://huggingface.co/transformers/model_doc/roberta.html#robertaforquestionanswering to try out this model, but I get errors.

from transformers import RobertaTokenizer, RobertaForQuestionAnswering
import torch

# Load the pretrained tokenizer and QA model
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForQuestionAnswering.from_pretrained('roberta-base')

# Tokenize an example sentence and supply dummy answer-span labels
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

# Forward pass; passing the label positions makes the model also return a loss
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss, start_scores, end_scores = outputs[:3]
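
For what it's worth, once the model does load, the start/end scores can be decoded back into an answer span. A minimal sketch, using the `start_scores`/`end_scores` unpacked above; note that 'roberta-base' has no fine-tuned QA head, so the predicted span is essentially random until the model is fine-tuned on a QA dataset:

# Pick the most likely start and end token positions
answer_start = torch.argmax(start_scores)
answer_end = torch.argmax(end_scores) + 1

# Decode those token ids back into a string
answer_ids = inputs["input_ids"][0][answer_start:answer_end]
print(tokenizer.decode(answer_ids.tolist()))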

The tokenizer loads fine; the error occurs when loading the QA model.

Traceback (most recent call last):
  File "/gstore/home/madabhuc/.local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 655, in from_pretrained
    raise EnvironmentError
OSError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 7, in <module>
    model = RobertaForQuestionAnswering.from_pretrained('roberta-base')
  File "/gstore/home/madabhuc/.local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
    raise EnvironmentError(msg)
OSError: Can't load weights for 'roberta-base'. Make sure that:

  • 'roberta-base' is a correct model identifier listed on 'https://huggingface.co/models'

  • or 'roberta-base' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.

Looks like an environment issue. I was able to run the code on another machine.
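
For anyone else who lands here: since 'roberta-base' is a valid identifier, this OSError usually means the download itself failed on that machine (no outbound network access, a proxy, or a corrupted cache). A sketch of what I'd check, assuming one of those causes; `force_download` is a real `from_pretrained` argument, and '/path/to/roberta-base' is a placeholder path:

import requests
from transformers import RobertaForQuestionAnswering

# 1. Can this machine reach the model host at all?
requests.head("https://huggingface.co", timeout=10).raise_for_status()

# 2. Re-download, ignoring any partially downloaded cached files
model = RobertaForQuestionAnswering.from_pretrained('roberta-base', force_download=True)

# 3. Or download on a machine with access, copy the directory over,
#    and load from the local path instead of the hub identifier:
# model.save_pretrained('/path/to/roberta-base')  # run on the working machine
# model = RobertaForQuestionAnswering.from_pretrained('/path/to/roberta-base')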
