OSError: Model name 'gpt2' was not found in tokenizers model name list (gpt2,...)

I’m trying to replicate part of the transformers tutorial from fastai, and at one point it has you write:

from transformers import GPT2LMHeadModel, GPT2TokenizerFast
pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)

However, when I try to run it I get

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-31-b475580d46e5> in <module>
      1 from transformers import GPT2LMHeadModel, GPT2TokenizerFast
      2 pretrained_weights = 'gpt2'
----> 3 tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
      4 model = GPT2LMHeadModel.from_pretrained(pretrained_weights)

/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1589                     ", ".join(s3_models),
   1590                     pretrained_model_name_or_path,
-> 1591                     list(cls.vocab_files_names.values()),
   1592                 )
   1593             )

OSError: Model name 'gpt2' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt', 'tokenizer.json'] but couldn't find such vocabulary files at this path or url.

I find this confusing because 'gpt2' clearly is in the list. I encounter the same problem with any transformer model I choose, for instance distilgpt2 or models from other families. Moreover, if I comment out the tokenizer line, the model line also fails:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    372             if resolved_config_file is None:
--> 373                 raise EnvironmentError
    374             config_dict = cls._dict_from_json_file(resolved_config_file)

OSError: 

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
<ipython-input-32-a4869c5495d6> in <module>
      2 pretrained_weights = 'gpt2'
      3 #tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
----> 4 model = GPT2LMHeadModel.from_pretrained(pretrained_weights)

/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    874                 proxies=proxies,
    875                 local_files_only=local_files_only,
--> 876                 **kwargs,
    877             )
    878         else:

/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    327 
    328         """
--> 329         config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
    330         return cls.from_dict(config_dict, **kwargs)
    331 

/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    380                 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
    381             )
--> 382             raise EnvironmentError(msg)
    383 
    384         except json.JSONDecodeError:

OSError: Can't load config for 'gpt2'. Make sure that:

- 'gpt2' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'gpt2' is the correct path to a directory containing a config.json file

Everything is run in Kaggle notebooks, in case that’s important.
Thanks in advance!

Can you try to share a Google Colab notebook reproducing the error?

Hi @thomwolf, thanks for the recommendation. Actually I am quite confused, because in Colab (https://colab.research.google.com/drive/1gFStbsnfuo2CA9cUYc3TXqUM-wICYrfK?usp=sharing) it seems to work fine, but on Kaggle (https://www.kaggle.com/pabloamc/fastai-and-transformers, cell 9) it doesn’t.

Maybe @abhishek (who knows a “little bit” about Kaggle, haha) has an idea? :slight_smile:


@PabloAMC Please turn the internet on in Kaggle Kernels. I just ran the code example above and it works fine.
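
If you’re not sure whether the kernel actually has internet access, here is a quick check you can run first (a minimal sketch; huggingface.co is just a convenient endpoint to test, since from_pretrained needs to reach it on the first download):

import urllib.request

# from_pretrained('gpt2') has to download config/vocab/weights the first
# time, so if this request fails, the OSError above is almost certainly
# a connectivity problem rather than a transformers bug.
try:
    urllib.request.urlopen("https://huggingface.co", timeout=10)
    print("Internet access looks fine")
except Exception as e:
    print(f"No internet access from this kernel: {e}")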

Hi @abhishek. You were right. Sorry for the relatively simple issue.


Hi, I am new to Hugging Face and I am having the same issue running this model on my local Mac. I am using:
torch==1.7.0
nltk==3.4.5
colorama==0.4.4
transformers==3.4.0
torchtext==0.3.1
Could anyone please help me?
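
In case it helps while you debug: if the failure on your machine is also network-related, a common workaround (a minimal sketch; './gpt2-local' is just an example directory name, not anything the tutorial prescribes) is to download the model once on a machine with internet access, save it, and then load everything from the local directory:

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# One-time download on a machine with internet access
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer.save_pretrained('./gpt2-local')
model.save_pretrained('./gpt2-local')

# Later, fully offline: point from_pretrained at the saved directory
tokenizer = GPT2TokenizerFast.from_pretrained('./gpt2-local')
model = GPT2LMHeadModel.from_pretrained('./gpt2-local')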

I ran into the same error. In my case the cause was that the directory where I stored the ONNX export was named ./GPT2. I didn’t realize at first that the path lookup is case-insensitive on my filesystem, so 'gpt2' resolved to that local directory instead of the Hub model, and the directory didn’t contain the expected vocabulary files. Renaming the directory solved it.
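
To check whether the same thing is happening to you (a minimal sketch; the exact resolution order can vary between transformers versions, so treat this as a heuristic):

import os

# On case-insensitive filesystems (macOS, Windows), a local folder named
# GPT2 also matches the string 'gpt2', so from_pretrained may treat the
# identifier as a path and then fail to find vocab/config files in it.
pretrained_weights = 'gpt2'
if os.path.isdir(pretrained_weights):
    print(f"Warning: a local directory matches '{pretrained_weights}'; "
          "rename it, or run from a different working directory.")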

You should log in to your Kaggle account and, under Settings, verify your phone number. Then reopen your notebook and you will see the Internet switch in the side panel.