I had the same error message as @aswincandra, but I was just loading a GPT-Neo model:
```
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1285, in from_pretrained
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'EleutherAI/gpt-neo-2.7B' at '<userDir>/.cache/huggingface/transformers/0839a11efa893f2a554f8f540f904b0db0e5320a2b1612eb02c3fd25471c189a.a144c17634fa6a7823e398888396dd623e204dce9e33c3175afabfbf24bd8f56'
If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
I tried setting `from_tf` to `True`, and got:
```
404 Client Error: Not Found for url: https://huggingface.co/EleutherAI/gpt-neo-2.7B/resolve/main/tf_model.h5
Traceback (most recent call last):
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1253, in from_pretrained
    resolved_archive_file = cached_path(
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1370, in cached_path
    output_path = get_from_cache(
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1541, in get_from_cache
    r.raise_for_status()
  File "<userDir>/.local/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/EleutherAI/gpt-neo-2.7B/resolve/main/tf_model.h5

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 384, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "<userDir>/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1270, in from_pretrained
    raise EnvironmentError(msg)
OSError: Can't load weights for 'EleutherAI/gpt-neo-2.7B'. Make sure that:
- 'EleutherAI/gpt-neo-2.7B' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'EleutherAI/gpt-neo-2.7B' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
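For what it's worth, the 404 itself looks expected: per the error above there is no `tf_model.h5` in that repo, so `from_tf=True` has nothing to fetch, and the original `OSError` is more likely a truncated or corrupted cached download. Before deleting the cache or retrying with `force_download=True`, a cheap sanity check is to compare the cached blob's size against roughly what the full checkpoint should be. A minimal sketch — the helper name and the ~10 GB figure for the fp32 gpt-neo-2.7B weights are my assumptions, not anything from the traceback:

```python
import os

def looks_truncated(path: str, expected_bytes: int, tolerance: float = 0.05) -> bool:
    """Return True if the file on disk is noticeably smaller than expected,
    which usually means the download was interrupted partway through."""
    return os.path.getsize(path) < expected_bytes * (1 - tolerance)

# Hypothetical usage against the cached blob named in the error message;
# ~10 GB is only an approximation for the fp32 gpt-neo-2.7B checkpoint:
# cache_blob = os.path.expanduser(
#     "~/.cache/huggingface/transformers/"
#     "0839a11efa893f2a554f8f540f904b0db0e5320a2b1612eb02c3fd25471c189a"
#     ".a144c17634fa6a7823e398888396dd623e204dce9e33c3175afabfbf24bd8f56"
# )
# looks_truncated(cache_blob, expected_bytes=10_000_000_000)
```

If the blob is clearly undersized, deleting it and re-running `from_pretrained` (or passing `force_download=True`) should re-fetch the weights.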
- torch 1.9.0
- transformers 4.9.2
- Python 3.8.0
I’m running this over SSH, if that matters, so I’m not sure how the remote machine is configured. I can’t run it on my own machine because I don’t have enough RAM.
I tried to run:

```python
state_dict = torch.load(path_to_pytorch_bin_file, map_location="cpu")
```

but I’m not sure if this applies to me, since I didn’t train my own model.
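As far as I know, that `torch.load` check isn’t specific to self-trained models — it works on any `pytorch_model.bin`, including a cached hub download, and will raise if the file is corrupt. If loading a ~10 GB checkpoint just to test it is impractical, checkpoints written by the newer `torch.save` serialization (the default since torch 1.6) are zip archives, so a stdlib structural check catches most truncated downloads. A sketch, with a function name of my own choosing:

```python
import zipfile

def checkpoint_zip_ok(path: str) -> bool:
    """Checkpoints saved with torch's newer serialization format (the
    default since torch 1.6) are zip archives; a truncated or corrupted
    download typically fails this cheap structural check."""
    return zipfile.is_zipfile(path)
```

A `False` here (on a checkpoint known to use the zip format) would point to a bad download rather than anything wrong with the code calling `from_pretrained`.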