Hello,
This might be slightly off-topic, but I decided to post a question here in case anything helpful comes out of it.
I have a block of code that makes use of Hugging Face Transformers models.
I can execute this code on Amazon Web Services, so I don't think there are any syntax/semantic errors in it.
However, when I run the same code on my university server, I keep getting the following error:
Traceback (most recent call last):
File "/home/h56cho/projects/def-schonlau/h56cho/GPT2.py", line 505, in <module>
main_function('/home/h56cho/projects/def-schonlau/h56cho/G1G2.txt','/home/h56cho/projects/def-schonlau/h56cho/G1G2_answer_num.txt', num_iter)
File "/home/h56cho/projects/def-schonlau/h56cho/GPT2.py", line 439, in main_function
gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1623, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/file_utils.py", line 948, in cached_path
output_path = get_from_cache(
File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/file_utils.py", line 1124, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
I highly doubt that the error is due to my internet connection, so it may have to do with the "cached path". Could anyone on your team suggest how to solve this issue, or explain why this error is popping up?
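In case it helps with debugging, here is a small stdlib-only sketch of how I tried to work out where the cache should live. The fallback path is an assumption on my part (it matches what I believe older transformers 3.x versions use), so it may differ on other versions:

```python
import os

def guess_transformers_cache() -> str:
    """Best guess at the directory transformers uses for cached files.

    Assumption: TRANSFORMERS_CACHE (or HF_HOME) overrides the default;
    the fallback below is what I believe transformers 3.x uses, so it
    may not match other versions exactly.
    """
    env = os.environ.get("TRANSFORMERS_CACHE")
    if env:
        return env
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return os.path.join(hf_home, "transformers")
    return os.path.join(os.path.expanduser("~"), ".cache", "torch", "transformers")

cache_dir = guess_transformers_cache()
print(cache_dir, "exists:", os.path.isdir(cache_dir))
```

On the university server this directory might be empty (or wiped between jobs on /localscratch), which would explain why the library falls back to downloading and then fails without internet access.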
Thank you,