Whether I try the Inference API or run the code from "Use with Transformers", I get the following long error:
"Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'remi/bertabs-finetuned-extractive-abstractive-summarization'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'remi/bertabs-finetuned-extractive-abstractive-summarization' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer."
This doesn't just apply to this specific model; it happens with many models I have tried to run (see the minimal snippet below). Is there any solution for this?
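For reference, this is roughly the call that fails. It is a minimal sketch, not the exact snippet from the model card; the repo name is the one from my error above:

```python
from transformers import AutoTokenizer

# This call raises the error quoted above; the repo appears to be
# missing its tokenizer files on the Hub.
tokenizer = AutoTokenizer.from_pretrained(
    "remi/bertabs-finetuned-extractive-abstractive-summarization"
)
```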
@EssamWisam How were you able to solve this problem? I am facing the same issue. The code works on one of my GPUs, but when I try to run it on an Azure GPU I get this error.
I ran into the error reported above while following the Causal language modeling guide. I tried ONLY step 3 in my notebook and it worked: I can now load the model and run inference on it, both from the Hub and locally. That means the tokenizer was missing; trainer.push_to_hub() had not generated/pushed it to the Hub. Thank you!
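In case it helps someone else, pushing the tokenizer explicitly looks roughly like this. It is a minimal sketch: distilgpt2 is the checkpoint used in the guide and only an assumption here, and the repo name is a placeholder for your own Hub repo:

```python
from transformers import AutoTokenizer

# Load the tokenizer you fine-tuned with (distilgpt2 is assumed here,
# matching the Causal language modeling guide; swap in your own checkpoint).
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

# Push it to the same Hub repo as the model so from_pretrained can find it.
# "your-username/your-model-repo" is a placeholder.
tokenizer.push_to_hub("your-username/your-model-repo")
```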