I’m getting this same error with a model I just uploaded. My model files are here. If I use the same model files that I have saved locally, the tokenizer and the model both load just fine. But when I try to download the model from the Hub like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("a1noack/bart-large-gigaword")
```
I get the following error:
```
OSError: Can't load config for 'a1noack/bart-large-gigaword'. Make sure that:
- 'a1noack/bart-large-gigaword' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'a1noack/bart-large-gigaword' is the correct path to a directory containing a config.json file
```
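For reference, the local loading that works for me looks roughly like this (the `./bart-large-gigaword` path is just a placeholder for wherever my local copy of the files lives):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Loading from a local directory works fine; "./bart-large-gigaword" is a
# placeholder for the folder containing config.json and the model weights.
tokenizer = AutoTokenizer.from_pretrained("./bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("./bart-large-gigaword")
```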
What did you actually do to fix this problem?