I am trying to upload our BERT model for the Stack Overflow domain. I used the transformers-cli upload command, and I can access the uploaded files and see them.
However, on the https://huggingface.co/ web page I always get an error that config.json is not found. The output of the upload was:
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/tokenizer_config.json
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/special_tokens_map.json
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/config.json
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/modelcard.json
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/README.md
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/merges.txt
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/pytorch_model.bin
Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/vocab.json
Could it be that the /bert/ in the S3 URL should be /bart/?
@julien-c worth noting
The model seems to have uploaded correctly, since the code below runs without any error:
In [1]: from transformers import AutoModelForSequenceClassification
In [2]: model = AutoModelForSequenceClassification.from_pretrained('bewgle/bart-large-mnli-bewgle')
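Since the web page specifically complains about config.json, it might also be worth checking that the tokenizer files resolve from the hub as well; a quick sketch along the same lines (same model id, untested):

In [3]: from transformers import AutoTokenizer

In [4]: tokenizer = AutoTokenizer.from_pretrained('bewgle/bart-large-mnli-bewgle')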
I’m getting this same error with a model that I just uploaded. My model files are here. If I use the same model files that I have saved locally, the tokenizer and the model load just fine. But when I try to download the model like this:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("a1noack/bart-large-gigaword")
I get the following error:
OSError: Can't load config for 'a1noack/bart-large-gigaword'. Make sure that:
- 'a1noack/bart-large-gigaword' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'a1noack/bart-large-gigaword' is the correct path to a directory containing a config.json file
So the download does work now, at least with transformers 4.1.1. Thank you.
I guess it’s the case, though, that models uploaded to the model hub using the new hub system cannot be downloaded with versions of transformers <3.5.0.
Does this mean there is no way to make downloading these new models backwards compatible with older versions of transformers? It would be great if I could download all models, including those uploaded using the new hub system, from the hub with from_pretrained in transformers <3.5.0.
Yes, you are correct. We were periodically backporting new models to the old system (the S3 bucket), but it is costly and tedious, so we decided to stop.
You can still git clone your new model and load it in transformers <3.5.0 with from_pretrained; is this a potential workaround for you?
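A minimal sketch of that workaround, assuming git-lfs is installed so the clone pulls the weight file, and reusing a1noack/bart-large-gigaword from above as the example repo: after git lfs install and git clone https://huggingface.co/a1noack/bart-large-gigaword, point from_pretrained at the local clone instead of the model id:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load from the local clone rather than the hub identifier, so no hub
# lookup is needed and older transformers versions (<3.5.0) can load it.
tokenizer = AutoTokenizer.from_pretrained("./bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("./bart-large-gigaword")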