Problem while uploading a file

I am trying to upload our BERT model for the Stack Overflow domain. I used the transformers-cli upload command, and I can access and view the uploaded files.
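
For reference, the commands I ran were roughly the following (a sketch; the exact flags depend on the transformers version):

transformers-cli login
transformers-cli upload ./path/to/my-bert-model/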

However, on the https://huggingface.co/ web page I always get an error that config.json is not found.
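
A minimal way to check this from the Python side (the model id below is a placeholder; substitute the actual repo name):

from transformers import AutoConfig

# Placeholder model id for illustration; replace with your own <username>/<model> repo.
config = AutoConfig.from_pretrained("your-username/your-bert-model")
print(config)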


Any help is really appreciated. Thanks.


Maybe @julien-c has a solution.

I’m having the same issue.
Though I have uploaded config.json, I am getting the same error.

These are the uploaded files, per the log from the upload command:

Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/tokenizer_config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/special_tokens_map.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/modelcard.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/README.md
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/merges.txt
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/pytorch_model.bin
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/bewgle/bart-large-mnli-bewgle/vocab.json

Could it be that /bert/ should be /bart/ in the S3 URL?

@swarajraibagi, I am not sure if there is any way to change the URLs. How did you do that?

@julien-c, it would be really great if you could provide some solution or insight into this problem. I have been stuck for days :frowning:

@jeniya I didn’t change the URL; I was wondering if that is contributing to the bug.


@julien-c, worth noting: the model seems to have uploaded correctly, since the code below runs without any error.

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('bewgle/bart-large-mnli-bewgle')

Looking into it with @lysandre, should be fixed tomorrow.

Should be fixed now.


It’s working as expected now. Thanks!

I’m not sure if it’s related, but it seems like the model does not load in exBERT, for example this one.


It’s working now! Thanks :smiley:

I’m getting the same error with a model that I just uploaded. My model files are here. If I use the same model files that I have saved locally, the tokenizer and the model load just fine. But when I try to download the model like this:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("a1noack/bart-large-gigaword")

I get the following error:

OSError: Can't load config for 'a1noack/bart-large-gigaword'. Make sure that:
- 'a1noack/bart-large-gigaword' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'a1noack/bart-large-gigaword' is the correct path to a directory containing a config.json file
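
For what it’s worth, one can also check whether config.json is reachable on the hub directly (a sketch using the hub’s resolve-URL scheme; a 200 status means the file is there):

import requests

# Sanity check: fetch config.json through the hub's resolve URL.
url = "https://huggingface.co/a1noack/bart-large-gigaword/resolve/main/config.json"
print(requests.get(url).status_code)  # 200 means config.json is reachable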

What was actually done to fix the problem?

Do you have any suggestions @julien-c?

Yes, exBERT model loading was never automatic; it’s a manual process that needs to be done by the exBERT authors.

@a1noack which transformers version are you running? You should be on >=3.5.0 for the new model hub system to work.
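
A quick way to check the installed version (just the standard version attribute, nothing hub-specific):

import transformers

print(transformers.__version__)  # should be >= 3.5.0 for the new model hub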

Can you try this and let me know if it solves your problem? Thanks!

1 Like

So the download does work now, at least with transformers 4.1.1. Thank you.
I guess it is the case, though, that models uploaded to the model hub using the new system cannot be downloaded with versions of transformers <3.5.0.
Does this mean there is no way to make the downloading of new models backwards compatible with older versions of transformers? It would be great if I could download all models from the hub, including those uploaded with the new system, using from_pretrained with transformers <3.5.0.

Yes, you are correct. We were periodically backporting new models to the old system (the S3 bucket), but it was costly and tedious, so we decided to stop.

You can still git clone your new model and load it in transformers <3.5.0 using from_pretrained. Is this a potential workaround for you?
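
A minimal sketch of that workaround (assuming git and git-lfs are available, since the weight files are stored via LFS):

# Clone the repo first, e.g.:
#   git lfs install
#   git clone https://huggingface.co/a1noack/bart-large-gigaword
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load from the local clone instead of the model id.
tokenizer = AutoTokenizer.from_pretrained("./bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("./bart-large-gigaword")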