403 Client Error: Forbidden for url:

Hi guys, I have a problem with the paraphrase-distilroberta-base-v2 model.
When I load the model in my Python code:

model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v2')

this error appears:
403 Client Error: Forbidden for url: https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2/resolve/486c69f0e5395a86ef58883e1c18e475cc7b8aba/.gitattributes

The same thing happens if I use the Hosted inference API on the model's main page.
Any advice about this problem?


Hi @giovanni94s,

I guess there is a problem with the Hugging Face Hub today, as I can't even edit a model or dataset card in my profile. @julien-c, can you help us? Thank you.


I'm also facing this issue with a different model, both in production code and with the hosted API.
Cannot GET /sentence-transformers/multi-qa-MiniLM-L6-cos-v1/resolve/1de23253abdbc620a58070d408055cc9a8439375

For now, I copied the model and loaded it from the local path. I would like to know why it errored out as well.


@sasikiran can you show me how to copy the model and load it locally?

I'm having the same issue with 'sentence-transformers/all-mpnet-base-v2'. I saw there was an earlier thread on this too.

Weirdly, I'm only getting it when trying to run a batch job on SageMaker.

Whereas when I do it locally it works fine, e.g.
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v2')
and
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')

Checking the cloud log, I see:

This is an experimental beta features, which allows downloading model from the Hugging Face Hub on start up. It loads the model defined in the env var HF_MODEL_ID

This is immediately followed by the 403 error.
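
For reference, here's a minimal sketch of the kind of batch setup that hits this download path; the model ID, role, versions, and S3 paths below are placeholders, not the exact job config:

from sagemaker.huggingface import HuggingFaceModel

# the toolkit reads HF_MODEL_ID at container start-up and downloads the model from the Hub
hub = {
    "HF_MODEL_ID": "sentence-transformers/all-mpnet-base-v2",
    "HF_TASK": "feature-extraction",
}

huggingface_model = HuggingFaceModel(
    env=hub,                        # model is fetched from the Hub when the container starts
    role=role,                      # placeholder: your SageMaker execution role
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)

batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",   # placeholder
)
batch_job.transform(
    data="s3://my-bucket/input.jsonl",     # placeholder
    content_type="application/json",
    split_type="Line",
)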


@giovanni94s I was able to find the cached model located at:

~/.cache/torch/sentence_transformers.

I then copied the model directory to ./models and used:

model = SentenceTransformer('./models/sentence-transformers_multi-qa-MiniLM-L6-cos-v1')

to load it from the local path. I believe it should be possible to download the model if it isn't available in the cache directory.
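
In code, this is roughly what I did (a minimal sketch, assuming the default sentence-transformers cache location; adjust the directory name for your model):

import os
import shutil

from sentence_transformers import SentenceTransformer

# default cache directory used by sentence-transformers
cache_dir = os.path.expanduser("~/.cache/torch/sentence_transformers")

# copy the cached model directory to a local ./models folder
src = os.path.join(cache_dir, "sentence-transformers_multi-qa-MiniLM-L6-cos-v1")
dst = "./models/sentence-transformers_multi-qa-MiniLM-L6-cos-v1"
shutil.copytree(src, dst)

# load the model from the local path instead of the Hub
model = SentenceTransformer(dst)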


Weird. For me, it failed on an EC2 server. Then I loaded the model from a local path.


I think it's a problem on the server side. Let's wait for an answer.

@sasikiran @pierreguillou the model now seems to download correctly. Can you confirm from your side?

Yes, it's working now on the hosted API. I believe it would work on EC2 as well, but for now I'll continue loading it from the local path.


I can edit model and dataset cards, but I cannot create a new one from the web inside my Hugging Face profile: I still get 403 Forbidden.

PS: I'm doing this inside the HF account of an organization where I'm an admin member.


I still have the same issue with SageMaker, no matter which model I call, it seems :(

So you run it from SageMaker Studio and receive the error, but if you start a batch job from your local environment, it works?

I get the same log message, but I start it from the local environment.

Thanks,
Kate

Hey everyone, thanks for reporting the issue. It's now fixed, so you should be able to download the pretrained models as normal :)


Great! It works (I can create a card for a dataset on the HF Hub). Thank you @lewtun.


I'm kicking off the batch job from a SageMaker notebook instance, but the batch job runs in SageMaker. It still fails with 403. I tried it from a different AWS account with the same results. More detail in this thread: Error 403 when downloading model for Sagemaker batch inference - #6 by philschmid
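
One workaround that avoids the Hub download at start-up entirely (a sketch with placeholder bucket names, not the exact config from that thread): save the model somewhere that can reach the Hub, package it as model.tar.gz, upload it to S3, and pass model_data instead of HF_MODEL_ID:

from transformers import AutoModel, AutoTokenizer
from sagemaker.huggingface import HuggingFaceModel

# 1. download and save the model where the Hub is reachable (e.g. the notebook instance)
model = AutoModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
model.save_pretrained("model")
tokenizer.save_pretrained("model")

# 2. package and upload it (shell):
#    tar -czf model.tar.gz -C model .
#    aws s3 cp model.tar.gz s3://my-bucket/model.tar.gz   # placeholder bucket

# 3. point the batch job at the S3 artifact so nothing is fetched from the Hub
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # placeholder
    env={"HF_TASK": "feature-extraction"},
    role=role,                                 # placeholder: your execution role
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)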

I'm still getting this error. Any advice?