Access issues for gated repos

Hello Folks,

I am trying to use Mistral for a use case. On the Hugging Face Mistral page I raised a request for access to the gated repo, and it now appears on my gated repos page.

But the moment I try to access it on my local machine, it gives the error below:

OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2. 401 Client Error. (Request ID: Root=1-66c6c127-582a274a30b062992d60c0e2;69f66232-83a8-449a-b7ba-b89f7176a5cd) Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json. Access to model mistralai/Mistral-7B-Instruct-v0.2 is restricted. You must be authenticated to access it.
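For reference, the failure can be reproduced outside transformers with huggingface_hub (a minimal sketch; "hf_xxx" is a placeholder token):

from huggingface_hub import HfApi

# If the token lacks access to the gated repo, this raises the same
# 401 / GatedRepoError as above. "hf_xxx" is a placeholder, not a real token.
api = HfApi(token="hf_xxx")
print(api.model_info("mistralai/Mistral-7B-Instruct-v0.2").id)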

Can someone please help me understand why this can happen?

hi @sanchitamore

Did you create an access token? Did you add permission for the relevant repository?

You can check from your Hugging Face token settings page (https://huggingface.co/settings/tokens).
(Edit permissions → Repositories permissions)

Don’t forget to add the token parameter when you call the function.
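A minimal sketch (the token value is a placeholder; the task is 'text-generation' because Mistral is a causal LM):

from transformers import pipeline

# Pass the token straight to pipeline(); replace "your_hf_token" with your own.
pipe = pipeline('text-generation', model='mistralai/Mistral-7B-Instruct-v0.2', token='your_hf_token')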

Or log in once with huggingface-cli login.
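The programmatic equivalent, if you prefer to stay in Python:

from huggingface_hub import login

# Prompts for a token and caches it locally, same as `huggingface-cli login`.
login()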

Thanks @mahmutc.

I did add an access token, and the permissions are set for the repositories.
The access token currently has all permissions (read & write).

I am able to log in successfully through huggingface-cli login as well.

On my gated repositories page it shows that I have access to this repo.

Can you please specify what the exact permissions should be?

Here is my code snippet:

from transformers import pipeline

class LLM:
    def __init__(self, model_name, auth_token=None):
        # Mistral is a causal LM, so the task is 'text-generation'.
        self.model = pipeline('text-generation', model=model_name, use_auth_token=auth_token)

    def predict(self, prompt, **kwargs):
        return self.model(text_inputs=prompt, **kwargs)[0]["generated_text"]

model = LLM(model_name="mistralai/Mistral-7B-Instruct-v0.3", auth_token="")

hi @sanchitamore
This should work. In fact, if you log in, you don’t even need the token parameter.

from transformers import pipeline

class LLM:
    def __init__(self, model_name, auth_token=None):
        # With a cached login (huggingface-cli login), no token argument is needed.
        self.model = pipeline('text-generation', model=model_name)

    def predict(self, prompt, **kwargs):
        return self.model(text_inputs=prompt, **kwargs)[0]["generated_text"]

model = LLM(model_name="mistralai/Mistral-7B-Instruct-v0.3")

Can you please run huggingface-cli whoami and double-check the repo permissions from your Hugging Face token settings (Edit permissions → Repositories permissions)?

You should see the repo name in the list.
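You can also verify which account the cached token belongs to from Python (a small sketch using huggingface_hub):

from huggingface_hub import whoami

# Raises if no valid token is stored; otherwise returns the account info.
print(whoami()["name"])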

Whether you are logged in or not, this will work:

from transformers import pipeline

class LLM:
    def __init__(self, model_name, token=None):
        # Passing the token explicitly works whether or not you are logged in.
        self.model = pipeline('text-generation', model=model_name, token=token)

    def predict(self, prompt, **kwargs):
        return self.model(text_inputs=prompt, **kwargs)[0]["generated_text"]

model = LLM(model_name="mistralai/Mistral-7B-Instruct-v0.3", token="your_token_should_be_here")
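For completeness, calling it might look like this (the prompt and generation kwargs are illustrative):

print(model.predict("Explain gated repos in one sentence.", max_new_tokens=64))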