mistralai/Mistral-7B-v0.1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'

I am trying to fine-tune the Mistral model, but unfortunately it gives an error. I use this code:

from transformers import AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, token="my_token")

the error:

UnexpectedStatusException: Error for Training job huggingface-qlora-jailbreaks-mistralai–2024-08-22-14-23-02-625: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
ExitCode 1
ErrorMessage "raise EnvironmentError(
OSError: mistralai/Mistral-7B-v0.1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True."
Command "/opt/conda/bin/python3.10 run_qlora.py --bf16 True --dataset_path /opt/ml/input/data/training --gradient_accumulation_steps 2 --gradient_checkpointing True --learning_rate 0.0002 --logging_steps 10 --lr_scheduler_type constant --max_grad_norm 0.3 --merge_adapters True --model_id mistralai/Mistral-7B-v0.1 --num_train_epochs 1 --output_dir /tmp/run --per_device_train_batch_size 6 --save_strategy epoch --tf32 True --use_flash_attn True --warmup_ratio 0.03", exit code: 1

Some models on Hugging Face are gated, which means you must be granted access before you can use them.

If you have already been granted access to the model, make sure you pass your Hugging Face access token when running the code. Otherwise, visit the model page and request access. Once access has been granted, you'll see a notification like the one below:

“Gated model: You have been granted access to this model.”
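Once access is granted, the script still needs a way to pick up your token at runtime; for a SageMaker training job like the one in the traceback, the token also has to reach the training container (e.g. via the estimator's environment variables), since logging in on your laptop does not authenticate the remote job. Below is a minimal sketch of one way to resolve the token: the helper name is illustrative (not part of any official API), and it assumes the token is exported in the `HF_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables, which the Hugging Face tooling commonly recognizes:

```python
import os

def resolve_hf_token(explicit_token=None):
    """Return a Hugging Face access token: prefer an explicitly passed
    token, then fall back to common environment variables."""
    return (explicit_token
            or os.environ.get("HF_TOKEN")
            or os.environ.get("HUGGING_FACE_HUB_TOKEN"))

# Usage (requires network access and a token that has been granted
# access to the gated repo):
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(
#     "mistralai/Mistral-7B-v0.1", token=resolve_hf_token())
```

Alternatively, running `huggingface-cli login` once on the machine that downloads the model caches the token locally, so `from_pretrained` can find it without an explicit `token` argument.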