I am trying to fine-tune the Mistral model, but unfortunately it gives an error. I use this code:
from transformers import AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, token="my_token")
The error:
UnexpectedStatusException: Error for Training job huggingface-qlora-jailbreaks-mistralai-2024-08-22-14-23-02-625: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
ExitCode 1
ErrorMessage "raise EnvironmentError(
OSError: mistralai/Mistral-7B-v0.1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token
or log in with huggingface-cli login
and pass use_auth_token=True
."
Command "/opt/conda/bin/python3.10 run_qlora.py --bf16 True --dataset_path /opt/ml/input/data/training --gradient_accumulation_steps 2 --gradient_checkpointing True --learning_rate 0.0002 --logging_steps 10 --lr_scheduler_type constant --max_grad_norm 0.3 --merge_adapters True --model_id mistralai/Mistral-7B-v0.1 --num_train_epochs 1 --output_dir /tmp/run --per_device_train_batch_size 6 --save_strategy epoch --tf32 True --use_flash_attn True --warmup_ratio 0.03", exit code: 1
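For context: the OSError is raised inside run_qlora.py on the SageMaker training instance, not in the local notebook, so the token passed to AutoTokenizer locally never reaches the job. A minimal sketch of what I believe would route the token through (assuming the SageMaker HuggingFace estimator; the token value is a placeholder, and the estimator call is shown only in comments since it needs AWS credentials):

```python
# Hyperparameters mirroring the failing job's command line (subset shown).
hyperparameters = {
    "model_id": "mistralai/Mistral-7B-v0.1",
    "dataset_path": "/opt/ml/input/data/training",
    "num_train_epochs": 1,
}

# Hugging Face libraries read HF_TOKEN from the environment (older versions
# used HUGGING_FACE_HUB_TOKEN), so exporting it into the job should let
# from_pretrained() authenticate against the gated repo on the instance.
environment = {"HF_TOKEN": "my_token"}  # placeholder token from the question

# These dicts would then be passed to the estimator, e.g.:
# from sagemaker.huggingface import HuggingFace
# estimator = HuggingFace(
#     entry_point="run_qlora.py",
#     hyperparameters=hyperparameters,
#     environment=environment,
#     ...,  # role, instance_type, framework versions, etc.
# )
```

Is passing the token via the estimator's environment the right approach here, or is there a different mechanism I am missing?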