Getting KeyError: 'mistral' when using autotrain

When I try to fine-tune a Mistral model with autotrain using:

 autotrain llm --train --project_name "medai_ft" --model TheBloke/Mistral-7B-OpenOrca-GGUF --data_path medalpaca/medical_meadow_medqa --text_column text --use_peft --use_int4 --learning_rate 2e-5 --train_batch_size 4 --num_train 5 --model_max_length 4096

I get this error:

> ERROR   train has failed due to an exception:
> ERROR   Traceback (most recent call last):
  File "/home/divanshu/localGPT/venv/lib/python3.11/site-packages/autotrain/utils.py", line 280, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/divanshu/localGPT/venv/lib/python3.11/site-packages/autotrain/trainers/clm/__main__.py", line 78, in train
    tokenizer = AutoTokenizer.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/divanshu/localGPT/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/divanshu/localGPT/venv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1039, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/divanshu/localGPT/venv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 734, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'

Can anyone help me?


I also hit this issue. I solved it by updating my transformers package to the latest version (`pip install --upgrade transformers`).
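To add some context on why the upgrade fixes it: `KeyError: 'mistral'` comes from `CONFIG_MAPPING` in transformers, which only contains model types the installed release knows about. My understanding (not stated in this thread) is that the Mistral architecture was added in transformers v4.34.0, so any older install raises exactly this error. A minimal sketch of that version check, with the 4.34.0 threshold as an assumption:

```python
# Sketch: KeyError: 'mistral' means the installed transformers predates Mistral
# support. Assumption: Mistral support landed in transformers v4.34.0.
MISTRAL_MIN_VERSION = (4, 34)  # (major, minor) of the assumed first release

def supports_mistral(installed: str) -> bool:
    """Return True if this transformers version should know model_type 'mistral'."""
    major, minor = (int(part) for part in installed.split(".")[:2])
    return (major, minor) >= MISTRAL_MIN_VERSION

print(supports_mistral("4.33.2"))  # older release that raises the KeyError -> False
print(supports_mistral("4.35.0"))  # newer release with Mistral support -> True
```

In practice you can just check your installed version with `pip show transformers` and upgrade if it is older than the cutoff.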
