After autotrain pushes the files to the repo, there is no config file

Hi there,
I used the command below, and in Colab I had the project folder, which is the same as the Hugging Face repo I pushed to. But when I tried to use it, I got an error.

!autotrain llm --train --project_name my-llm-test --model meta-llama/Llama-2-7b-hf --data_path test --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft --push_to_hub --repo_id youshikyou/ly_ai

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("youshikyou/ly_ai", use_auth_token=True)

Would anyone know how to fix it?

Hey there, were you able to solve this problem?

Hi, unfortunately I didn’t solve it.

This is because you are fine-tuning with PEFT and have trained only the adapter weights, not the entire model. You need to merge the adapter weights back into the base model using peft to get the config:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_name = 'mistralai/Mistral-7B-v0.1'

# Load the base model the adapter was trained on
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    return_dict=True,
    torch_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the adapter weights on top of the base model, then merge them in
ft_model = PeftModel.from_pretrained(base_model, "mistral-finetune_code/checkpoint-75")
model2 = ft_model.merge_and_unload()

# Save the merged model; this writes config.json alongside the weights
model2.save_pretrained('.')
tokenizer.save_pretrained('.')
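
Once merged and saved, the output directory contains the full model weights plus config.json, so the merged checkpoint loads like any regular model. A minimal sketch of the check, assuming the merged files were saved to the current directory as above:

from transformers import AutoModelForCausalLM, AutoTokenizer

# The merged model no longer needs peft; a plain from_pretrained works
model = AutoModelForCausalLM.from_pretrained('.')
tokenizer = AutoTokenizer.from_pretrained('.')

If you then push that merged folder to the Hub (for example with model.push_to_hub(...)), loading from the repo should find the config and succeed.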
