I have used Spaces → Docker → AutoTrain to fine-tune mistralai/Mistral-7B-Instruct-v0.1 with a CSV file containing a text column, formatted as follows:
[INST]Instruction[/INST]Expected reply
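For clarity, each row in the text column can be built with a small helper like this (a hypothetical function for illustration, not part of AutoTrain):

```python
def format_example(instruction: str, reply: str) -> str:
    # Mistral-Instruct style training row: the instruction wrapped in
    # [INST]...[/INST] tags, followed immediately by the expected reply.
    return f"[INST]{instruction}[/INST]{reply}"

print(format_example("Summarize this.", "Here is a summary."))
```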
After training succeeds, I use Spaces → Docker → ChatUI to launch a UI for inference on my fine-tuned model.
The space keeps erroring out because I don’t have a config.json file.
Here’s a link to my model: ramon1992/Mistral-7B-JB-Instruct-3-0
Is there someone who could use my model and create a ChatUI space with it just to see if it works?
When running the below script:
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ramon1992/Mistral-7B-JB-Instruct-3-0")
model = AutoModelForCausalLM.from_pretrained("ramon1992/Mistral-7B-JB-Instruct-3-0")
input_ids = tokenizer.encode("Your input text here", return_tensors="pt")
output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
I get the following error:
ramon1992/Mistral-7B-JB-Instruct-3-0 does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
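That error means the repo contains no full model weights. One common cause, assuming AutoTrain saved a PEFT/LoRA adapter (an adapter_config.json plus adapter weights) rather than a full checkpoint, is that AutoModelForCausalLM cannot load it directly. Check the repo's file list first; if that is the case, a sketch like the following (the repo id and output directory are just placeholders) would merge the adapter into the base model and write out the config.json and full weights that ChatUI expects:

```python
def load_and_merge(repo_id="ramon1992/Mistral-7B-JB-Instruct-3-0",
                   out_dir="merged-model"):
    # Imported lazily so the function can be defined without peft installed.
    from peft import AutoPeftModelForCausalLM  # pip install peft
    from transformers import AutoTokenizer

    # AutoPeftModelForCausalLM reads adapter_config.json, downloads the base
    # model it names (here presumably mistralai/Mistral-7B-Instruct-v0.1),
    # and applies the adapter on top of it.
    model = AutoPeftModelForCausalLM.from_pretrained(repo_id)

    # merge_and_unload() bakes the adapter into the base weights; saving the
    # result writes config.json and full weight shards to out_dir, which can
    # then be uploaded as a standalone model repo for ChatUI.
    merged = model.merge_and_unload()
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(repo_id).save_pretrained(out_dir)
    return out_dir
```

If the repo turns out to contain no adapter files either, the training run likely failed to push its outputs, and re-running AutoTrain would be the next step.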