Tried to download Mistral 7B but got an error message

Hi there,

I hope one of you can help me solve my problem. Transformers version: 4.33.3.

I tried to download the new Mistral model by using the snippet posted on Hugging Face, but I got this error message and do not know how to fix it:

```
Exception has occurred: KeyError
'mistral'
  File "C:\Users\Stefan Trauth\Desktop\LeoX\Mistral\Mistral 7B.py", line 5, in <module>
```

I used Snippet I directly from Hugging Face, and Snippet II is one I created myself and normally use, but I got the same error message in both cases.

Snippet I:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

Snippet II:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Initialize the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
model = AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-v0.1')

while True:
    # Get user input
    user_input = input('You: ')

    # Encode the input and add end of string token
    input_ids = tokenizer.encode(user_input, return_tensors='pt')

    # Generate a response from the model
    with torch.no_grad():
        output = model.generate(input_ids, max_length=50)

    # Decode the output and print the answer
    answer = tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    print(f'Mistral-7B: {answer}')

    # Check if the user wants to continue
    cont = input('Do you want to continue? (yes/no): ')
    if cont.lower() == 'no':
        break
```

Thanks a lot.

Mistral is not in 4.33.3 yet. The main branch of https://github.com/huggingface/transformers has it.

Mistral's current version requires transformers 4.34.0 at minimum (there's also 4.35.0.dev0).
Just `pip install --upgrade transformers==4.34.0`
or (more consistent) install it from Hugging Face's git:
`pip install git+https://github.com/huggingface/transformers`
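To confirm the upgrade took effect, a quick runtime check (a minimal sketch that just prints the installed version) is:

```python
import transformers

# After upgrading, this should print 4.34.0 or newer (or 4.35.0.dev0 from git).
print(transformers.__version__)
```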

Please check the latest commits to the documentation: it's now possible to use MistralForCausalLM and LlamaTokenizer:

```python
from transformers import MistralForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = MistralForCausalLM.from_pretrained("/output/path")
```
at [Mistral] Mistral-7B-v0.1 support (#26447) · huggingface/transformers@72958fc · GitHub

They are only present in transformers >= 4.34.0 and tokenizers 0.14.1, respectively.
In addition, you can also use LlamaTokenizerFast if you want a slight speed advantage.
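For example, a minimal sketch of the fast-tokenizer variant (assuming the public mistralai/Mistral-7B-v0.1 checkpoint in place of a local /output/path):

```python
from transformers import LlamaTokenizerFast, MistralForCausalLM

# Assumption: loading straight from the Hub checkpoint; a local path works too.
tokenizer = LlamaTokenizerFast.from_pretrained("mistralai/Mistral-7B-v0.1")
model = MistralForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
```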

I use them regularly and that works like a charm.

@marcelocorreia, thanks for your answer and your help. I found the problem: for some reason it was Win11 Pro. I tried it on Win10 Pro and everything works perfectly.

Enjoy your Sunday.