from transformers import AutoTokenizer, MistralForCausalLM
import torch

model_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
    cache_dir="/content/huggingface_cache",
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

model = MistralForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="/content/huggingface_cache",
    low_cpu_mem_usage=True,
    offload_folder="offload",
)
I have tried both the older version (transformers==4.49.0) and the current version (transformers==4.52.0.dev0), but neither solved the problem.
Please help! Thanks.
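Since I switched between two transformers builds, here is the small check I run to confirm which build the notebook kernel actually loaded (a stale kernel after a pip upgrade is a common reason an upgrade appears to have no effect). This is just a sketch; the helper name `installed_version` is my own:

```python
# Sketch: report the version of a package as seen by the current
# interpreter, or None if the package is not installed at all.
from importlib import metadata

def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

# After a pip install/upgrade, restart the kernel and re-check:
print(installed_version("transformers"))
```

If the printed version is not the one just installed, restarting the runtime before rerunning the loading code usually fixes it.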