Problem with AutoTokenizer

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_path = "./raphael"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

# Example prompts
prompts = [
    "What is your name?",
    "Who are you?",
    "Do you know Raphael",
]

# Tokenize each prompt and generate a response
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Pass input_ids and attention_mask together so generate() gets both
    outputs = model.generate(**inputs, max_length=100)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"Prompt: {prompt}")
    print(f"Response: {response}\n")

This is my code using AutoTokenizer, and it raises an error:

Exception: Error while initializing BPE: Token _</w> out of vocabulary

However, the same program works fine when I use BlenderbotSmallTokenizer in place of AutoTokenizer:

from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

model_path = "./raphael"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_path)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_path)
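
From what I can tell, AutoTokenizer tries to load the fast (Rust-backed) tokenizer by default, while BlenderbotSmallTokenizer is the slow, pure-Python implementation, so my guess is that the fast backend is what fails while rebuilding the BPE vocabulary. If that guess is right, would forcing the slow tokenizer through AutoTokenizer behave the same as the working version? A minimal sketch of what I mean (use_fast is a real from_pretrained argument; whether it avoids this particular vocabulary error is my assumption):

from transformers import AutoTokenizer

model_path = "./raphael"

# Assumption: use_fast=False should make AutoTokenizer fall back to the
# slow BlenderbotSmallTokenizer instead of building the Rust BPE tokenizer
# that raises the "out of vocabulary" error.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
print(type(tokenizer))  # check which tokenizer class was actually loaded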

What exactly is the problem?
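
In case it helps to diagnose this, I can also check which tokenizer files the save directory actually contains; my (possibly wrong) understanding is that the fast tokenizer is built from tokenizer.json or from vocab.json plus merges.txt, so a mismatch between those files might explain the out-of-vocabulary token:

import os

model_path = "./raphael"

# BlenderbotSmall normally saves vocab.json and merges.txt for its BPE
# tokenizer; listing the directory shows what AutoTokenizer has to work with.
print(sorted(os.listdir(model_path)))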

Is it even possible to write error-free code with this library? I keep running into one error after another, and it is exhausting.