The following Python script, which uses the Dolphin3.0 model, works well:
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

model_name = "cognitivecomputations/Dolphin3.0-Llama3.2-3B"

# Conversation in the standard chat format (a list of role/content dicts)
chat = [
    {"role": "system", "content": "You are a philosopher"},
    {"role": "assistant", "content": "Hello, Eve!"},
    {"role": "user", "content": "Hello, Adam!"}
]

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

response = pipe(chat, max_new_tokens=32)
# The pipeline appends the generated reply to the conversation,
# so print the content of the last message
print(response[0]['generated_text'][-1]['content'])
However, if I change the model name to:
model_name = "ObsidianLite/Euryale-1.3-Small-7B"
I get the following error at the line where pipe is called:
ValueError: Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at Chat Templates
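From the error message, I gather that this model's tokenizer ships without a chat template, so I suspect I need to set tokenizer.chat_template myself (or pass a template argument). Here is a minimal sketch of what I mean, using a generic ChatML-style Jinja template purely as a placeholder; I don't actually know which template this model expects:

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Placeholder only: a generic ChatML-style template. Whether this matches
# the model's fine-tuning format is an assumption on my part.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

But I'm not sure whether hard-coding a template like this is the right approach, or whether there is a recommended template for this particular model.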
Can someone help me correct my script?
Thank you!