Endpoint not returning stop token on mistral models

Hi,

I’m having issues with my endpoint not returning the end-of-text token (<|im_end|>). When testing the model locally (using llama.cpp), I have to tell it to ignore the EOS token but stop generating when it finds the stop sequence (<|im_end|>), and that works perfectly. Is there a similar option for the endpoint? I couldn’t find one.
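For reference, this is roughly what I was hoping to be able to do against the endpoint. Just a sketch: it assumes the endpoint is backed by text-generation-inference and that the stop_sequences parameter of huggingface_hub’s InferenceClient maps to a stop sequence there; the endpoint URL is a placeholder:

from huggingface_hub import InferenceClient

# Placeholder endpoint URL, just for illustration.
client = InferenceClient(model="https://my-endpoint.endpoints.huggingface.cloud")

# Assumption: a TGI-backed endpoint honors stop sequences passed this way,
# similar to llama.cpp's stop-sequence behavior.
output = client.text_generation(
    "Tell me about AI",
    max_new_tokens=512,
    stop_sequences=["<|im_end|>"],
)
print(output)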

I have to say that this was working just fine a few days ago (I was getting the stop token). I know because I have an exception in my code that looks for that token and requests a new inference if it’s missing (assuming the text might be incomplete). Did anything change in the code?
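For context, here is a minimal sketch of that check (generate_with_retry and request_fn are made-up names for the example; the real code calls my endpoint):

STOP_TOKEN = "<|im_end|>"

def generate_with_retry(request_fn, prompt, max_retries=3):
    """Call request_fn(prompt) until the stop token shows up, then return the text."""
    for _ in range(max_retries):
        text = request_fn(prompt)
        if STOP_TOKEN in text:
            return text
        # No stop token: assume the text might be incomplete and request again.
    raise RuntimeError(f"No {STOP_TOKEN} found after {max_retries} attempts")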

I think this might be a problem with the transformers pipeline. I was able to replicate it in a small script: if I run model.generate, I get the stop token, but if I run pipeline() I get the same (similar) result without the token. Here’s the script (stolen from TheBloke’s sample script):

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
system_message = "You are a useful AI assistant."

prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])

Trying to track down the problem, this seems to be the culprit:

https://github.com/huggingface/transformers/blame/main/src/transformers/pipelines/text_generation.py#L292

If you flip skip_special_tokens to False, the stop sequence shows up at the end. I’ll leave a comment in the GitHub repo.
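In the meantime, a possible workaround (just a sketch, reusing pipe and tokenizer from the script above, and assuming return_tensors=True still returns generated_token_ids) is to ask the pipeline for the token ids and decode them manually with skip_special_tokens=False:

# Workaround sketch: get raw token ids back from the pipeline and decode them
# ourselves, keeping special tokens. If I read the pipeline code right, this
# returns the whole sequence, prompt included.
raw = pipe(prompt_template, return_tensors=True)
token_ids = raw[0]["generated_token_ids"]
print(tokenizer.decode(token_ids, skip_special_tokens=False))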