I’m having issues with my endpoint not returning the end-of-text token (<|im_end|>). When testing the model locally (using llama.cpp) I have to tell it to ignore the EOS token but stop generating at the stop sequence (<|im_end|>), and that works perfectly. Is there a similar option for the endpoint? I couldn’t find one. A rough example of how I’m calling it is below.
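This is roughly the request I’m sending (a minimal sketch; the URL and token are placeholders, and I’m assuming the endpoint is backed by text-generation-inference, where a stop parameter looks like the closest equivalent to llama.cpp’s stop sequences):

import requests

ENDPOINT_URL = "https://my-endpoint.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder

payload = {
    "inputs": "<|im_start|>user\nTell me about AI<|im_end|>\n<|im_start|>assistant\n",
    "parameters": {
        "max_new_tokens": 512,
        "temperature": 0.7,
        "top_p": 0.95,
        "stop": ["<|im_end|>"],  # stop sequence, similar to what I use with llama.cpp
    },
}
resp = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json=payload,
)
print(resp.json()[0]["generated_text"])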
I have to say that this was working just fine a few days ago (I was getting the stop token). I know because my code has a check that looks for that token and requests a new inference if it’s missing (assuming the text might be incomplete). Did anything change in the code?
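For context, the check on my side is basically this (a simplified sketch; generate_once is just a stand-in for whatever calls the endpoint):

STOP_TOKEN = "<|im_end|>"

def generate_with_retry(prompt, max_retries=3):
    # Retry when the stop token is missing, since the text might be incomplete
    for _ in range(max_retries):
        text = generate_once(prompt)  # stand-in for the actual endpoint call
        if text.rstrip().endswith(STOP_TOKEN):
            return text
    raise RuntimeError("No <|im_end|> token after several attempts; output may be incomplete")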
I think this might be a problem with the transformers pipeline. I was able to reproduce it in a small script: if I run model.generate I get the stop token, but if I run pipeline() I get the same (similar) result without the token. Here’s the script (adapted from TheBloke’s sample script):
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
system_message = "You are a useful AI assistant."
# ChatML prompt format used by OpenHermes-2
prompt_template = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))  # ends with <|im_end|>

# Inference can also be done using transformers' pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512,
                do_sample=True, temperature=0.7, top_p=0.95, top_k=40)
print(pipe(prompt_template)[0]['generated_text'])  # same text, but without <|im_end|>
Trying to track down the problem, it seems the decoding step in the pipeline is the culprit: if you flip skip_special_tokens to False, you get the stop sequence at the end. I’ll leave a comment in the GitHub repo.
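To make the difference explicit, decoding the same model.generate output both ways (using the output and tokenizer from the script above) shows it:

# Same generation, two decodes: only the first keeps the stop token
with_specials = tokenizer.decode(output[0], skip_special_tokens=False)
without_specials = tokenizer.decode(output[0], skip_special_tokens=True)
print(with_specials.endswith("<|im_end|>"))     # True in my runs
print(without_specials.endswith("<|im_end|>"))  # False, the token gets stripped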