Hi @Rocketknight1, I see that you added the chat_template data for the Llama-2 models. There appears to be a bug in that logic: if you only pass in a system prompt, applying the template returns an empty string/list. For example, the code below prints an empty string:
from transformers import AutoTokenizer

# e.g. with a Llama-2 chat checkpoint
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

chat = [
    {"role": "system", "content": "You are a helpful and honest assistant."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
However, if you edit it to include an empty user message, then it will output the system prompt with the empty user input (which in Llama comes with an appended "[/INST]"):
chat = [
    {"role": "system", "content": "You are a helpful and honest assistant."},
    {"role": "user", "content": ""},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
This is a problem for scenarios where I only want to retrieve the Llama-formatted system prompt.
Hi @njbrake, thanks for the notification! We've been able to reproduce the issue here.
Partly, this is caused by us not testing that case, because (I believe) LLaMA-2 was never trained with "naked" system messages like this. However, you're right that the template should support it properly. I'll see if I can push a fix soon!
Regarding the incorrect link in the LLaMA documentation, though, that's an issue you'll have to take up with the repository maintainers rather than Hugging Face! Try opening an issue on the repo to alert them.
Hi @njbrake, can you try this out, both with just a single system message and with more complex conversations? It should yield the same results as the old template in most cases, but should give proper output when there's just a single system message now:
tokenizer.chat_template = (
"{% if messages[0]['role'] == 'system' %}"
"{% set loop_messages = messages[1:] %}" # Extract system message if it's present
"{% set system_message = messages[0]['content'] %}"
"{% elif USE_DEFAULT_PROMPT == true and not '<<SYS>>' in messages[0]['content'] %}"
"{% set loop_messages = messages %}" # Or use the default system message if the flag is set
"{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
"{% else %}"
"{% set loop_messages = messages %}"
"{% set system_message = false %}"
"{% endif %}"
"{% if loop_messages|length == 0 and system_message %}" # Special handling when only sys message present
"{{ bos_token + '[INST] <<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n [/INST]' }}"
"{% endif %}"
"{% for message in loop_messages %}" # Loop over all non-system messages
"{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
"{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
"{% endif %}"
"{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
"{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}"
"{% else %}"
"{% set content = message['content'] %}"
"{% endif %}"
"{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
"{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
"{% elif message['role'] == 'system' %}"
"{{ '<<SYS>>\\n' + content.strip() + '\\n<</SYS>>\\n\\n' }}"
"{% elif message['role'] == 'assistant' %}"
"{{ ' ' + content.strip() + ' ' + eos_token }}"
"{% endif %}"
"{% endfor %}"
)
After you run that block, try apply_chat_template and it should have the new behaviour!
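For example, a quick sketch of what I mean (assuming you already have the Llama-2 chat tokenizer loaded as tokenizer):

# Solo system message - should now produce output instead of an empty string
print(tokenizer.apply_chat_template(
    [{"role": "system", "content": "You are a helpful and honest assistant."}],
    tokenize=False,
))

# A normal multi-turn conversation - should match the old template's output
print(tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful and honest assistant."},
        {"role": "user", "content": "Hi!"},
        {"role": "assistant", "content": "Hello! How can I help?"},
        {"role": "user", "content": "What is a chat template?"},
    ],
    tokenize=False,
))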
Hi @Rocketknight1, thanks for the quick response! It looks like that didn't quite do it. Now I get a response, but it still has the [/INST] at the end of it:
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
<</SYS>>

 [/INST]
I think it shouldn't have the trailing newline and [/INST] at the end?
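For reference, what I was hoping to get back is just the system block, something like:

<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
<</SYS>>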
Hi @njbrake - we talked this over a bit more. In the end, we decided that we actually just shouldn't be trying to do this! LLaMA enforces a strict rule that chats should alternate user/assistant/user/assistant, and the system message, if present, should be embedded into the first user message. As a result, it's not clear how the chat template should even handle the case of a solo system message with no user message. We suspect that no matter what you do here, model performance will probably be damaged because the input is different from the formats the model was trained with.
My recommendation would be to incorporate a user message as well as a system message, as in the sketch below. Alternatively, you can try a different model. Several newer models like Zephyr have excellent performance, handle system messages more simply, and don't impose LLaMA's strict message-ordering requirement.
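Something like this keeps the input in the format the model was actually trained on (a sketch; the user message is just an example):

chat = [
    {"role": "system", "content": "You are a helpful and honest assistant."},
    {"role": "user", "content": "Summarize this document for me: ..."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
# The system message is folded into the first [INST] block alongside the user turn.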
Hi @Rocketknight1 Unfortunately, it's a common use case for RAG frameworks like llama-index, which require a system_prompt to be formatted before there is any user input.
I haven't checked, but I would imagine that langchain has a similar parameter. I think they may maintain their own definitions of the Llama system prompt format, which I could use, but I was hoping to be able to use the Hugging Face chat_template to access the system prompt formatting.
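For now I can hard-code something myself. A rough sketch of what I mean, just reusing the <<SYS>> wrapper from your template above (the helper name is my own):

def format_llama_system_prompt(system_prompt: str) -> str:
    # Hand-rolled version of the Llama-2 <<SYS>> wrapper, since
    # apply_chat_template can't give me this for a lone system message.
    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"

print(format_llama_system_prompt("You are a helpful and honest assistant."))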
If you can find out the system prompt format they use, I can help write a chat template to get that to work for you. I'm still uncertain about updating the official LLaMA template, though - even if llama-index expects it, it (imo) violates the principle that the chat template should preserve the format used in training.
Still, maybe it's better than just throwing an error. Let me know if you can find the prompt format they use for a solo system message!
So in that case I think I agree with you: there's no need to change the chat_template to support my use case, but throwing an error if someone tries to do what I was doing would be a nice feature to add. It was confusing for the function to return a blank string instead of giving any feedback about why it wasn't working.
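Something along these lines in the template would probably be enough (just a sketch, reusing the raise_exception helper your template already uses for role ordering):

"{% if loop_messages|length == 0 %}"
"{{ raise_exception('A lone system message cannot be formatted for Llama-2; please add at least one user message.') }}"
"{% endif %}"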