Trying to understand system prompts with Llama 2 and the transformers interface

@dkettler this is how I got mine working:

```
<<SYS>>
You are a helpful Assistant, and you only respond as the "Assistant".
Remember, maintain a natural tone. Be precise, concise, and casual. Keep it short\n
<</SYS>>
{conversation_history}\n\n
[INST]
User: {user_message}
[/INST]\n
Assistant:
```
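
If it helps, here is roughly how I assemble that template into a single prompt string in Python. `build_prompt`, `conversation_history`, and `user_message` are just placeholder names for however you track the chat state, so treat this as a sketch rather than the exact code:

```python
# Assemble the Llama 2 style prompt: system block, prior turns, then the new user message.
SYSTEM_BLOCK = (
    "<<SYS>>\n"
    "You are a helpful Assistant, and you only respond as the \"Assistant\".\n"
    "Remember, maintain a natural tone. Be precise, concise, and casual. Keep it short.\n"
    "<</SYS>>\n"
)

def build_prompt(conversation_history: str, user_message: str) -> str:
    # conversation_history holds the previously formatted turns (can be empty on the first turn).
    return (
        SYSTEM_BLOCK
        + conversation_history + "\n\n"
        + "[INST]\nUser: " + user_message + "\n[/INST]\n"
        + "Assistant:"
    )
```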

Then, when you call the model, pass the stop tokens:

```python
model(prompt=prompt, max_tokens=120, stop=["[INST]", "None", "User:"])
```
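
For context, that call signature looks like llama-cpp-python's `Llama` class rather than the raw transformers API, so here's a fuller sketch assuming that's what you're running. The model path and `n_ctx` are just examples, and `build_prompt` is the helper from the sketch above:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a local GGUF checkpoint; the path is only an example.
model = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

prompt = build_prompt(conversation_history="", user_message="Give me one fun fact about Llama 2.")

# Cut generation off as soon as the model opens a new [INST] block or starts speaking as the User.
output = model(prompt=prompt, max_tokens=120, stop=["[INST]", "None", "User:"])
print(output["choices"][0]["text"].strip())
```

The stop list is what keeps the model from rambling into a new turn: generation ends the moment it tries to emit another `[INST]` block or answer as `User:`.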

You can also follow this blog post here.
