Hello everyone,
I have an Alpaca instruction fine-tuned model, and I want to serve it in the Chat UI Space I have deployed. However, I suspect my prompt template is not accurate, because I am getting different answers than I expect. Could someone please help me with that?
The prompt template the model was fine-tuned with is as follows (Təlimat = Instruction, Cavab = Response):
{pre_prompt}
### Təlimat:
{question}
### Cavab:
{model_answer_goes_here}
See here for example prompt templates for Chat UI: PROMPTS.md in the huggingface/chat-ui repository on GitHub.
If needed, my BOS and EOS tokens are “”.
The current prompt template I am using in Chat UI is as follows (the preamble is the Alpaca preamble in Azerbaijani: “Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.”):
Aşağıda tapşırığı təsvir edən təlimat və əlavə kontekst təmin edən giriş verilmiştir. Sorğunu uyğun şəkildə tamamlayan cavab yazın. ### Təlimat: {{#each messages}}{{#ifUser}}{{content}} </s>{{/ifUser}}{{#ifAssistant}}{{content}}{{/ifAssistant}}{{/each}} ### Cavab:
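For comparison, a chatPromptTemplate that reproduces the training layout above, with the ### Təlimat: / ### Cavab: markers and a line break between each part, might look roughly like the following. This is only a sketch built from the Handlebars helpers already used in the current template ({{#each messages}}, {{#ifUser}}, {{#ifAssistant}}, {{content}}); the exact newline placement should be checked against what the fine-tuning script actually produced:

Aşağıda tapşırığı təsvir edən təlimat və əlavə kontekst təmin edən giriş verilmiştir. Sorğunu uyğun şəkildə tamamlayan cavab yazın.
{{#each messages}}{{#ifUser}}### Təlimat:
{{content}}
### Cavab:
{{/ifUser}}{{#ifAssistant}}{{content}}</s>
{{/ifAssistant}}{{/each}}

Rendered for a single user turn, this produces the same block structure as the training template, with generation starting right after ### Cavab:. Note that it also moves </s> after the assistant content rather than after the user message; whether that is correct depends on where the EOS token appeared in the fine-tuning examples, so treat it as an assumption. If the template goes into the MODELS entry of .env.local as a JSON string, the line breaks have to be written as \n, and it is probably also worth adding </s> as a stop sequence in the model's parameters so generation ends where the training examples ended.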
Thank you.