LLaMa2 fine-tuning: Multi-turn conversation dataset template

I want to fine-tune Llama-2-chat-hf into a questionnaire-conductor chatbot, so the chat history is very important for training. I have a dataset of many conversations between an interviewer and an interviewee. How should I preprocess the dataset for training, and which prompt template should I use?

This might be a helpful link: https://medium.com/@xuebinbin12/fine-tuning-chat-based-llm-with-multi-turn-conversational-data-part-i-d8c64d01a20d
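To make the idea concrete, here is a minimal sketch of assembling multi-turn pairs into the Llama-2-chat prompt format (`<s>[INST] ... [/INST] ... </s>` per turn, with an optional `<<SYS>>` block in the first turn). The function name and turn structure are my own; adapt them to your dataset's schema.

```python
def format_llama2_chat(turns, system_prompt=None):
    """Assemble a multi-turn dialogue into the Llama-2-chat prompt format.

    turns: list of (user, assistant) string pairs, in dialogue order.
    Each turn becomes "<s>[INST] user [/INST] assistant </s>"; the
    optional system prompt is wrapped in <<SYS>> tags inside the first
    [INST] block, matching how the -chat models were trained.
    """
    pieces = []
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system_prompt:
            user = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user}"
        pieces.append(f"<s>[INST] {user} [/INST] {assistant} </s>")
    return "".join(pieces)
```

For your use case, the interviewer's questions would go in the assistant slots (since the model plays the interviewer) and the interviewee's answers in the user slots.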

Try masking the context so the loss is computed only on the response tokens.
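One common way to do this is to set the label to `-100` (the index PyTorch's cross-entropy loss ignores by default) for every token belonging to the prompt/context. A minimal sketch, assuming a generic `encode` function in place of your real Llama tokenizer:

```python
IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default

def build_labels(segments, encode):
    """Build input_ids/labels with the context masked out of the loss.

    segments: list of (text, is_response) pairs in dialogue order;
              is_response=True marks spans the model should learn to emit.
    encode:   function mapping text -> list of token ids (stand-in for
              your tokenizer; hypothetical here).
    """
    input_ids, labels = [], []
    for text, is_response in segments:
        ids = encode(text)
        input_ids.extend(ids)
        # Context tokens get IGNORE_INDEX so they contribute no loss;
        # response tokens keep their ids as supervision targets.
        labels.extend(ids if is_response else [IGNORE_INDEX] * len(ids))
    return input_ids, labels
```

With multi-turn data you would mark every assistant reply as a response span and everything else (system prompt, user turns, template tokens) as context.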

Did you find a solution to your question? I face similar considerations for my use case and am also not sure how to prepare the dataset for a conversational chatbot that I want to fine-tune on my question-answer dataset.