Llama-2 fine-tuning: multi-turn conversation dataset template

I want to fine-tune Llama-2-chat-hf to act as a chatbot that conducts questionnaires, so the chat history is very important for training. I have a dataset of many conversations between an interviewer and an interviewee. How should I preprocess the dataset for training, and what prompt template should I use?
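
For context, here is a minimal sketch of how I currently understand the Llama-2 chat format (`[INST] ... [/INST]` with an optional `<<SYS>>` block in the first turn). The data layout (a list of `(user, assistant)` turn pairs) and the system prompt are just placeholders for my dataset; I'm not sure this is the right way to serialize multi-turn history for training:

```python
# Sketch: render multi-turn dialogue in the Llama-2 chat template.
# Assumes each conversation is a list of (user, assistant) turn pairs.

SYSTEM_PROMPT = "You are an interviewer conducting a questionnaire."  # placeholder

def format_llama2_chat(turns, system=SYSTEM_PROMPT):
    """Render (user, assistant) turn pairs as one Llama-2 training string."""
    parts = []
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system:
            # The <<SYS>> block goes inside the first [INST] segment only.
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        parts.append(f"<s>[INST] {user} [/INST] {assistant} </s>")
    return "".join(parts)

turns = [
    ("Hi, I'm ready to start.", "Great! First question: how old are you?"),
    ("I'm 30.", "Thanks. Next: what is your occupation?"),
]
print(format_llama2_chat(turns))
```

Is flattening every conversation into one string like this correct, or should each assistant turn become its own training example with the preceding turns as context?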