Hey, I’m playing around with finetuning a Llama model. Currently I’m training on reading-comprehension datasets like MS_MARCO, but I want to add fully conversational datasets like OpenHermes2.5 to the mix. My question is: what’s the best approach to prompt engineering in this case? Should I add a second chat template and train the model with two templates, or create one universal template that fits both types of datasets?
FYI, here’s my chat template for reading comprehension:
```
Answer the given question based on the given context
### Context: {context}
### Question: {question}
### Answer:
```
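To make the “universal template” option concrete, here’s a rough sketch of what I had in mind (plain Python; the message format and the marker tokens are just placeholders I picked, not tied to any specific library): fold the context and question into the user turn, so a reading-comprehension example becomes an ordinary chat exchange and can share one template with OpenHermes-style conversations.

```python
# Sketch: map both dataset types into one message-list format,
# then render everything with a single shared chat template.
# Function names and the [INST] markers are my own placeholders.

def rc_to_messages(context: str, question: str, answer: str) -> list[dict]:
    """Convert a reading-comprehension example into chat messages."""
    user_turn = (
        "Answer the given question based on the given context\n"
        f"### Context: {context}\n"
        f"### Question: {question}"
    )
    return [
        {"role": "user", "content": user_turn},
        {"role": "assistant", "content": answer},
    ]

def render(messages: list[dict]) -> str:
    """Render a message list with one shared template (Llama-2-style markers as an example)."""
    parts = []
    for m in messages:
        if m["role"] == "user":
            parts.append(f"[INST] {m['content']} [/INST]")
        else:
            parts.append(f" {m['content']} ")
    return "".join(parts)

# A reading-comprehension example and a conversational one now share one format:
print(render(rc_to_messages("Paris is the capital of France.",
                            "What is the capital of France?",
                            "Paris")))
print(render([{"role": "user", "content": "Hi!"},
              {"role": "assistant", "content": "Hello! How can I help?"}]))
```

The appeal of this option, as I understand it, is that the model only ever sees one template at training and inference time, but I’m not sure whether keeping the two templates separate would work just as well.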