Multi-task instruction fine-tuning

Hello everyone,

I have a question regarding supervised instruction fine-tuning of LLMs. Most of the examples and documentation I've come across focus on single-task instruction fine-tuning. I'm wondering whether it's possible to perform multi-task instruction fine-tuning, for instance, training on two or three different tasks concurrently with models such as Llama2, Falcon, etc.
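For context, what I have in mind is something like combining several per-task instruction datasets into one shuffled training set, so every batch mixes tasks. This is just a rough sketch of the data side (the task names and examples below are made up for illustration):

```python
import random

# Toy instruction datasets for three hypothetical tasks
# (the tasks and examples here are purely illustrative).
summarization = [
    {"instruction": "Summarize the text.", "input": "Long article...", "output": "Short summary."},
]
translation = [
    {"instruction": "Translate to French.", "input": "Hello.", "output": "Bonjour."},
]
qa = [
    {"instruction": "Answer the question.", "input": "What is 2+2?", "output": "4."},
]

def mix_tasks(*datasets, seed=0):
    """Concatenate per-task datasets and shuffle them together,
    so each training batch contains a mixture of tasks."""
    mixed = [example for dataset in datasets for example in dataset]
    random.Random(seed).shuffle(mixed)
    return mixed

train_data = mix_tasks(summarization, translation, qa)
print(len(train_data))  # 3
```

Is this kind of naive mixing a reasonable approach, or are there better strategies (e.g., balancing task proportions)?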

Any insights or guidance would be greatly appreciated. Thank you!