Multi-task instruction fine-tuning

Hello everyone,

I have a question regarding supervised instruction fine-tuning of LLMs. Most of the examples and documentation I've come across focus on single-task instruction fine-tuning. I'm wondering if it's possible to perform multi-task instruction fine-tuning, for instance, training on two or three different tasks concurrently with models such as Llama2, Falcon, etc.?

Any insights or guidance would be greatly appreciated. Thank you!


It’s been a while since you asked your question but I thought I’d answer it in case someone comes across this post later.

It's actually a good idea to fine-tune a model on several different tasks at once, so the model doesn't lose the skills it had before.
For example, if you train a model exclusively on language translation, it may lose its ability to summarize texts (catastrophic forgetting).
You can fine-tune on multiple tasks simply by creating a dataset that mixes examples from all of them.
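As a rough sketch of what that mixed dataset could look like, here is a minimal Python example that combines toy examples from two tasks into one shuffled training set, formatted as instruction/input/response strings. The example records and the prompt template are illustrative assumptions, not a fixed standard; you would substitute your real data and whatever format your fine-tuning framework expects.

```python
import random

# Toy examples for two different tasks (hypothetical data for illustration).
translation_examples = [
    {"instruction": "Translate to French:", "input": "Good morning.", "output": "Bonjour."},
    {"instruction": "Translate to French:", "input": "Thank you.", "output": "Merci."},
]
summarization_examples = [
    {"instruction": "Summarize the following text:",
     "input": "The meeting covered Q3 revenue, hiring plans, and the product roadmap.",
     "output": "Q3 results, hiring, and the roadmap were discussed."},
]

def format_example(ex):
    """Render one example into a single training string (one common template style)."""
    return (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Input:\n{ex['input']}\n\n"
            f"### Response:\n{ex['output']}")

# Mix the tasks into one training set and shuffle, so each batch
# interleaves tasks instead of training on them sequentially.
mixed = translation_examples + summarization_examples
random.seed(0)
random.shuffle(mixed)
train_texts = [format_example(ex) for ex in mixed]
```

The key point is the shuffle: if you trained on all translation examples first and all summarization examples second, you would risk the same catastrophic forgetting within the fine-tuning run itself.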

For additional information, I can highly recommend the book “Generative AI on AWS” by Chris Fregly or the Coursera course based on it, also by Chris Fregly.