Fine-tuning an LLM for a QA task

I want to fine-tune an LLM for a QA task. I have a domain-specific document that I want the model to answer questions about, but I do not have question-answer pairs to fine-tune the model with. Is there another way to fine-tune without question-answer pairs?


Unfortunately, no.

To get the expected results you need to train it in the same way you are going to use it (supervised fine-tuning and, optionally, reinforcement learning). If you are going to ask questions and get answers, you need to train the LLM on question-answer pairs. If you want to teach it to summarize text, you need to prepare text-summary examples.
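For illustration, here is a minimal sketch of turning question-answer pairs into a supervised fine-tuning run with `trl`. The prompt template, the checkpoint name, and the exact `SFTConfig` arguments are assumptions and will differ between library versions and base models:

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical question-answer pairs for a domain document.
qa_pairs = [
    {"question": "What is covered by the warranty?", "answer": "Parts and labour for two years."},
    {"question": "How do I reset the device?", "answer": "Hold the power button for ten seconds."},
]

# Turn each pair into a single training text using a simple prompt template
# (the template is an assumption; match the chat format of your base model).
def to_text(example):
    return {"text": f"### Question:\n{example['question']}\n\n### Answer:\n{example['answer']}"}

dataset = Dataset.from_list(qa_pairs).map(to_text)

trainer = SFTTrainer(
    model="facebook/opt-350m",  # placeholder checkpoint; use any causal LM you have access to
    train_dataset=dataset,      # SFTTrainer reads the "text" column by default
    args=SFTConfig(output_dir="qa-sft", num_train_epochs=3),
)
trainer.train()
```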

But… you can use the "elder brothers" - other, bigger LLMs, which can prepare these question datasets for you. Of course, only if this is allowed by their licences (OpenAI does not permit it, yet many modern datasets were prepared with the help of GPT-4).
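As a rough sketch (check the provider's licence terms first), generating candidate pairs with a larger model's chat API could look like the following; the model name, prompt wording, file name, and chunking rule are all placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat-completion API works the same way

def make_qa_pairs(chunk: str, n: int = 3) -> str:
    """Ask a larger model to write question-answer pairs grounded in one document chunk."""
    prompt = (
        f"Write {n} question-answer pairs that can be answered only from the text below.\n"
        f"Format each pair as 'Q: ...' and 'A: ...'.\n\nTEXT:\n{chunk}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Split the document into paragraphs and collect the generated pairs.
document = open("domain_document.txt", encoding="utf-8").read()
chunks = [p for p in document.split("\n\n") if len(p) > 200]
pairs = [make_qa_pairs(chunk) for chunk in chunks[:10]]
```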

If you train the LLM on the original text only (unsupervised fine-tuning), it will speak exactly like the original text: you can give it the first words of the text and it will try to continue it, and sometimes it will fall into an infinite loop on some sentence.
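For completeness, a minimal sketch of that kind of unsupervised run, i.e. continued pretraining with plain causal language modelling on the raw document; the checkpoint, file name, and hyperparameters are placeholders:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # small placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Raw document text, no labels: the model only learns to continue the text.
raw_text = open("domain_document.txt", encoding="utf-8").read()
dataset = Dataset.from_dict({"text": [raw_text]})
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-clm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```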

Hello everyone, I have the same question. I want to fine-tune an LLM on a PDF file that contains bullet points, tables, page numbers etc., roughly like a Wikipedia page. When I go through the steps from extraction to fine-tuning, the accuracy is poor and the model does not generate the required answers. Is it possible to generate answers from this kind of PDF file, or do I need to prepare QA pairs? Please, I need your help.
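Not a full answer, but much of the trouble with PDFs usually starts at the extraction step. A minimal sketch of pulling the text out and stripping obvious noise (page numbers, bullet glyphs) before building any dataset, assuming `pypdf` and a placeholder file name:

```python
import re
from pypdf import PdfReader

# Extract plain text page by page, then remove noise that confuses training data.
reader = PdfReader("domain_document.pdf")  # placeholder file name
pages = [page.extract_text() or "" for page in reader.pages]

cleaned = []
for text in pages:
    text = re.sub(r"^\s*\d+\s*$", "", text, flags=re.MULTILINE)  # bare page numbers
    text = re.sub(r"[•▪‣]", "-", text)                           # bullet glyphs to dashes
    cleaned.append(text.strip())

document = "\n\n".join(cleaned)
```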