Fine-tune conversational model


I’m totally new to transformers. I’ve got Microsoft’s GODEL conversational model working, but I am totally green on how to fine-tune it with my own data.

According to GODEL’s GitHub page, training data should look like this:

```json
{
  "Context": "Please remind me of calling to Jessie at 2PM.",
  "Knowledge": "reminder_contact_name is Jessie, reminder_time is 2PM",
  "Response": "Sure, set the reminder: call to Jesse at 2PM"
}
```

So I’ve built a Python list of several context/knowledge/response entries. The problem is that I have no idea how to actually “train” or “fine-tune” the model that transformers downloaded to ~/.cache/huggingface/hub. GODEL’s GitHub page does provide a training script, but I think that’s for training the model if you clone the repository and use it directly - not for fine-tuning the model loaded through transformers in Python.
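For reference, my list looks roughly like this (the first entry is the example from the GODEL repo; the second is made up by me in the same format):

```python
# Training examples in GODEL's Context / Knowledge / Response format.
examples = [
    {
        "Context": "Please remind me of calling to Jessie at 2PM.",
        "Knowledge": "reminder_contact_name is Jessie, reminder_time is 2PM",
        "Response": "Sure, set the reminder: call to Jesse at 2PM",
    },
    {
        "Context": "What's the weather like today?",
        "Knowledge": "weather_today is sunny, temperature is 25C",
        "Response": "It's sunny today, around 25 degrees.",
    },
]
```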

Can someone please point me in the right direction? I’ve read the ‘tutorial’ page on this but I’m still rather confused.


I am in the same boat; the documentation on how to fine-tune a conversational model is not very clear.


I am in the process of fine-tuning GODEL as well. What I found helpful is that the authors outline the input format in their paper: “The dialog context S and environment E are concatenated as a long sequence, which is the input to the model”.
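Based on that description (and on the generation example on GODEL’s Hugging Face model card, which joins dialog turns with ` EOS ` and prefixes the knowledge with `[KNOWLEDGE]`), preprocessing might look roughly like the sketch below. The markers and the use of a single context turn are my assumptions; I have not verified them against the official training script:

```python
def build_input(context_turns, knowledge, instruction=""):
    """Concatenate the dialog context and environment (knowledge)
    into one long input string, as described in the GODEL paper.

    The ' EOS ' separator and the '[CONTEXT]'/'[KNOWLEDGE]' markers
    follow the generation example on the model card; whether the
    official training script uses exactly these markers is an
    assumption on my part.
    """
    dialog = " EOS ".join(context_turns)
    if knowledge:
        knowledge = "[KNOWLEDGE] " + knowledge
    query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
    return query.strip()


def to_pairs(examples):
    """Turn Context/Knowledge/Response dicts into (input, target)
    string pairs ready for a seq2seq tokenizer."""
    pairs = []
    for ex in examples:
        src = build_input([ex["Context"]], ex.get("Knowledge", ""))
        tgt = ex["Response"]
        pairs.append((src, tgt))
    return pairs
```

Each resulting (input, target) pair could then be tokenized with the model’s tokenizer (targets via the `text_target` argument) and fed to a standard seq2seq fine-tuning setup such as transformers’ `Seq2SeqTrainer` - again, a sketch of my understanding, not something taken from the GODEL repo.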

@chadwick-mcmonagle did you figure out how to fine-tune it, or what format the data should be in? I am trying to do the same thing, and the documentation is not very clear on the subject. If you made any progress, please share your findings.

@daliselmi unfortunately not.


Has anybody found out how to preprocess the text for fine-tuning?