I'm totally new to transformers. I've got a conversational model of Microsoft's GODEL working, but I am totally green on how to fine-tune it using my own data.
According to GODEL's GitHub page, the data format for training should look like this:
"Context": "Please remind me of calling to Jessie at 2PM.",
"Knowledge": "reminder_contact_name is Jessie, reminder_time is 2PM",
"Response": "Sure, set the reminder: call to Jesse at 2PM"
So I've got a Python list built from several of these context/knowledge/response entries. The problem is I have no idea how to actually "train" or "fine-tune" the transformer model that is in ~/.cache/huggingface/hub. GODEL's GitHub page seems to provide a script, but I think that's for training the model if you were to clone the repository and use it that way - not for training the model loaded through the transformers library in Python.
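For what it's worth, here is as far as I've gotten: flattening each record into a single input string plus a target string, which is (I believe) what a seq2seq fine-tuning setup would want. The "Instruction: ..." prompt and the "[CONTEXT]"/"[KNOWLEDGE]" markers are my assumption based on the example in GODEL's model card, so please correct me if the real training format differs:

```python
# Sketch: flatten Context/Knowledge/Response records into input/target
# string pairs for seq2seq fine-tuning. The instruction text and the
# [CONTEXT]/[KNOWLEDGE] markers are assumptions taken from GODEL's
# model card example, not verified against the training script.

examples = [
    {
        "Context": "Please remind me of calling to Jessie at 2PM.",
        "Knowledge": "reminder_contact_name is Jessie, reminder_time is 2PM",
        "Response": "Sure, set the reminder: call to Jesse at 2PM",
    },
]

# Assumed instruction prompt; GODEL's examples prepend something like this.
instruction = "Instruction: given a dialog context, you need to respond helpfully."

def to_pair(example):
    # Model input: instruction, then dialog context, then optional knowledge.
    query = f"{instruction} [CONTEXT] {example['Context']}"
    if example["Knowledge"]:
        query += f" [KNOWLEDGE] {example['Knowledge']}"
    # Target is just the desired response string.
    return {"input": query, "target": example["Response"]}

pairs = [to_pair(ex) for ex in examples]
print(pairs[0]["input"])
print(pairs[0]["target"])
```

My guess is that these pairs would then be tokenized and fed to something like the Trainer API, but that's exactly the part I'm unsure about.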
Can someone please point me in the right direction? I've read the 'tutorial' page on this but I'm still rather confused.