Hello everyone!
I am relatively new to fine-tuning and I am trying to fine-tune the model “lmsys/vicuna-13b-v1.5-16k” on my dataset, which is a CSV with just two columns; the most important one (“content”) contains text parsed from PDF files (you can find something similar here: notebooks/examples/language_modeling.ipynb at main · huggingface/notebooks · GitHub, in the “Preparing the dataset” section).
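Roughly, my preprocessing follows that notebook, something like this (simplified; the CSV filename and the block size are just placeholders):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the two-column CSV; "content" holds the text parsed from the PDFs.
dataset = load_dataset("csv", data_files="my_pdfs.csv")

tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5-16k")

def tokenize(examples):
    return tokenizer(examples["content"])

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

block_size = 1024  # well below the 16k context, just to keep memory manageable

def group_texts(examples):
    # Concatenate all tokenized texts and split them into fixed-size blocks,
    # as in the language_modeling notebook.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal LM training, the labels are the input ids themselves.
    result["labels"] = result["input_ids"].copy()
    return result

lm_dataset = tokenized.map(group_texts, batched=True)
```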
To test whether fine-tuning really works, I followed this tutorial for beginners and used 1,500 of the thousands of PDFs I have:
The fine-tuning seems to run, but when it finishes and I ask about a topic contained in one of the PDFs I used, the model starts hallucinating! Even though the answer looks well written and plausible, it uses erroneous web links, names, events, and facts!
What is wrong? Is it my dataset? Is it the tutorial? Is the model really “learning” anything? How can I check that?
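For reference, here is a minimal sketch of the kind of check I have in mind: comparing perplexity on a paragraph held out from my PDFs, base model vs. fine-tuned checkpoint (the checkpoint path and the sample text below are placeholders):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name_or_path, text, device="cuda"):
    # Score a piece of text with a causal LM; lower perplexity = better fit.
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path, torch_dtype=torch.float16
    ).to(device)
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048).to(device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

held_out = "a paragraph taken from one of the 1500 PDFs"  # placeholder text

# Note: each call loads a 13B model; free GPU memory between calls if needed.
print("base     :", perplexity("lmsys/vicuna-13b-v1.5-16k", held_out))
print("finetuned:", perplexity("./my-finetuned-checkpoint", held_out))
```

If the fine-tuned checkpoint does not score noticeably lower on text from the training PDFs, that would suggest it has not really learned from them.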
I am also testing this YouTube tutorial right now:
“okay, but I want GPT to perform 10x for my specific use case” - Here is how
I hope it works!
How did you manage to fine-tune your model? Do you have any suggestions, tutorials, or videos, please? Any help would be greatly appreciated!
Lastly, I hope my question is clear and useful to someone else too!
Thank you so much in advance.
Kind regards,
Matteo