Primer on Fine-Tuning Text Generation Models (like GPT)

Hi! I am new to fine-tuning and was trying a small exercise: I would like to fine-tune a decoder-only model to capture the nuances of a particular domain, such as finance articles (domain adaptation). I have a very small dataset of about 500 articles in this domain, and I’d like to fine-tune an OPT model on it.

I tried the default method, as indicated here: HF Fine-tuning script
but the results were not great. I don’t think the model actually fine-tuned.
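For reference, here is roughly what I ran: a minimal sketch of the standard causal-LM fine-tune using the Trainer API. The data file path and all hyperparameters are placeholders, not the exact values from the script.

```python
# Minimal causal-LM fine-tuning sketch (roughly what I tried).
# "finance_articles.txt" and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One article per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "finance_articles.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects the causal-LM objective (labels are the shifted inputs).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="opt-finance",
    per_device_train_batch_size=2,
    num_train_epochs=3,
    learning_rate=5e-5,
    logging_steps=20,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```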

Then I researched the problem further and learned that there are “parameter-efficient” fine-tuning (PEFT) methods, which introduce a small number of extra trainable weights (e.g., “adapter” layers or LoRA matrices) that are fine-tuned instead, keeping the base pretrained model frozen.
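From what I understood, a PEFT setup would look something like the sketch below, using LoRA from the `peft` library. The rank, scaling factor, and target modules here are my guesses, not recommended values.

```python
# Sketch of parameter-efficient fine-tuning with LoRA via the peft library.
# Only the small injected LoRA matrices get gradients; OPT itself stays frozen.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank updates (my guess)
    lora_alpha=16,                        # scaling factor (my guess)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters are trainable

# The wrapped model can then be passed to the same Trainer as in the sketch above.
```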

This left me quite confused, so I’m asking for help from the wider community. Where should I start learning more about fine-tuning LLMs for text generation so that I get a good grasp of the concepts? Most guides I came across only tackle sentiment analysis.