Looking for exploratory studies / best practices for LoRA adapter configuration (LLM fine-tuning)

Hello everyone,

I'm working on a custom fine-tuning pipeline for Llama-2 that uses LoRA adapters. I'm curious whether any best practices have emerged in the literature for setting LoraConfig (this is from the peft library, but my question is not library-specific), as well as for the optimal placement and frequency of these adapters within the model.
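
To make the question concrete, here's a minimal sketch of the kind of configuration I mean (the specific values of r, lora_alpha, lora_dropout, and target_modules below are placeholders, not settings I'm committed to; they're exactly the knobs I'm asking about):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Placeholder hyperparameters -- these choices are the subject of my question
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the LoRA update
    lora_dropout=0.05,                     # dropout on the LoRA branch
    target_modules=["q_proj", "v_proj"],   # which projections receive adapters
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

For example: should target_modules cover only the attention projections, or also the MLP layers? How should r and lora_alpha scale with model size?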

While I've reviewed the foundational papers on adapters and LoRA, I've found only a handful of concrete recommendations. Given how widely adapters are used for fine-tuning these days, I assume more practical expertise is out there.

Any recommended references?

Thanks in advance for your assistance!