Finetuning quantised llama-2 with LoRA

Did the finetuned model work when you ran it? Did it actually improve performance on this dataset compared to the base model? How many steps did you train it for?
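
For context on what the title describes, here is a minimal QLoRA-style sketch: loading Llama-2 in 4-bit with bitsandbytes and attaching LoRA adapters via peft. The checkpoint name, target modules, and hyperparameters are placeholders, not taken from this thread.

```python
# Minimal sketch: 4-bit quantised Llama-2 + LoRA adapters (placeholder settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint

# 4-bit NF4 quantisation config (bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantised model for training, then attach LoRA adapters;
# only the small adapter matrices are trainable, the 4-bit base stays frozen.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # placeholder choice of projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From there the model can be passed to a standard `Trainer` (or `trl`'s `SFTTrainer`) with your dataset; comparing eval loss or task metrics before and after training is the quickest way to answer the questions above.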