Continual Training on my own checkpoint

I trained Llama 3 8B using the Unsloth library with QLoRA, then loaded the adapters, merged them into the base model, and pushed the merged model to the Hugging Face Hub. Can I continue training on top of that merged model? If yes, how?

I tried the “load_adapter” function and then “get_peft_model” again, but it doesn't seem right.
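For context, one common pattern is: since the adapters are already merged into the checkpoint on the Hub, you can treat it like any base model, load it with Unsloth, and attach a fresh set of LoRA adapters with `get_peft_model`. A minimal sketch (the repo name `your-username/merged-llama3-8b` and the hyperparameters are placeholders, not from the original post):

```python
from unsloth import FastLanguageModel

# Load the merged checkpoint from the Hub as if it were a base model.
# Placeholder repo id - replace with your own merged model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/merged-llama3-8b",
    max_seq_length=2048,
    load_in_4bit=True,  # re-quantize for QLoRA-style continued training
)

# Attach NEW LoRA adapters on top of the merged weights.
# You do not need load_adapter here, because the old adapters
# no longer exist separately - they are baked into the weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
# ...then train as usual with your trainer of choice.
```

The alternative, if you had pushed only the adapters (not a merged model), would be to reload the base model plus adapters with `PeftModel.from_pretrained` and set `is_trainable=True`; calling `get_peft_model` a second time on an already-PEFT-wrapped model stacks a second adapter config, which is likely why it felt wrong.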