Loading a previous checkpoint in get_peft_model (whisper large-v2)

I am finetuning Whisper large-v2 using LoRA, following the Hugging Face PEFT guide (link) for Urdu (Mozilla Common Voice 17.0). There is additional data in Common Voice 'other' that I want to use for further finetuning. How can I specify a previous checkpoint or pretrained adapter model in get_peft_model instead of giving it openai/whisper-large-v2 every time (and hence starting from zero)?
