Whisper V3 fine-tuning with QLoRA

I am fine-tuning Whisper V3 (learning_rate: 5e-6) with QLoRA (r=16) on medical data, and both train loss and eval loss are decreasing nicely (train_loss: 0.22, eval_loss: 0.29).
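For reference, my setup looks roughly like this; only the learning rate and r=16 above are exact, the quantization settings and target modules in this sketch are assumptions/placeholders:

```python
import torch
from transformers import WhisperForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization for QLoRA (exact quant settings are an assumption)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA on the attention projections (target modules are an assumption)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```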

But during inference, the fine-tuned model gives garbage output consisting mostly of special characters (e.g. `,`, `.`, `“”`). The base model gives pretty good results on the same audio, while the fine-tuned model only throws garbage.
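This is roughly how I run inference with the adapter; the adapter path and the audio input are placeholders:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Load the base model plus the trained LoRA adapter (path is a placeholder)
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "path/to/my-adapter")
model.eval()

# Placeholder: replace with your 16 kHz mono audio as a float32 numpy array
audio_array = np.zeros(16000, dtype=np.float32)
inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(
        input_features=inputs.input_features.to(model.device, torch.float16),
        language="en",
        task="transcribe",
    )
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```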

Is there any specific method for fine-tuning Whisper V3 with QLoRA?
