I’m drowning in all the parameters for Whisper fine-tuning and would appreciate some help.
I’m low on disk space, so I want to keep as few checkpoints on disk as possible.
I want to fine-tune Whisper on my local dataset with the following settings:
- Number of train epochs = 3
- Batch size = 8
- metric for best model = “wer”
- Ideally, whenever the metric (WER) improves at any point during training, a checkpoint would be saved, replacing (overwriting) the previously saved checkpoint
Which parameters do I need to set in run_speech_recognition_seq2seq to achieve this?
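For concreteness, here is the flag combination I *think* maps to the requirements above, based on my reading of the `Seq2SeqTrainingArguments` docs. The model name, output path, and step counts are placeholders; the parts I'm unsure about are the checkpoint-related flags at the end:

```shell
# Sketch only — model/output paths and step counts are placeholders.
python run_speech_recognition_seq2seq.py \
  --model_name_or_path="openai/whisper-small" \
  --output_dir="./whisper-finetuned" \
  --do_train \
  --do_eval \
  --predict_with_generate \
  --num_train_epochs="3" \
  --per_device_train_batch_size="8" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --metric_for_best_model="wer" \
  --greater_is_better="False" \
  --load_best_model_at_end="True" \
  --save_total_limit="1"
```

My understanding is that `--greater_is_better=False` is needed because a lower WER is better, that `--save_steps` must line up with `--eval_steps` when `--load_best_model_at_end` is set, and that `--save_total_limit=1` together with `--load_best_model_at_end=True` makes the Trainer keep only the best checkpoint and delete older ones. Is that correct?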