Idle GPU when fine-tuning Whisper tiny

Hello,

I’m using the command and script from the Whisper fine-tuning event to fine-tune Whisper tiny. The only change I made is the model size, from small to tiny.

It works fine, but I have noticed that my GPU stays idle about half the time.

Is this expected when streaming the Common Voice dataset? It looks like training could run twice as fast.
I tried increasing the number of vCPUs on my VM from 8 to 32, but it didn’t change anything.
I don’t think internet speed is to blame; it’s a Google Cloud instance.

Has anyone noticed something similar?
Do you have any idea how to fix this?

Cheers

I had the same problem. After some experimentation, I found it may be because data is not fed to the GPU fast enough and it starves, probably because the tiny model has so few parameters that it finishes its computations quickly (RTX 3090 here).

Setting `dataloader_num_workers` to my logical core count (I have a 6-core/12-thread CPU, so I set it to 12) helped a lot, although it still did not saturate the GPU. I made several other changes, like sharding the dataset, but I think this is the main parameter that mattered.
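
For reference, here is a minimal sketch of where that parameter goes, assuming you configure training with `Seq2SeqTrainingArguments` as the event script does. Every value other than `dataloader_num_workers` is an illustrative placeholder, not the event's actual settings:

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch: all values except dataloader_num_workers are placeholders.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-cv",   # hypothetical output path
    per_device_train_batch_size=64,
    learning_rate=1e-5,
    max_steps=5000,
    fp16=True,
    # Key change: one dataloader worker per logical core, so the data
    # pipeline can keep up with the tiny model's fast forward passes.
    dataloader_num_workers=12,
)
```

The default is 0, meaning all preprocessing happens in the main process, which is exactly the setup that lets a small, fast model sit idle between batches.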
