Weights & Biases supporting Whisper Fine-tuning 🥳

Hey folks!
First off, huge thanks to @sanchit-gandhi and @reach-vb for putting this challenge together. I'm really excited to do some fine-tuning on my native tongue (Tamil)!

I’m happy to share that the team here at Weights & Biases would like to support the community with their training as much as we can! :partying_face:

We have the Weights & Biases Tables feature, which I think is super useful for exploring speech datasets.
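Here's a minimal sketch of what that can look like, assuming a Common Voice-style dataset with `audio` and `sentence` columns (the dataset, config, and project names below are just placeholders, swap in whatever you're working with):

```python
import wandb
from datasets import load_dataset, Audio

# Stream a few samples from a speech dataset (placeholder dataset/config).
dataset = load_dataset(
    "mozilla-foundation/common_voice_11_0", "ta", split="train", streaming=True
)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

run = wandb.init(project="whisper-finetuning", job_type="explore-data")
table = wandb.Table(columns=["audio", "sentence"])

# Log a handful of clips so you can listen to them next to their transcripts.
for sample in dataset.take(10):
    audio = sample["audio"]
    table.add_data(
        wandb.Audio(audio["array"], sample_rate=audio["sampling_rate"]),
        sample["sentence"],
    )

run.log({"dataset_samples": table})
run.finish()
```

Once logged, the Table renders in the W&B UI with playable audio, so you can sort and filter clips against their transcripts.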

Here's a Colab notebook showing how to instrument your code to log your model's training progress, as well as upload and version your tokenizers, processors, and models (before pushing your best model to the HF Model Hub :slight_smile: ).
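As a rough sketch of the kind of instrumentation the Colab walks through: `report_to="wandb"` streams training metrics to your run, and a W&B Artifact versions the saved model, tokenizer, and processor files (directory and run names below are placeholders):

```python
import wandb
from transformers import Seq2SeqTrainingArguments

run = wandb.init(project="whisper-finetuning", name="whisper-small-ta")

# With report_to="wandb", the Trainer streams loss, learning rate, eval
# metrics, etc. to the active run.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ta",
    report_to="wandb",
    logging_steps=25,
)

# ... build your Trainer and train as usual, then version everything you saved:
artifact = wandb.Artifact("whisper-small-ta", type="model")
artifact.add_dir("./whisper-small-ta")  # model, tokenizer, and processor files
run.log_artifact(artifact)
run.finish()
```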

The Colab also shows how to create a custom Callback that logs generated samples as training progresses and saves model checkpoints to W&B as Artifacts, so you can resume training on preemptible instances like Colab.
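For flavour, here's a stripped-down sketch of a callback along those lines (not the exact code from the Colab; class and artifact names are illustrative). It pushes each checkpoint to W&B as it's saved, so a fresh Colab instance can pull the latest one back down and resume:

```python
import wandb
from transformers import TrainerCallback

class WandbCheckpointCallback(TrainerCallback):
    """Log each saved checkpoint as a W&B Artifact so training survives preemption."""

    def on_save(self, args, state, control, **kwargs):
        # The Trainer writes checkpoints to output_dir/checkpoint-<step>.
        ckpt_dir = f"{args.output_dir}/checkpoint-{state.global_step}"
        artifact = wandb.Artifact(f"checkpoint-{state.global_step}", type="model")
        artifact.add_dir(ckpt_dir)
        wandb.log_artifact(artifact)

# Attach it with trainer.add_callback(WandbCheckpointCallback()).
# To resume after a preemption, download the checkpoint and hand it to train():
#   run = wandb.init(project="whisper-finetuning", resume="allow")
#   ckpt_path = run.use_artifact("checkpoint-500:latest").download()
#   trainer.train(resume_from_checkpoint=ckpt_path)
```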

Let us know how integrating and using W&B goes, and whether you run into any issues! I'll be active here and on the Hugging Face Discord to answer any Weights & Biases-related questions you might have. Just tag me (parambharat#5082) on Discord or post here with your questions.
Best of luck with the challenge everyone!!

To help organise multiple people working on the same language, we at Weights & Biases are happy to create public, language-specific W&B Projects to which anyone can log their results, datasets, models, tokenizers, etc. This way, folks working on the same language can work as a team, easily share results, and see the configs and hyperparameters used for specific model runs. Here's the team we can use for the fine-tuning event.
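Pointing your runs at a shared project is just a matter of the `entity` and `project` arguments to `wandb.init` (the team and project names below are placeholders; we'll share the real ones for each language):

```python
import wandb

run = wandb.init(
    entity="whisper-event",        # shared W&B team (placeholder name)
    project="whisper-tamil",       # language-specific project (placeholder name)
    name="whisper-small-ta-run1",  # your individual run
)
```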