PreTrain Wav2Vec2 in Swedish

Currently, the only pretrained Wav2Vec2 model covering Swedish is a multilingual one. Let's make a Wav2Vec2 pretrained on Swedish only.

Model

A randomly initialized Wav2Vec2 model
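
As a minimal sketch, such a model could be created by loading only the configuration of an existing checkpoint (here facebook/wav2vec2-base, an assumption for the architecture) so that the weights themselves start out random:

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

# Reuse the architecture of the English base checkpoint, but do NOT
# load its weights -- the model below starts from random initialization.
config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining(config)
```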

Datasets

One can make use of Common Voice. The dataset is also available through the datasets library here: common_voice · Datasets at Hugging Face.
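
A minimal sketch of loading the Swedish split, assuming a recent version of datasets that has the Audio feature ("sv-SE" is the Common Voice config name for Swedish):

```python
from datasets import load_dataset, Audio

# Load the Swedish training and validation data of Common Voice.
common_voice_sv = load_dataset("common_voice", "sv-SE", split="train+validation")

# Wav2Vec2 expects 16 kHz audio, while Common Voice ships 48 kHz clips,
# so resample on the fly via the Audio feature.
common_voice_sv = common_voice_sv.cast_column("audio", Audio(sampling_rate=16_000))
```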

Available training scripts

FlaxWav2Vec2 will be merged soon: [Flax] Add wav2vec2 by patrickvonplaten · Pull Request #12271 · huggingface/transformers · GitHub, and a pretraining script should be relatively easy to add afterwards.
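
Once that PR is merged, a randomly initialized Flax model should be creatable along the same lines as the PyTorch sketch above. A hedged sketch, assuming the pretraining class lands as FlaxWav2Vec2ForPreTraining:

```python
import jax
from transformers import Wav2Vec2Config, FlaxWav2Vec2ForPreTraining

# Same base architecture as before, randomly initialized in Flax.
config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base")
model = FlaxWav2Vec2ForPreTraining(config, seed=0)

# Flax keeps parameters outside the module; they live in model.params.
n_params = sum(p.size for p in jax.tree_util.tree_leaves(model.params))
print(f"{n_params / 1e6:.1f}M parameters")
```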

(Optional) Desired project outcome

The best Swedish ASR model.

(Optional) Challenges

It might make sense to use more data than just Common Voice.

Maybe also interesting for @marma?

It would be interesting; the problem for us is that we cannot share the data. However, we have been pretraining a large Wav2Vec2 at the KB for some time, and right now, in the middle of training, it (marginally) outperforms the Swedish VoxPopuli model. I'll write something up and publish it before I go on vacation in two to three days. At least the license will be better than CC NonCommercial.

“It” being the model in this case. We’ll detail it in a pre-print at some point.
