PreTrain Wav2Vec2 in Swedish
Currently, Swedish is only covered by a multilingually pretrained Wav2Vec2 model. Let’s pretrain a Wav2Vec2 model on Swedish alone.
Model
A randomly initialized Wav2Vec2 model
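A minimal sketch of what this could look like with 🤗 Transformers, assuming the default base-size `Wav2Vec2Config` (for real pretraining one would tune the config to the available compute):

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

# Default base-size configuration; adjust e.g. hidden_size or
# num_hidden_layers to match the compute budget
config = Wav2Vec2Config()

# Randomly initialized weights -- no pretrained checkpoint is loaded
model = Wav2Vec2ForPreTraining(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```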
Datasets
One can make use of Common Voice. The dataset is also available through the `datasets`
library here: common_voice · Datasets at Hugging Face.
Available training scripts
FlaxWav2Vec2 will be merged soon: [Flax] Add wav2vec2 by patrickvonplaten · Pull Request #12271 · huggingface/transformers · GitHub, and a pretraining script should be relatively easy to add on top of it.
(Optional) Desired project outcome
The best Swedish ASR model.
(Optional) Challenges
It might make sense to use more data than just Common Voice.