Hello everyone, I’m using the transformers library and I want to build a speech recognition system based on wav2vec 2.0.
I have run into two problems:

- Based on the example in the Wav2Vec2 — transformers 4.7.0 documentation, I tried to pretrain a model. In the fairseq wav2vec GitHub repo, the loss object returned at the end has a backward method I can call, but here I can’t.
- How can I perform batch training with wav2vec 2.0?
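To make the batching question concrete, this is what I currently do to pad a batch of variable-length waveforms. I’m constructing a `Wav2Vec2FeatureExtractor` directly with what I believe are the wav2vec 2.0 base defaults (instead of `from_pretrained`, so nothing needs to be downloaded) — is this the right way to build batches for training?

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

# Stand-alone feature extractor; normally loaded with .from_pretrained().
# These argument values are my assumption of the wav2vec 2.0 base defaults.
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=True,
)

# Two utterances of different lengths (raw 16 kHz waveforms).
speech = [
    np.random.randn(16000).astype(np.float32),
    np.random.randn(12000).astype(np.float32),
]

# padding=True pads every clip to the longest one in the batch.
batch = feature_extractor(
    speech, sampling_rate=16000, padding=True, return_tensors="pt"
)
print(batch.input_values.shape)    # (2, 16000): shorter clip is zero-padded
print(batch.attention_mask.shape)  # (2, 16000): mask marks real vs. padded samples
```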
If anyone can help, even by pointing me to other documentation or examples, I would really appreciate it.
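For my first question: with `Wav2Vec2ForCTC` (fine-tuning, not pretraining, I know) passing `labels` does give me a loss I can call `backward()` on — I’d like the same pattern for pretraining. A minimal sketch of my training step, using a tiny randomly initialised config so it runs without downloading any weights:

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

# Tiny randomly initialised model (no download) just to check the training loop.
config = Wav2Vec2Config(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    vocab_size=32,
)
model = Wav2Vec2ForCTC(config)
model.train()

# Batch of 2 raw waveforms, 1 second at 16 kHz each.
input_values = torch.randn(2, 16000)
# Dummy CTC targets: token ids in [1, vocab_size); id 0 is the blank/pad token.
labels = torch.randint(1, config.vocab_size, (2, 5))

outputs = model(input_values, labels=labels)
loss = outputs.loss          # scalar CTC loss
loss.backward()              # this works here, unlike what I see in pretraining

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
optimizer.step()
```

Is there an equivalent loss I can backprop through when using the pretraining head?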