How to convert wav2vec2 checkpoint to Huggingface processor and model?

Hi, I have a fine-tuned wav2vec2 model, which I trained using the fairseq repo's CLI tools. Now I want to run inference on this model with the Hugging Face Transformers library, like in this example from here -

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

How can I do that? Any help is appreciated. Thanks.
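From what I can tell, the Transformers source tree bundles a fairseq-to-Transformers conversion script for wav2vec2. A hedged sketch of how it might be invoked (all paths below are placeholders for my own checkpoint, fairseq dictionary, and output directory; the script lives in the repo checkout, not in the pip wheel's CLI entry points):

```shell
# Conversion script shipped in the transformers repo (path relative to a clone).
# --checkpoint_path: the fairseq .pt checkpoint produced by the CLI tools
# --dict_path: the fairseq target dictionary (e.g. dict.ltr.txt) used in fine-tuning
# --pytorch_dump_folder_path: where the converted model + processor files are written
python src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
    --checkpoint_path /path/to/checkpoint_best.pt \
    --dict_path /path/to/dict.ltr.txt \
    --pytorch_dump_folder_path ./wav2vec2-converted
```

If that works, the dump folder should then load with `Wav2Vec2Processor.from_pretrained("./wav2vec2-converted")` and `Wav2Vec2ForCTC.from_pretrained("./wav2vec2-converted")` just like the hub checkpoint above.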


Did you figure this out? I am stuck on the same problem.