Wav2vec2 inference on my own model

Greetings, I have trained my own model using a fairseq setup. This process produces a dict.ltr.txt, finetuned.pt, pretrained.pt, lexicon.txt and vocab.txt. I also have a kenlm.bin, but I understand that's not yet supported.

My question is this: how do I use these files with Hugging Face? It has its own setup with JSON files etc., but I don't know how to convert my set of files into a set suitable for the Transformers library to do inference.
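For context, this is roughly the inference pattern I'm hoping to end up with, assuming the fairseq checkpoint can somehow be converted into a Hugging Face model directory first (the `model_dir` and `wav_path` arguments here are hypothetical placeholders, not files I actually have):

```python
# Sketch of the target inference flow with transformers' Wav2Vec2 classes,
# assuming a converted Hugging Face model directory already exists.
def transcribe(model_dir: str, wav_path: str) -> str:
    # Imports kept inside the function so the sketch can be read
    # even without torch/transformers/soundfile installed.
    import torch
    import soundfile as sf
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    # Load the (hypothetical) converted processor and CTC model.
    processor = Wav2Vec2Processor.from_pretrained(model_dir)
    model = Wav2Vec2ForCTC.from_pretrained(model_dir)
    model.eval()

    # wav2vec2 models are typically trained on 16 kHz mono audio.
    speech, sample_rate = sf.read(wav_path)
    inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

    with torch.no_grad():
        logits = model(inputs.input_values).logits

    # Greedy CTC decoding (no language model, so kenlm.bin is unused here).
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```

So really what I'm asking is how to get from my fairseq output files to a directory that `from_pretrained` can load like this.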

I haven't been able to find a tutorial or any other explanation of whether this is possible. Is it?