Fine-tune / retrain wav2vec2 encoder in a self-supervised manner

Hi everyone, is there a method on the Wav2Vec2ForCTC model that lets us use unlabeled audio data to fine-tune / retrain the wav2vec2 encoder in a self-supervised way? I assume this should be possible, since there are so many publicly released self-supervised pretrained models (e.g. facebook/wav2vec2-large-lv60), but I can't find any related information either on this forum or in the official documentation.
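For context, the closest thing I could spot in the transformers source is a Wav2Vec2ForPreTraining class, which appears to implement the masked contrastive objective from the wav2vec 2.0 paper. Below is a minimal sketch of what I've been trying; note that the tiny config, the random-noise "audio", and the masking hyperparameters are all placeholders of mine (a real run would load an existing checkpoint and real waveforms), so I'm not sure this is the intended workflow:

```python
# Sketch of self-supervised (pre)training with Wav2Vec2ForPreTraining.
# A tiny randomly initialized config is used here so the snippet runs
# without downloading a checkpoint; hyperparameters are placeholders.
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

config = Wav2Vec2Config(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64, conv_dim=(32, 32), conv_stride=(5, 2),
    conv_kernel=(10, 3), num_conv_pos_embeddings=16,
    num_conv_pos_embedding_groups=2, num_negatives=4,
    codevector_dim=16, proj_codevector_dim=16,
)
model = Wav2Vec2ForPreTraining(config)
model.train()

# A batch of raw 16 kHz waveforms; random noise stands in for the
# unlabeled audio data.
input_values = torch.randn(2, 16000)
batch_size, raw_len = input_values.shape
seq_len = model._get_feat_extract_output_lengths(raw_len).item()

# Mask spans of the latent features and sample distractors for the
# contrastive loss, as in the wav2vec 2.0 objective.
mask_time_indices = _compute_mask_indices(
    shape=(batch_size, seq_len), mask_prob=0.65, mask_length=2
)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(batch_size, seq_len),
    num_negatives=config.num_negatives,
    mask_time_indices=mask_time_indices,
)
mask_time_indices = torch.from_numpy(mask_time_indices).to(torch.bool)
sampled_negative_indices = torch.from_numpy(sampled_negative_indices).to(torch.long)

outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    sampled_negative_indices=sampled_negative_indices,
)
outputs.loss.backward()  # contrastive + diversity loss on unlabeled audio
```

In a real setup I imagine one would swap the random config for `Wav2Vec2ForPreTraining.from_pretrained(...)` and wrap the masking/negative-sampling steps in a data collator, but I'd appreciate confirmation that this is the supported path.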