Fine-tune BERT for Masked Language Modeling

Hello,

I have used a pre-trained BERT model from Hugging Face Transformers in a project, and I would like to know how to fine-tune BERT for masked language modeling for a task like spelling correction. The links https://github.com/huggingface/transformers/tree/master/examples/lm_finetuning and https://github.com/huggingface/transformers/blob/master/examples/lm_finetuning/pregenerate_training_data.py, which seemed to be great resources, are no longer found. I would also like to know the dataset format that BertForMaskedLM requires for training (i.e., what kind of inputs and labels should be given to the model). I would be grateful if anyone could help me in this regard.

Thanks,
Nes
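From what I understand, BertForMaskedLM expects input_ids in which some tokens have been replaced by [MASK], plus labels of the same shape holding the original token ids at the masked positions and -100 everywhere else (positions labeled -100 are ignored by the loss). Here is a minimal sketch, assuming a recent version of transformers; the model name and the hand-picked masked position are just for illustration:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "the quick brown fox jumps over the lazy dog"
encoding = tokenizer(text, return_tensors="pt")

# labels start out as a copy of the original token ids
labels = encoding["input_ids"].clone()

# mask one token by hand as an illustration; real MLM training
# masks ~15% of tokens at random
masked_index = 4  # position of "fox" after [CLS]
encoding["input_ids"][0, masked_index] = tokenizer.mask_token_id

# the loss ignores positions labeled -100, so keep the original
# id only at the masked position
labels[encoding["input_ids"] != tokenizer.mask_token_id] = -100

outputs = model(**encoding, labels=labels)
print(outputs.loss)  # cross-entropy over the masked position(s)
```

For actual fine-tuning on a corpus you would not mask by hand: DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15) builds such masked inputs and labels on the fly and can be passed to Trainer, which is roughly what the newer language-modeling example scripts do.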

Interested in this too…

It seems the “lm_finetuning” script is no longer active.
There is this: